AI coding agents using grep/ripgrep waste thousands of tokens and context on false positives. CodeGrok MCP uses AST-based semantic search with local vector embeddings.

CodeGrok MCP: Semantic Code Search That Saves AI Agents 10x in Context Usage

When you ask Claude Code, Cursor, or Windsurf "how does authentication work in this project?", here's what actually happens behind the scenes:

```
$ grep -r "authentication" src/
src/auth/login.py:42:def verify_user(username, password):
src/models.py:10:user_email = "user@example.com"
src/config.py:5:# authentication settings
src/utils.py:150:verify_user_input()
... 30+ more results, mostly noise
```

The agent then reads entire files to understand context. For a 10,000-file codebase, this means burning thousands of tokens and context per query, tokens that could be spent answering your actual question.

I built CodeGrok MCP to fix this.

What CodeGrok Actually Does

CodeGrok MCP takes a fundamentally different approach: AST-based semantic indexing that runs entirely on your machine. No cloud. No API calls. Your code never leaves your device.

Instead of searching text, CodeGrok parses code into Abstract Syntax Trees using Tree-sitter. It extracts semantic symbols (functions, classes, methods, variables) from 9 languages and 30+ file extensions:

  • Python (.py, .pyi, .pyw)
  • JavaScript (.js, .jsx, .mjs, .cjs)
  • TypeScript (.ts, .tsx, .mts, .cts)
  • C/C++ (.c, .cpp, .h, .hpp)
  • Go, Java, Kotlin, Bash

Each symbol becomes a single chunk with rich metadata. Not arbitrary line splits. Not entire files. Just the code you need.
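To show what symbol-level chunking looks like in practice, here is a minimal sketch using the py-tree-sitter bindings with the tree_sitter_python grammar package. It illustrates the technique under those assumptions; it is not CodeGrok's actual code, and the binding API varies slightly across versions:

```python
# Minimal sketch: extract top-level function/class symbols from a Python file
# with Tree-sitter. Assumes py-tree-sitter (>=0.22) and tree_sitter_python;
# illustrative only, not CodeGrok's internals.
import tree_sitter_python as tspython
from tree_sitter import Language, Parser

PY_LANGUAGE = Language(tspython.language())
parser = Parser(PY_LANGUAGE)

def extract_symbols(path: str):
    """Yield (symbol_type, name, start_line, source_text) for each top-level definition."""
    source = open(path, "rb").read()
    tree = parser.parse(source)
    for node in tree.root_node.children:
        if node.type in ("function_definition", "class_definition"):
            name_node = node.child_by_field_name("name")
            yield (
                node.type.replace("_definition", ""),            # "function" or "class"
                name_node.text.decode(),                          # symbol name
                node.start_point[0] + 1,                          # 1-based line number
                source[node.start_byte:node.end_byte].decode(),   # the symbol's own code
            )

for symbol_type, name, line, _code in extract_symbols("src/auth/login.py"):
    print(symbol_type, name, line)
```

Each yielded symbol maps naturally onto one chunk, which is exactly the granularity an agent wants back from a search.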

The Embedding Pipeline

Here's where it gets interesting. CodeGrok uses nomic-ai/CodeRankEmbed, a model specifically trained for code retrieval, to generate 768-dimensional vectors for each symbol:

```python
'coderankembed': {
    'hf_name': 'nomic-ai/CodeRankEmbed',
    'dimensions': 768,
    'max_seq_length': 8192,
    'query_prefix': 'Represent this query for searching relevant code: ',
}
```

Performance characteristics:

  • ~50 embeddings/second on CPU (faster with GPU)
  • LRU cache with 1000 entries for repeated queries
  • Incremental reindexing via mtime comparison; only changed files get re-processed
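In sketch form, the embedding step amounts to the following (assuming the sentence-transformers library, per the model's published usage; function names, batch size, and cache wiring are illustrative, not CodeGrok's internals):

```python
# Sketch of the embedding step: encode code chunks and queries with CodeRankEmbed.
# Assumes the sentence-transformers package; illustrative, not CodeGrok's implementation.
from functools import lru_cache
from sentence_transformers import SentenceTransformer

QUERY_PREFIX = "Represent this query for searching relevant code: "

model = SentenceTransformer("nomic-ai/CodeRankEmbed", trust_remote_code=True)

def embed_chunks(chunks: list[str]):
    """Embed code chunks; no prefix is needed on the document side."""
    return model.encode(chunks, batch_size=32, show_progress_bar=False)

@lru_cache(maxsize=1000)  # mirrors the 1000-entry cache for repeated queries
def embed_query(question: str):
    """Embed a natural-language query with the retrieval prefix the model expects."""
    return model.encode(QUERY_PREFIX + question)
```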

Each symbol gets formatted with everything an AI agent needs:

```
# src/auth/login.py:42
function: verify_user
def verify_user(username: str, password: str) -> bool:
Verifies user credentials against the database.

def verify_user(username: str, password: str) -> bool:
    user = db.query(User).filter_by(username=username).first()
    return check_password(password, user.password_hash)

Imports: db, check_password
Calls: db.query, check_password
```

File location, symbol type, signature, docstring, implementation, and dependencies, all in one indexed chunk.

How AI Agents Connect

CodeGrok exposes semantic search through the Model Context Protocol (MCP). If you're using Claude Desktop, Cursor, or any MCP-compatible client, integration is straightforward.
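To give a sense of what that wiring looks like, here is a minimal sketch of an MCP server exposing one tool over stdio, using FastMCP from the official Python SDK. It is a simplified illustration, not CodeGrok's actual server code, and the placeholder return values are mine:

```python
# Sketch of an MCP server exposing a tool over stdio with FastMCP.
# Simplified illustration; not CodeGrok's real server.
from typing import Any, Dict
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("codegrok")

@mcp.tool(name="get_stats")
def get_stats() -> Dict[str, Any]:
    """Return index statistics for the currently loaded codebase."""
    return {"files": 0, "symbols": 0}  # placeholder values for the sketch

if __name__ == "__main__":
    mcp.run()  # defaults to the stdio transport
```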

Four tools handle everything:

| Tool | Purpose |
|----|----|
| learn | Index a codebase (auto/full/load_only modes) |
| get_sources | Semantic search with language/symbol filters |
| get_stats | Return index statistics |
| list_supported_languages | List supported languages |

The get_sources tool is where the magic happens:

```python
@mcp.tool(name="get_sources")
def get_sources(
    question: str,            # "How does user authentication work?"
    n_results: int = 10,      # Top-k results
    language: str = None,     # Filter: "python", "javascript"
    symbol_type: str = None   # Filter: "function", "class", "method"
) -> Dict[str, Any]:
    ...
```

Query "How does authentication work?" and get:

  • src/auth/login.py:42 - verify_user()
  • src/auth/mfa.py:78 - validate_mfa_token()

No comment matches. No string literals. No config files mentioning the word "authentication." Just the functions that actually handle authentication.
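That precision comes from ranking symbol embeddings by similarity to the query embedding rather than matching text. A minimal sketch of the retrieval step with plain NumPy cosine similarity follows; CodeGrok's actual vector store and scoring may differ:

```python
# Sketch of semantic retrieval: rank indexed symbols by cosine similarity to the query.
# Illustrative only; the real storage/search layer may differ.
import numpy as np

def top_k(query_vec: np.ndarray, chunk_vecs: np.ndarray, chunks: list[dict], k: int = 10):
    """Return the k chunks whose embeddings are closest to the query embedding."""
    q = query_vec / np.linalg.norm(query_vec)
    m = chunk_vecs / np.linalg.norm(chunk_vecs, axis=1, keepdims=True)
    scores = m @ q                           # cosine similarity against every indexed symbol
    best = np.argsort(scores)[::-1][:k]
    return [{**chunks[i], "score": float(scores[i])} for i in best]

# Usage, reusing embed_query/embed_chunks from the embedding sketch above:
# results = top_k(embed_query("How does authentication work?"), chunk_vecs, chunks)
```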

The Numbers That Matter

| Aspect | Grep | CodeGrok MCP |
|----|----|----|
| Matching | Keyword/regex | Semantic similarity |
| False positives | High | Very low |
| Synonyms | ❌ "authenticate" ≠ "verify" | ✅ Understands intent |
| Metadata | None | Line #, signature, type, language |
| Token usage | Read entire files | Returns exact functions |
| Persistence | Scan every time | Pre-indexed, instant search |

For enterprises, this means code stays on-premises. For solo developers, it means no API keys, no subscriptions, and it works offline after the initial model download.

Getting Started

```
pip install codegrok-mcp
codegrok-mcp  # Starts MCP server on stdio
```

Configure your MCP client to connect. Then:

  1. learn your codebase
  2. get_sources with natural language queries
  3. Get precise code references instead of grep noise

Embeddings persist in .codegrok/ within your project directory. Subsequent indexes are near-instant because only changed files get re-processed.
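The incremental check is as simple as it sounds: compare each file's current modification time with the one recorded at the last index. A sketch of that logic follows; the state-file name and layout are hypothetical, not CodeGrok's actual format:

```python
# Sketch of mtime-based incremental reindexing: only files changed since the
# last run get re-parsed and re-embedded. Paths and format are illustrative.
import json
import os
from pathlib import Path

def files_to_reindex(root: str, state_file: str = ".codegrok/mtimes.json") -> list[str]:
    """Return source files whose mtime is newer than the recorded one."""
    try:
        recorded = json.loads(Path(state_file).read_text())
    except FileNotFoundError:
        recorded = {}
    changed = []
    for path in Path(root).rglob("*.py"):    # extend to all supported extensions
        mtime = os.path.getmtime(path)
        if recorded.get(str(path), 0) < mtime:
            changed.append(str(path))
    return changed
```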

GitHub: github.com/dondetir/CodeGrok_mcp


I'm an engineer who builds open-source AI tools through DS APPS Inc. CodeGrok MCP came from frustration with watching AI agents burn context windows on irrelevant grep results. The source is MIT licensed; contributions welcome.
