by vitali87 · MCP Server · ★ 2.3k
| Stat | Value |
| --- | --- |
| Stars | 2,285 |
| Forks | 377 |
| Language | Python |
| Category | MCP Server |
| License | MIT |
| Quality Score | 55.348/100 |
| Open Issues | 50 |
| Last Updated | 2026-03-28 |
| Created | 2025-06-16 |
| Platforms | claude-code, mcp, python |
| Est. Tokens | ~4,042k |
Looking for a code-graph-rag alternative? If you're comparing code-graph-rag with other MCP server tools, these six projects are the closest alternatives on Agent Skills Hub, ranked by topic overlap, star count, and community traction:
- High-performance code intelligence MCP server. Indexes codebases into a persistent knowledge graph …
- SimpleMem: Efficient Lifelong Memory for LLM Agents — Text & Multimodal
- A bio-inspired cognitive memory engine — a new paradigm for Graph RAG.
- Graph-powered code intelligence engine — indexes codebases into a knowledge graph, exposed via MCP tools for …
- Architectural intelligence layer for AI coding agents. Structural graph, architecture governance, multi-agent …
- Klavis AI: MCP integration platforms that let AI agents use tools reliably at any scale
code-graph-rag bills itself as "the ultimate RAG for your monorepo": it lets you query, understand, and edit multi-language codebases with the power of AI and knowledge graphs. It is categorized as an MCP Server with 2.3k GitHub stars.
code-graph-rag is primarily written in Python and covers topics such as ai, ast, and claude-code.
You can find installation instructions and usage details in the code-graph-rag GitHub repository at github.com/vitali87/code-graph-rag. The project has 2.3k stars and 377 forks, indicating an active community.
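As background, MCP servers are usually registered in a client's configuration file under an `mcpServers` key. The sketch below shows the general shape of such an entry; the `command` and `args` values are placeholders for illustration, not taken from the code-graph-rag repository, so use the exact invocation documented on GitHub.

```json
{
  "mcpServers": {
    "code-graph-rag": {
      "command": "python",
      "args": ["-m", "code_graph_rag"]
    }
  }
}
```

The client launches the listed command as a subprocess and speaks the MCP protocol to it over stdio, so the entry only needs a name and a way to start the server.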
code-graph-rag is released under the MIT license, making it free to use and modify according to the license terms.
The top alternatives to code-graph-rag on Agent Skills Hub include codebase-memory-mcp, SimpleMem, and m_flow. Each offers a different approach to the same problem space; compare them side by side by stars, quality score, and community activity.