jqassistant-graph-rag
Source code graph RAG (graphRAG) for Java/Kotlin development based on jQAssistant
claude mcp add --transport stdio 2015xli-jqassistant-graph-rag python main.py \
  --env LLM_API="optional: openai|deepseek|ollama|fake" \
  --env NEO4J_URI="bolt://localhost:7687" \
  --env NEO4J_PASSWORD="password" \
  --env NEO4J_USERNAME="neo4j"
How to use
This MCP server provides a Python-based graph enrichment and RAG (retrieval-augmented generation) workflow for a jQAssistant-derived Neo4j graph. It enriches the compiled Java/Kotlin graph with source structure and then uses an LLM to generate multi-level, context-aware summaries for methods, types, files, packages, and the entire project.

The script runs in two modes: (1) enrichment only, which adds semantic properties without querying an LLM, and (2) enrichment plus summary generation, which produces AI-ready descriptions suitable for agents and documentation.

To use it, first ensure you have a running Neo4j instance populated with a graph generated by jQAssistant, then run the Python script to perform enrichment and optional summarization. You can specify which LLM to use (openai, deepseek, ollama, or a fake tester) and disable real API calls by choosing the fake option for testing.
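The provider switch described above can be sketched as a small validation helper. This is illustrative only: the option names come from the README's --llm-api flag and LLM_API environment variable, but the function name and fallback order are assumptions, not the project's actual code.

```python
import os

# Providers accepted by the --llm-api flag, per the README.
SUPPORTED_LLM_APIS = {"openai", "deepseek", "ollama", "fake"}

def resolve_llm_api(cli_value=None):
    """Resolve the LLM provider: prefer the CLI flag, then the LLM_API
    environment variable, then fall back to 'fake' so no real API is
    called by accident. Raises ValueError for an unknown provider."""
    choice = (cli_value or os.environ.get("LLM_API") or "fake").lower()
    if choice not in SUPPORTED_LLM_APIS:
        raise ValueError(
            f"Unsupported LLM API: {choice!r}; "
            f"expected one of {sorted(SUPPORTED_LLM_APIS)}"
        )
    return choice
```

Defaulting to the fake provider keeps development and CI runs free of token costs; switch to a real provider only when you want usable summaries.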
How to install
Prerequisites:
- Python 3.13 or higher
- A running Neo4j instance with a graph generated by jQAssistant
- pip and virtual environment support
Step-by-step installation:
- Clone or download the repository containing main.py and requirements.txt
- Create and activate a Python virtual environment:
- python3 -m venv venv
- source venv/bin/activate (Linux/macOS) or venv\Scripts\activate (Windows)
- Install dependencies:
- pip install -r requirements.txt
- Configure environment variables (optional but recommended) and ensure Neo4j credentials are accessible:
- Set NEO4J_URI, NEO4J_USERNAME, NEO4J_PASSWORD as needed
- Run the script in enrichment mode (no summary generation):
- python main.py
- To run with summary generation (using an LLM):
- python main.py --generate-summary --llm-api <openai|deepseek|ollama|fake>
Notes:
- If you want to test without making real LLM calls, use --llm-api fake or enable a local Ollama instance if you have one running.
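Before running the script, you can sanity-check that the jQAssistant scan actually populated the database. A minimal Cypher probe is sketched below; the :Type and :Method labels and the DECLARES relationship are created by jQAssistant's Java plugin, but verify them against your own scan if you use a customized setup.

```cypher
// Count scanned Java types and their declared methods; counts of zero
// mean the jQAssistant scan has not been imported into this database.
MATCH (t:Type)
OPTIONAL MATCH (t)-[:DECLARES]->(m:Method)
RETURN count(DISTINCT t) AS types, count(m) AS methods;
```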
Additional notes
Tips and common considerations:
- Ensure the jQAssistant-generated graph has been imported into Neo4j and that the bolt connection is correctly configured.
- The tool supports partial runs: run enrichment first to verify graph changes, then run with --generate-summary to produce RAG summaries.
- When using real LLM APIs, monitor token usage and costs; enable the fake mode during development to iterate quickly.
- If you encounter connectivity issues with Neo4j, verify network access, credentials, and that the Neo4j port (default 7687) is open.
- You can set the LLM selection via --llm-api to switch between providers as needed.
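For the connectivity check mentioned above, a stdlib-only probe of the bolt port can quickly separate network problems from credential problems. This is a sketch under assumptions: the host and port defaults are placeholders, and it tests TCP reachability only, not authentication.

```python
import socket

def bolt_port_open(host="localhost", port=7687, timeout=2.0):
    """Return True if a TCP connection to the Neo4j bolt port succeeds.
    A False result points at network/firewall issues; a True result with
    login failures points at credentials instead."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

If this returns False, check that Neo4j is running and that port 7687 is open before debugging NEO4J_USERNAME or NEO4J_PASSWORD.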