
jqassistant-graph-rag

Source code graph RAG (graphRAG) for Java/Kotlin development based on jQAssistant

Installation
Run this command in your terminal to add the MCP server to Claude Code.
claude mcp add --transport stdio 2015xli-jqassistant-graph-rag python main.py \
  --env LLM_API="optional: openai|deepseek|ollama|fake" \
  --env NEO4J_URI="bolt://localhost:7687" \
  --env NEO4J_PASSWORD="password" \
  --env NEO4J_USERNAME="neo4j"

How to use

This MCP server provides a Python-based graph-enrichment and RAG (retrieval-augmented generation) workflow for a jQAssistant-derived Neo4j graph. It enriches the compiled Java/Kotlin graph with source structure and then uses an LLM to generate multi-level, context-aware summaries for methods, types, files, packages, and the entire project.

The script runs in two modes: (1) enrichment only, which adds semantic properties without querying an LLM, and (2) enrichment plus summary generation, which produces AI-ready descriptions suitable for agents and documentation.

To use it, first ensure you have a running Neo4j instance populated with a graph generated by jQAssistant, then run the Python script to perform enrichment and optional summarization. You can choose which LLM to use (openai, deepseek, ollama, or a fake tester) and disable real API calls by selecting the fake option for testing.
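The provider selection described above (openai, deepseek, ollama, or a fake tester) can be sketched as a simple lookup. This is a hypothetical illustration, not the project's actual code: the class and function names (`FakeLLM`, `choose_llm`, `summarize`) are assumptions; only the provider names come from the docs.

```python
class FakeLLM:
    """Test double: returns canned text instead of calling a real API."""
    def summarize(self, code: str) -> str:
        return f"[fake summary of {len(code)} chars]"


class OllamaLLM:
    """Placeholder for a client backed by a local Ollama instance."""
    def summarize(self, code: str) -> str:
        raise NotImplementedError("would call a local Ollama instance")


def choose_llm(api: str):
    """Map an --llm-api value to a client; 'fake' avoids real API calls."""
    providers = {
        "fake": FakeLLM,
        "ollama": OllamaLLM,
        # "openai" and "deepseek" would map to their respective clients here
    }
    try:
        return providers[api]()
    except KeyError:
        raise ValueError(f"unknown --llm-api value: {api!r}")


llm = choose_llm("fake")
print(llm.summarize("public class Foo {}"))
```

Using the fake provider during development keeps iteration fast and free; switching to a real provider is then a one-flag change.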

How to install

Prerequisites:

  • Python 3.13 or higher
  • A running Neo4j instance with a graph generated by jQAssistant
  • pip and virtual environment support

Step-by-step installation:

  1. Clone or download the repository containing main.py and requirements.txt
  2. Create and activate a Python virtual environment:
    • python3 -m venv venv
    • source venv/bin/activate (Linux/macOS) or venv\Scripts\activate (Windows)
  3. Install dependencies:
    • pip install -r requirements.txt
  4. Configure environment variables (optional but recommended) and ensure Neo4j credentials are accessible:
    • Set NEO4J_URI, NEO4J_USERNAME, NEO4J_PASSWORD as needed
  5. Run the script in enrichment mode (no summary generation):
    • python main.py
  6. To run with summary generation (using an LLM):
    • python main.py --generate-summary --llm-api <openai|deepseek|ollama|fake>
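Steps 4–6 above can be sketched as a minimal entry point. The flag names, environment variables, and defaults mirror the install command and steps shown here, but the parsing code itself is an illustration, not the project's actual main.py:

```python
import argparse
import os

# Flags from steps 5-6; choices match the documented providers.
parser = argparse.ArgumentParser(description="enrichment / summary runner (sketch)")
parser.add_argument("--generate-summary", action="store_true",
                    help="also generate LLM summaries (step 6)")
parser.add_argument("--llm-api", choices=["openai", "deepseek", "ollama", "fake"],
                    default="fake", help="which LLM provider to use")

# Parsing a fixed argv here for demonstration; normally parse_args() reads sys.argv.
args = parser.parse_args(["--generate-summary", "--llm-api", "fake"])

# Neo4j settings from step 4; defaults mirror the install command.
NEO4J_URI = os.environ.get("NEO4J_URI", "bolt://localhost:7687")
NEO4J_USERNAME = os.environ.get("NEO4J_USERNAME", "neo4j")
NEO4J_PASSWORD = os.environ.get("NEO4J_PASSWORD", "password")

print(args.generate_summary, args.llm_api, NEO4J_URI)
```

Reading credentials from the environment with sensible defaults keeps the enrichment-only and summary-generation invocations identical apart from the flags.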

Notes:

  • If you want to test without making real LLM calls, use --llm-api fake or enable a local Ollama instance if you have one running.

Additional notes

Tips and common considerations:

  • Ensure the jQAssistant-generated graph is accessible by Neo4j and that the bolt connection is correctly configured.
  • The tool supports partial runs: run enrichment first to verify graph changes, then run with --generate-summary to produce RAG summaries.
  • When using real LLM APIs, monitor token usage and costs; enable the fake mode during development to iterate quickly.
  • If you encounter connectivity issues with Neo4j, verify network access, credentials, and that the Neo4j port (default 7687) is open.
  • You can set the LLM selection via --llm-api to switch between providers as needed.
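As a quick pre-flight for the connectivity issues mentioned above, you can parse the bolt URI and probe the port before running the full enrichment. This is a generic sketch using only the standard library; the helper names are illustrative, not part of the tool:

```python
import socket
from urllib.parse import urlparse


def parse_bolt(uri: str) -> tuple[str, int]:
    """Split a bolt:// URI into (host, port), defaulting to Neo4j's 7687."""
    parsed = urlparse(uri)
    return parsed.hostname or "localhost", parsed.port or 7687


def check_bolt_endpoint(uri: str, timeout: float = 2.0) -> bool:
    """Return True if the TCP port behind the bolt URI accepts connections."""
    host, port = parse_bolt(uri)
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


host, port = parse_bolt("bolt://localhost:7687")
print(host, port)
```

If the probe fails, check credentials and firewall rules before suspecting the enrichment script itself.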
