kgrag_mcp_server
KGrag MCP Server is a modular system for managing, ingesting, and querying structured and unstructured data, with applications in knowledge graphs, AI, and information-flow automation.
To register the server with Claude Code (note the `--` separating the server name from the launch command, so the `-d` flag is passed through):
claude mcp add --transport stdio gzileni-kgrag_mcp_server -- docker compose up -d
How to use
KGrag MCP Server implements the Model Context Protocol (MCP) to manage, ingest, and query structured and unstructured data within a knowledge graph. The stack is designed to integrate with Neo4j for graph storage, AWS S3 for object storage, Redis for caching, and a vector search component (such as Qdrant) for embedding-based queries, along with large language models for reasoning and enrichment. The project is distributed as containerized services orchestrated with Docker Compose to enable end-to-end data pipelines, semantic enrichment, and advanced querying capabilities. You can use the provided tools to ingest content into the graph and then query it to retrieve answers that reflect the relationships and context across your data.
Two core tools are exposed: ingestion and query. The ingestion tool takes a filesystem path to a document and ingests it into the knowledge graph, triggering any enrichment or parsing workflows configured in the pipeline. The query tool interfaces with the knowledge graph to return answers based on stored documents, metadata, and relationships. In an agent-based workflow (for example, using GitHub Copilot in VSCode), you can configure an mcp.json file to describe a server endpoint and its type (such as SSE) and let Copilot generate ingestion and querying code, including error handling and batching. A typical server endpoint in this setup is http://localhost:8000/sse, which can be used as the connection point for agent-based workflows and client integrations.
In practice, you would run the server via Docker Compose, then use the ingestion tool to push documents into the graph and the query tool to retrieve contextually rich answers from the Knowledge Graph.
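As a sketch of the agent configuration described above, the snippet below writes a minimal mcp.json pointing at the SSE endpoint. The `servers`/`type`/`url` layout follows the common VS Code MCP configuration shape, and the server name `kgrag` is a hypothetical label; check both against your client's documentation before use.

```shell
# Sketch: write a minimal mcp.json for an SSE-based MCP server.
# Assumptions: VS Code-style "servers" schema; server name "kgrag" is
# a hypothetical label; endpoint is the default from this stack.
mkdir -p .vscode
cat > .vscode/mcp.json <<'EOF'
{
  "servers": {
    "kgrag": {
      "type": "sse",
      "url": "http://localhost:8000/sse"
    }
  }
}
EOF
```

With this file in place, an agent client that understands SSE-type MCP servers can use http://localhost:8000/sse as its connection point.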
How to install
Prerequisites:
- Docker and Docker Compose installed on your machine
- Git (optional, for cloning the repository)
- Basic familiarity with command-line tools and container workflows
Installation steps:
1. Clone or download the repository:
   git clone https://github.com/gzileni/kgrag_mcp_server.git
   cd kgrag_mcp_server
2. Ensure the Docker daemon is running on your system.
3. Start the MCP server stack using Docker Compose:
   docker compose up -d
4. Verify the services are up and listening. You should be able to access the MCP endpoints, for example the SSE endpoint at http://localhost:8000/sse (as referenced in the example configuration).
5. (Optional) If you need to customize environment variables or persistent storage, create a .env file or modify docker-compose.yml accordingly, then restart the stack:
   docker compose down
   docker compose up -d
6. When done, you can stop the services with:
   docker compose down
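The verification step above can be scripted as a small reachability probe. This is a sketch that assumes curl is installed and the stack exposes the default SSE endpoint on port 8000; a healthy server typically answers the initial HTTP request with status 200 before streaming events.

```shell
#!/bin/sh
# Sketch: probe the MCP SSE endpoint and report whether it is reachable.
# Assumes the default endpoint from the compose stack; override via $1.
check_sse() {
  url="${1:-http://localhost:8000/sse}"
  # -m 5: time out after 5s so the open SSE stream does not block forever;
  # -w prints the HTTP status code even when the transfer times out.
  code=$(curl -s -m 5 -o /dev/null -w '%{http_code}' "$url" 2>/dev/null)
  if [ "$code" = "200" ]; then
    echo "up: $url"
  else
    echo "down: $url (HTTP $code)"
  fi
}
check_sse "$@"
```

Run it after `docker compose up -d`; if it reports the endpoint as down, inspect `docker compose logs` for startup errors.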
Additional notes
- The project emphasizes containerized orchestration via Docker Compose, so the primary run method is docker compose up. If you adapt to a different environment, ensure all dependencies (Neo4j, S3 credentials, Redis, Qdrant, and LLM providers) are properly configured and accessible from the container network.
- Ensure that environment variables for storage credentials, database connections, and API keys are provided if you customize the compose stack (for example, AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, NEO4J_URI, REDIS_URL, QDRANT_API_KEY, etc.).
- The example mcp.json in the documentation shows an SSE-based server endpoint; adapt your client or agent tooling to the endpoint type you deploy. If you modify ports or endpoints, update the mcp.json accordingly.
- If you encounter issues with ingestion or query performance, verify that the underlying services (graph database, vector store, and LLM) are healthy and reachable from the containers. Logs (docker compose logs) can help diagnose connectivity or authentication problems.
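The environment variables mentioned above might be collected in a .env file along these lines. Every value here is a placeholder, and the hostnames assume the services run on the same Compose network; verify the exact variable names expected by the stack against the repository's docker-compose.yml.

```shell
# Sketch: example .env with placeholder values (variable names taken from
# the notes above; confirm against docker-compose.yml before relying on them).
cat > .env <<'EOF'
AWS_ACCESS_KEY_ID=your-access-key
AWS_SECRET_ACCESS_KEY=your-secret-key
NEO4J_URI=bolt://neo4j:7687
REDIS_URL=redis://redis:6379/0
QDRANT_API_KEY=your-qdrant-key
EOF
```

After editing .env, apply the changes with `docker compose down && docker compose up -d`.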
Related MCP Servers
langchain-adapters
LangChain 🔌 MCP
evo-ai
Evo AI is an open-source platform for creating and managing AI agents, enabling integration with different AI models and services.
ReActMCP
ReActMCP is a reactive MCP client that empowers AI assistants to instantly respond with real-time, Markdown-formatted web search insights powered by the Exa API.
pfsense
pfSense MCP Server enables security administrators to manage their pfSense firewalls using natural language through AI assistants like Claude Desktop. Simply ask "Show me blocked IPs" or "Run a PCI compliance check" instead of navigating complex interfaces. Supports REST/XML-RPC/SSH connections, and includes built-in complian
mattermost-host
A Mattermost integration that connects to Model Context Protocol (MCP) servers, leveraging a LangGraph-based Agent.
MCP-MultiServer-Interoperable-Agent2Agent-LangGraph-AI-System
This project demonstrates a decoupled real-time agent architecture that connects LangGraph agents to remote tools served by custom MCP (Modular Command Protocol) servers. The architecture enables a flexible and scalable multi-agent system where each tool can be hosted independently (via SSE or STDIO), offering modularity and cloud-deployable execut