RiMCP_hybrid
Rimworld Coding RAG MCP server
claude mcp add --transport stdio h7lu-rimcp_hybrid -- dotnet run --project ./src/RimWorldCodeRag/RimWorldCodeRag.csproj index
How to use
RiMCP_hybrid exposes a modular search and navigation service over RimWorld source code and Def XML definitions. It combines lexical search, semantic embeddings, and a graph-based navigation layer to provide structured, source-backed responses that an AI assistant can call. The server can be used either as a command-line tool for offline testing or as an MCP server that integrates with client assistants (e.g., Claude Desktop, VS Code Copilot) to perform index construction, mixed retrieval, and graph queries. Typical use involves building or updating the index, generating embeddings (optionally via a remote embedding API), and then issuing navigation or search requests through the server’s JSON-RPC interface.
You’ll have access to four core tooling pathways: rough-search for lexical filtering, get-item to fetch full source blocks by symbol, get-uses and get-used-by to traverse dependency relationships, and index-based commands to construct or refresh the inverted index, embeddings, and graph relations. The workflow is designed to minimize noisy returns by first narrowing down candidates with a fast lexical stage, then applying semantic similarity on a smaller candidate set, and finally filtering through the graph to locate the exact blocks or definitions you need. For large-scale usage, you can enable a persistent embedding server or remote embeddings via an API key and model name to connect to external embedding services when needed.
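The staged narrowing described above (fast lexical filter first, semantic similarity on the smaller candidate set second) can be sketched in a few lines. This is an illustrative toy, not the server's actual implementation; the function names, document shapes, and scoring are assumptions.

```python
# Toy sketch of the lexical -> semantic narrowing pipeline described above.
# Names and data shapes are hypothetical, not the server's real API.
import math

def lexical_filter(query_terms, documents):
    """Stage 1: keep only documents containing at least one query term."""
    return [d for d in documents if any(t in d["text"].lower() for t in query_terms)]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def semantic_rank(query_vec, candidates):
    """Stage 2: re-rank the narrowed candidates by embedding similarity."""
    return sorted(candidates, key=lambda d: cosine(query_vec, d["vec"]), reverse=True)

docs = [
    {"text": "Pawn health tracker", "vec": [1.0, 0.0]},
    {"text": "ThingDef XML parsing", "vec": [0.0, 1.0]},
    {"text": "Pawn pathfinding grid", "vec": [0.9, 0.1]},
]
candidates = lexical_filter(["pawn"], docs)   # drops the unrelated XML doc
ranked = semantic_rank([1.0, 0.0], candidates)
print([d["text"] for d in ranked])
```

The point of the two stages is cost: the lexical pass is cheap and shrinks the set, so the more expensive embedding comparison only runs on a handful of candidates.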
To use the MCP server, launch it via the configured command, then send JSON-RPC requests directly or use the provided CLI paths to build the index, start embedding servers, and query for symbols or definitions. Results are returned in structured form: source blocks with optional line limits, or navigational paths across the graph of code elements.
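As a rough sketch, an MCP client's JSON-RPC call to one of the tools might be framed like this. The tool name `rough-search` comes from the tool list above, but the argument names (`query`, `limit`) are assumptions; check the server's actual tool schema.

```python
# Hedged sketch of a JSON-RPC 2.0 tools/call request as an MCP client
# would send it over the stdio transport (one JSON object per line).
# The argument names below are illustrative, not confirmed parameters.
import json

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "rough-search",
        "arguments": {"query": "PawnHealthTracker", "limit": 20},
    },
}

line = json.dumps(request)  # newline-delimited JSON on the server's stdin
print(line)
```

In practice an MCP host (Claude Desktop, VS Code Copilot) builds and frames these messages for you; the sketch is only to show what travels over the wire.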
How to install
Prerequisites:
- .NET 8.0 SDK
- Python 3.9+ (optional for local embedding server or external embedding calls)
- RimWorld game data and source definitions placed as described in README (RimWorldData directory for Defs and C# sources)
- Basic shell or PowerShell access depending on OS
Installation steps:
- Clone or download the repository containing RiMCP_hybrid.
- Ensure prerequisites are installed:
  - Install .NET 8.0 SDK from https://dotnet.microsoft.com/download
  - Install Python 3.9+ from https://python.org
- Prepare data as described in README:
  - Copy RimWorld Def data to RimWorldData
  - Extract or generate C# sources as needed for indexing
- Build and run the index tooling:
  - On Windows: open a shell, run cd src\RimWorldCodeRag, then dotnet run -- index --root "..\..\RimWorldData"
  - On Unix-like systems: adapt paths to your environment, e.g. cd src/RimWorldCodeRag, then dotnet run -- index --root "../../RimWorldData"
- (Optional) Start a persistent embedding server or connect to a remote embedding API as described in the README, e.g.:
  - Local embedding server: with a server listening on http://127.0.0.1:5000, run embedding-enabled indexing: dotnet run -- index --root "../../RimWorldData" --embedding-server "http://127.0.0.1:5000"
  - Remote embeddings with an API key and model name: dotnet run -- index --root "../../RimWorldData" --embedding-server "https://api.openai.com/v1/embeddings" --api-key "sk-..." --model-name "text-embedding-3-small"
- To run the MCP server, configure mcp_config as described and start the service via your chosen host (the repository's README provides examples for direct command usage).
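For MCP hosts that read a JSON configuration file (for example, Claude Desktop's claude_desktop_config.json), an entry along these lines should work; the server key and project path here are illustrative and must match your checkout:

```json
{
  "mcpServers": {
    "rimcp-hybrid": {
      "command": "dotnet",
      "args": ["run", "--project", "./src/RimWorldCodeRag/RimWorldCodeRag.csproj", "index"]
    }
  }
}
```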
Additional notes
Tips and common considerations:
- Embeddings: You can run a local embedding server to reduce cold starts, or switch to a remote embedding API by providing --api-key and --model-name. Adjust the batch size for embedding to fit your GPU/VRAM constraints (e.g., 128–512 depending on hardware).
- Indexing strategy: The project supports incremental updates but may require a full rebuild when index schemas change. Use --force with specific targets (lucene, embed, graph) to control what gets rebuilt.
- Performance knobs: If running on limited hardware, consider disabling embeddings or reducing batch sizes to keep latency predictable. Faiss-based full-vector search is intended for large-scale retrieval when properly provisioned.
- Graph weighting: The system assigns heuristic edge weights and pagerank-based importance scores to the graph. For production use, consider tuning these weights or replacing them with a learned policy if you have feedback data.
- Networking: If you opt for a remote embedding API, ensure your environment variables (API keys) are secured and not logged by the server. Use environment variables for sensitive data in deployment scenarios.
- Troubleshooting: If you see excessive results in get-uses/get-used-by, consider refining relationships extraction or applying edge-priority weighting. The README notes a past issue where overly broad edges caused excessive outputs; a lightweight result-sorting and pagination strategy helps maintain predictable responses.
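To illustrate the pagerank-based importance mentioned in the graph-weighting note, here is a minimal power-iteration PageRank over a toy dependency graph. This is a generic textbook sketch under assumed node and edge names, not the server's actual weighting code.

```python
# Minimal power-iteration PageRank over a toy code-dependency graph.
# Illustrative only; node/edge names are made up for the example.
def pagerank(edges, nodes, damping=0.85, iters=50):
    rank = {n: 1.0 / len(nodes) for n in nodes}
    out = {n: [b for a, b in edges if a == n] for n in nodes}
    for _ in range(iters):
        new = {n: (1 - damping) / len(nodes) for n in nodes}
        for n in nodes:
            targets = out[n] or nodes  # dangling nodes spread rank everywhere
            share = damping * rank[n] / len(targets)
            for t in targets:
                new[t] += share
        rank = new
    return rank

# Toy graph: two classes both reference "Pawn", so it should rank highest.
nodes = ["Pawn", "PawnHealthTracker", "HediffSet"]
edges = [
    ("PawnHealthTracker", "Pawn"),
    ("HediffSet", "Pawn"),
    ("Pawn", "PawnHealthTracker"),
]
ranks = pagerank(edges, nodes)
print(max(ranks, key=ranks.get))  # the most-referenced symbol wins
```

Ranking heavily-referenced symbols higher is one reasonable way to prioritize edges when get-uses/get-used-by would otherwise return too much; the heuristic weights mentioned above could be tuned on top of such scores.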