
v-mcp-aggregator

This is a simple MCP server that discovers the tools exposed by multiple MCP servers and stores their metadata in a vector database, so that LLMs can quickly search all available MCP tools as one aggregate capability and call them in a dependent (chained) way, improving the performance of the overall MCP system.

Installation
Run this command in your terminal to add the MCP server to Claude Code:
claude mcp add --transport stdio virajsharma2000-v-mcp-aggregator-server uvicorn aggregator_server:app --port 8090 \
  --env MCP_SERVERS="http://localhost:8081,http://localhost:8082" \
  --env PINECONE_API_KEY="your-pinecone-key" \
  --env PINECONE_INDEX_NAME="mcp-tools" \
  --env PINECONE_ENVIRONMENT="your-env-name"

How to use

This MCP server acts as an aggregator: it pulls tools from other MCP servers (for example, gotta-catch-em-all) and uses Pinecone as a central brain to store tool metadata. When you send a query, it searches the Pinecone index to identify the best tool for that query and then executes that tool to return results. The server exposes an endpoint for tool discovery and execution, and it can also be invoked by a client or an LLM through a standardized tool interface.

You can interact with it in two ways. First, send a natural-language search query (for example, by curling the endpoint), and the system will select and run the best-matching tool from the connected MCP servers. Second, have an automated agent (an LLM) call a specific tool by providing a JSON payload with the tool name and arguments; the aggregator executes the tool and returns its result. This design lets you chain multiple tools and use Pinecone as a knowledge base to pick the best match for each request.
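The search-then-execute flow described above can be sketched as follows. This is a minimal illustration, not the aggregator's actual code: a toy in-memory index with word-overlap scoring stands in for Pinecone's vector search, and the tool names and descriptions are invented for the example.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    name: str
    description: str
    run: Callable[..., str]

# Stand-ins for tools discovered from the connected MCP servers.
TOOLS = [
    Tool("get_weather", "fetch the current weather for a city",
         lambda city: f"weather for {city}"),
    Tool("get_news", "fetch latest news headlines",
         lambda topic: f"news about {topic}"),
]

def score(query: str, description: str) -> int:
    # Toy stand-in for vector similarity: count overlapping words.
    return len(set(query.lower().split()) & set(description.lower().split()))

def search_and_dispatch(query: str, **kwargs) -> str:
    # Pick the best-matching tool for the query, then execute it.
    best = max(TOOLS, key=lambda t: score(query, t.description))
    return best.run(**kwargs)

print(search_and_dispatch("what is the weather in delhi", city="delhi"))
# → weather for delhi
```

In the real server, `score` is replaced by a Pinecone similarity search over embedded tool descriptions, and `run` dispatches the call to whichever upstream MCP server registered the tool.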

How to install

Prerequisites:

  • Python 3.11+ installed on your system
  • Internet access to install dependencies
  • Basic familiarity with running commands in a shell

Step-by-step installation:

  1. Create and activate a Python virtual environment (recommended):

     uv venv .venv
     source .venv/bin/activate

  2. Install dependencies listed in the project manifest (pyproject.toml):

     uv pip install -r pyproject.toml

  3. Create a .env file with the required configuration (example values shown):

     MCP_SERVERS=http://localhost:8081,http://localhost:8082
     PINECONE_API_KEY=your-pinecone-key
     PINECONE_ENVIRONMENT=your-env-name
     PINECONE_INDEX_NAME=mcp-tools

  4. Run the server (as shown in the project README):

     uvicorn aggregator_server:app --port 8090

  5. Verify it's running by hitting the endpoint (example; note the URL-encoded query):

     curl "http://localhost:8090/tools/mcp_aggregator?search=what+is+the+weather+in+delhi"
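Queries with spaces or punctuation must be URL-encoded before they go on the query string. A quick way to build a valid search URL with Python's standard library (the endpoint path matches the curl example above; the base URL assumes the default port 8090):

```python
from urllib.parse import urlencode

BASE = "http://localhost:8090/tools/mcp_aggregator"

def search_url(query: str) -> str:
    # urlencode percent-escapes spaces and punctuation for the query string.
    return f"{BASE}?{urlencode({'search': query})}"

print(search_url("what is the weather in delhi?"))
# → http://localhost:8090/tools/mcp_aggregator?search=what+is+the+weather+in+delhi%3F
```

The printed URL can be passed straight to curl in place of hand-encoding the query.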

Notes:

  • Ensure your MCP_SERVERS endpoints are reachable from where you run the aggregator.
  • The Pinecone keys and environment must be valid for the index to be accessible.

Additional notes

Tips and common issues:

  • Ensure your Pinecone index (mcp-tools) exists and that the API key and environment values are correct; misconfigurations here will prevent tool discovery.
  • If you see connection errors to MCP servers, verify their base URLs are correct and that they are running.
  • The environment variables can be adjusted via your hosting platform or .env file; keep MCP_SERVERS, PINECONE_API_KEY, PINECONE_ENVIRONMENT, and PINECONE_INDEX_NAME in sync with your infrastructure.
  • When testing with the example curl, you may need to adjust port numbers if you’re deploying on a different host or port.
  • The aggregator relies on other MCP tools; ensure those servers are accessible and compatible with the tool interface used by the aggregator.
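One way to check the points above before starting the aggregator is to parse the comma-separated MCP_SERVERS value and probe each base URL. This is a hedged sketch using only the standard library; the environment variable name comes from the configuration above, but the health-check logic (treating any HTTP response as "up") is an assumption, not the aggregator's own behavior:

```python
import os
import urllib.error
import urllib.request

def parse_servers(value: str) -> list[str]:
    # Split the comma-separated MCP_SERVERS value and drop blank entries.
    return [s.strip() for s in value.split(",") if s.strip()]

def is_reachable(url: str, timeout: float = 2.0) -> bool:
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except urllib.error.HTTPError:
        return True   # server responded with an error status, but it is up
    except OSError:
        return False  # connection refused, DNS failure, timeout, ...

for server in parse_servers(os.environ.get("MCP_SERVERS", "")):
    status = "up" if is_reachable(server) else "unreachable"
    print(f"{server}: {status}")
```

Run this from the same host (and network namespace) as the aggregator, since reachability from your laptop does not guarantee reachability from where the server actually runs.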
