
ToolsFilter

Fetch only the tools relevant to the current conversation, reducing token cost while increasing the precision of your LLM's responses.

Installation
Run this command in your terminal to add the MCP server to Claude Code:
claude mcp add --transport stdio oppieai-toolsfilter uvx oppieai-toolsfilter \
  --env REDIS_URL="redis://localhost:6379" \
  --env QDRANT_URL="http://localhost:6333" \
  --env OPENAI_API_KEY="<your-openai-api-key>" \
  --env QDRANT_API_KEY="<optional-api-key-if-needed>"

How to use

ToolsFilter is a precision-driven MCP server that filters a large MCP tool set to the most relevant tools for the current conversation. Built around a multi-stage search pipeline, it combines semantic similarity, traditional BM25 ranking, cross-encoder reranking, and a learning-to-rank (LTR) model to surface a small, high-quality set of tools. The server exposes an API compatible with MCP tooling conventions and OpenAI function calling, allowing LLMs to query for the exact tools needed without loading an overabundance of options. You can expect to retrieve the 3–5 most relevant tools for a given chat context, reducing token costs and improving response precision. The system relies on a vector store (Qdrant), embeddings providers, and caching layers to deliver fast results while maintaining high recall for relevant tools.

Typical use cases include:

  • Given a user query or conversation context, ToolsFilter returns a ranked subset of the MCP tools most likely to be useful.
  • Integrate the response with your LLM to request those tools via the MCP protocol.
  • Tune the retrieval strategy (semantic_only, hybrid_basic, hybrid_cross_encoder, hybrid_ltr_full) to balance speed and accuracy for your workload.
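To make the hybrid idea concrete, here is a minimal, self-contained sketch of fusing a semantic-similarity score with a lexical-overlap score and keeping the top-k tools. This is illustrative only — the tool names, vectors, and the `alpha` weighting are invented for the example, and ToolsFilter's actual pipeline (BM25, cross-encoder reranking, LTR) is more sophisticated:

```python
import math

def cosine(a, b):
    """Cosine similarity between two dense vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def hybrid_rank(query_vec, query_terms, tools, alpha=0.6, k=3):
    """Score each tool by alpha * semantic + (1 - alpha) * lexical overlap."""
    ranked = []
    for name, (vec, description) in tools.items():
        sem = cosine(query_vec, vec)
        terms = description.lower().split()
        lex = sum(terms.count(t) for t in query_terms) / max(len(terms), 1)
        ranked.append((alpha * sem + (1 - alpha) * lex, name))
    return [name for _, name in sorted(ranked, reverse=True)[:k]]

# Toy registry: tool name -> (embedding, description). All values invented.
tools = {
    "web_search":  ([0.9, 0.1], "search the web for pages"),
    "calculator":  ([0.1, 0.9], "evaluate arithmetic expressions"),
    "file_reader": ([0.5, 0.5], "read files from disk"),
}
top = hybrid_rank([0.8, 0.2], ["search", "web"], tools, k=2)
# top[0] is "web_search": it wins on both semantic and lexical signals.
```

Combining both signals is what lets hybrid strategies beat pure semantic search on queries with exact keyword matches, at the cost of extra compute.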

How to install

Prerequisites:

  • Python 3.8+ and pip
  • A running Redis instance
  • A running Qdrant vector database (or a compatible vector store) with the tool embedding indices
  • Access to an embedding provider (Voyage AI, OpenAI, or Cohere) depending on configuration

Installation steps:

  1. Create and activate a virtual environment:

     python -m venv venv
     source venv/bin/activate   # on macOS/Linux
     venv\Scripts\activate      # on Windows

  2. Install the ToolsFilter package (from PyPI or a source checkout):

     pip install oppieai-toolsfilter
     # or: pip install -e .   (if you have the repo checked out)

  3. Install and start supporting services (in separate terminals or as a managed service)

    • Start Redis (e.g., redis-server)
    • Start Qdrant (e.g., docker run -p 6333:6333 qdrant/qdrant:latest)
  4. Configure the environment variables shown in the installation command above (REDIS_URL, QDRANT_URL, OPENAI_API_KEY, and optionally QDRANT_API_KEY).

  5. Run the MCP server via the chosen runner (see mcp_config for uvx usage):

     uvx oppieai-toolsfilter
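The server reads its configuration from the environment variables listed in the install command at the top of this page. As an illustrative sketch (the defaults and the check below are examples, not ToolsFilter's actual settings loader):

```python
import os

# Variable names match the install command in this guide; the defaults
# and the validation are illustrative, not the package's real loader.
def load_config(env=os.environ):
    config = {
        "redis_url": env.get("REDIS_URL", "redis://localhost:6379"),
        "qdrant_url": env.get("QDRANT_URL", "http://localhost:6333"),
        "openai_api_key": env.get("OPENAI_API_KEY"),   # required
        "qdrant_api_key": env.get("QDRANT_API_KEY"),   # optional
    }
    if not config["openai_api_key"]:
        raise RuntimeError("OPENAI_API_KEY must be set")
    return config

cfg = load_config({"OPENAI_API_KEY": "sk-test"})  # unset vars fall back to defaults
```

Failing fast on a missing API key at startup is cheaper than discovering it on the first embedding request.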

Notes:

  • If you prefer development mode, clone the repository and run the package directly with uvicorn-like tooling provided by your environment.
  • Ensure your embeddings provider is accessible and configured in the application settings.

Additional notes

Tips and common issues:

  • If you see latency during initial tool index loading, ensure Redis and Qdrant are reachable and that tool embeddings indices are loaded.
  • When switching search strategies (semantic_only, hybrid_basic, hybrid_cross_encoder, hybrid_ltr_full), be aware of potential differences in latency and ranking quality; hybrid strategies may incur more compute due to reranking.
  • Environment variables can be extended with model versioning metadata and collection metadata to improve LTR feature extraction.
  • If using OpenAI function calling, ensure the tool schema follows the expected OpenAI function spec (name, parameters, required fields).
  • For deployments, consider containerized setups (Docker) to simplify dependency management and environment isolation.
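For the OpenAI function-spec point above, a tool definition needs a name, a JSON-Schema `parameters` object, and a `required` list. A minimal example (the `web_search` tool itself is hypothetical, but the field layout follows the OpenAI function-calling spec):

```python
# Minimal OpenAI-style function-calling tool schema. The tool name,
# description, and parameters are invented for illustration.
web_search_tool = {
    "type": "function",
    "function": {
        "name": "web_search",
        "description": "Search the web and return the top results.",
        "parameters": {
            "type": "object",
            "properties": {
                "query": {"type": "string", "description": "Search terms"},
                "max_results": {"type": "integer", "description": "Result count"},
            },
            "required": ["query"],
        },
    },
}
```

Schemas in this shape can be embedded and indexed like any other tool metadata, which is what allows ToolsFilter to rank them against a conversation.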
