
mcca

Comprehensive Model Context Protocol client enabling AI applications to connect with external tools and services. Features streaming support, multiple LLM integrations, and modular architecture for seamless AI-tool communication.

Installation
Run this command in your terminal to add the MCP server to Claude Code.
claude mcp add --transport stdio cloaky233-mcca python path/to/your/mcca_server_script.py \
  --env GITHUB_TOKEN="your_github_token_here" \
  --env GEMINI_API_KEY="your_gemini_api_key_here"

GITHUB_TOKEN is required only for the GitHub GPT model integration; GEMINI_API_KEY is required only for the Gemini integration.

How to use

This MCP server (MCCA) exposes a standardized interface through which an AI client can discover and call tools, access resources, and apply prompts. Once the server is running, connect an MCP client (CLI or web UI), register the server in a config.json, and issue natural-language queries that the client translates into tool calls and data fetches. The server supports multi-step tool executions and can stream results back to the client as they are computed.

Typical usage: start the server with a Python command, register it in a client configuration, then use the CLI or web interface to chat with the LLM-backed agent that leverages the server's tools, resources, and prompts. Tools are function-like calls the LLM can invoke, Resources expose data sources the LLM can read, and Prompts provide templates that guide tool usage and responses.
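To make the tool-call flow concrete, here is a minimal, purely illustrative sketch of how a server dispatches a JSON tool-call request to a registered function. It uses only the standard library and hypothetical names (`tool`, `handle_call`, `add`); it is not the MCCA code or the MCP SDK, just the shape of the mechanism.

```python
import json

# Hypothetical registry mapping tool names to Python functions.
TOOLS = {}

def tool(fn):
    """Register a function as a callable tool (illustrative, not the MCP SDK)."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def add(a: int, b: int) -> int:
    """A trivial example tool the LLM could invoke."""
    return a + b

def handle_call(request_json: str) -> str:
    """Dispatch a JSON tool-call request to the registered tool and return a JSON result."""
    req = json.loads(request_json)
    result = TOOLS[req["tool"]](**req["arguments"])
    return json.dumps({"result": result})

print(handle_call('{"tool": "add", "arguments": {"a": 2, "b": 3}}'))
```

The real protocol adds capability discovery, schemas, and transport framing on top, but the core loop is the same: look up the named tool, apply the arguments, return a structured result.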

How to install

Prerequisites:

  • Python 3.10+
  • Git
  • Optional: GitHub API token (for GitHub GPT model integration) and Gemini API key (for Gemini integration)

Setup steps:

  1. Clone the repository and install dependencies
git clone https://github.com/cloaky233/mcca.git
cd mcca

# Create and activate a virtual environment
python3 -m venv .venv
source .venv/bin/activate  # macOS/Linux
# .venv\Scripts\activate  # Windows

# Install the package in editable or standard mode
pip install .
# For development/editable install
pip install -e .
  2. Configure environment variables: create a .env file in the project root (or export the variables in your shell) and add:
GITHUB_TOKEN=your_github_token_here
# GEMINI_API_KEY=your_gemini_api_key_here  # if using Gemini
  3. Run the MCP server
# Start the server (adjust path to your server script if needed)
python path/to/your/mcca_server_script.py
  4. Connect an MCP client: create a config.json that references the server, for example:
{
  "context_servers": {
    "MCCA_Server": {
      "command": {
        "path": "python",
        "args": ["path/to/your/mcca_server_script.py"],
        "env": {
          "GITHUB_TOKEN": "your_github_token_here"
        }
      },
      "settings": {}
    }
  }
}

Then use the CLI or web UI to connect to this server and start interacting with it.

Note: The exact script path and entry points may vary depending on how the server is packaged in this repository. If a module entry point is provided (e.g., python -m mcca), prefer that invocation.
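As a sketch of what a client does with the config.json above, the snippet below parses the `command` entry and assembles the argv and environment you would pass to `subprocess.Popen`. The helper name `build_launch` is hypothetical; the config layout matches the example shown in step 4.

```python
import json
import os

# The same config.json shown above, inlined for the example.
CONFIG = """
{
  "context_servers": {
    "MCCA_Server": {
      "command": {
        "path": "python",
        "args": ["path/to/your/mcca_server_script.py"],
        "env": {"GITHUB_TOKEN": "your_github_token_here"}
      },
      "settings": {}
    }
  }
}
"""

def build_launch(config_text: str, name: str):
    """Return (argv, env) suitable for launching the named server as a subprocess."""
    cmd = json.loads(config_text)["context_servers"][name]["command"]
    argv = [cmd["path"], *cmd["args"]]
    # Overlay the server-specific env vars on the current environment.
    env = {**os.environ, **cmd.get("env", {})}
    return argv, env

argv, env = build_launch(CONFIG, "MCCA_Server")
print(argv)  # ['python', 'path/to/your/mcca_server_script.py']
```

A stdio-transport client would then spawn this argv and speak MCP over the child process's stdin/stdout.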

Additional notes

Tips and caveats:

  • Ensure your Python virtual environment is active when running the server and installing dependencies.
  • If using GitHub GPT, keep your GITHUB_TOKEN secure and avoid committing it.
  • The server exposes Tools, Resources, and Prompts; explore the configuration guide in docs/configuration.md to tailor what is exposed and how the LLM should interact with it.
  • For debugging, use the CLI's debug commands and streaming outputs to diagnose tool execution flow.
  • If the server binds to a port or requires network access, ensure firewall settings allow local connections from your MCP client.
  • If you upgrade the package, re-run pip install . to refresh dependencies in your environment.
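Since the tips above mention streaming outputs for debugging, here is a conceptual sketch of a streaming tool as a Python generator: the client can display each partial event as it arrives instead of waiting for the final result. The tool name and event fields are invented for illustration and do not reflect MCCA's actual API.

```python
def search_tool(query: str):
    """Hypothetical streaming tool: yields partial progress events as they are computed."""
    for i, status in enumerate(["fetching", "parsing", f"done: {query}"]):
        yield {"step": i, "status": status}

# A client consuming the stream can log or render progress incrementally.
events = []
for event in search_tool("python docs"):
    events.append(event)
    print(event)
```

Watching the event stream this way makes it easy to see where a multi-step tool execution stalls or fails.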
