AgentX MCP Servers
A list of open-source MCP servers. MIT licensed. Maintained by AgentX with love.
claude mcp add --transport stdio agentx-ai-agentx-mcp-servers python -m agentx_server \
  --env ENV="production" \
  --env DEBUG="false" \
  --env OPENAI_API_KEY="your-openai-api-key-if-needed"
How to use
AgentX provides a collection of MCP servers that expose external tools and services to AI agents via the Model Context Protocol. These servers are drop-in integrations that MCP-enabled agents can connect to in order to fetch data, perform actions, or use external capabilities such as content extraction, data enrichment, or specialized workflows. Once a server is running, you can interact with it through the MCP Inspector or your agent's MCP client to discover tools, invoke them, and retrieve results in a standardized format. The collection includes servers for YouTube data extraction, finance data access, Notion content manipulation, and more, all designed to be production-ready with sensible defaults and security considerations built in.
To use a server, start it with the appropriate command for your environment (as shown in the installation guide), then connect to it from your MCP-enabled agent. The MCP Inspector can help you validate connectivity and capabilities, launching the server via uvx (for Python/uv projects) or another supported runtime. You can inspect the server's available tools, call them with the expected arguments, and handle responses within your agent's workflow.
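Under the hood, MCP clients and servers exchange JSON-RPC 2.0 messages. As a rough illustration of the handshake a client performs over stdio (the method names `initialize` and `tools/list` come from the MCP specification; the client name and protocol version string here are placeholders, not tied to any specific AgentX server):

```python
import json

def jsonrpc_request(req_id, method, params=None):
    """Build a JSON-RPC 2.0 request string, as written to the server's stdin."""
    msg = {"jsonrpc": "2.0", "id": req_id, "method": method}
    if params is not None:
        msg["params"] = params
    return json.dumps(msg)

# 1. Initialize the session (clientInfo and protocolVersion are illustrative).
init = jsonrpc_request(1, "initialize", {
    "protocolVersion": "2024-11-05",
    "capabilities": {},
    "clientInfo": {"name": "example-client", "version": "0.1.0"},
})

# 2. Ask the server which tools it exposes.
list_tools = jsonrpc_request(2, "tools/list")

print(init)
print(list_tools)
```

In practice your MCP client library or the Inspector handles this exchange for you; the sketch only shows what travels over the wire.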
How to install
Prerequisites:
- Python 3.9+ installed on your system
- Access to install Python packages (pip)
- Optional: virtual environment tool (venv/virtualenv) for isolation
Installation steps:
1. Ensure Python is installed
   - macOS/Linux: python3 --version
   - Windows: py --version or python --version
2. Create a virtual environment (recommended)
   - python -m venv venv
   - source venv/bin/activate   # macOS/Linux
   - .\venv\Scripts\activate    # Windows
3. Install the AgentX MCP server package (example placeholder; replace with the actual package name if provided by the repository):
   - pip install agentx-mcp-server   # replace with the correct package name if available
4. Run the server (as described by the project). Example:
   - python -m agentx_server
5. Verify with the MCP Inspector
   - npm install -g @modelcontextprotocol/inspector
   - npx @modelcontextprotocol/inspector python -m agentx_server
Notes:
- If you are using a docker-based deployment, follow the repository’s specific docker run commands.
- Ensure your environment variables (like API keys) are set in a secure way before starting the server.
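When starting a server from a script, it helps to fail fast if required configuration is absent. A minimal sketch, assuming the variables from the install command above; `require_env` is a hypothetical helper, not part of any AgentX package:

```python
import os

def require_env(*names):
    """Return the requested environment variables, raising if any is unset."""
    missing = [n for n in names if not os.environ.get(n)]
    if missing:
        raise RuntimeError(
            f"Missing required environment variables: {', '.join(missing)}"
        )
    return {n: os.environ[n] for n in names}

# Example: verify the variables used in the install command before launching.
# config = require_env("ENV", "OPENAI_API_KEY")
```

Reading secrets from the environment (rather than hardcoding them) keeps API keys out of version control.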
Additional notes
Tips and common issues:
- If you see authentication or permission errors, verify that API keys and tokens are correctly configured in environment variables.
- For Python servers, ensure you are using a supported Python version (3.9+). Virtual environments help avoid dependency conflicts.
- Use the MCP Inspector to validate that the server exposes the expected tools and that your agent can discover and invoke them.
- When contributing new servers, follow the repository’s template and naming conventions to ensure compatibility with the MCP ecosystem.
- If running multiple servers, consider containerization (Docker) to isolate environments and simplify deployment.
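The tool-discovery check that the Inspector performs interactively can also be scripted: compare a server's tools/list result against the tool names your agent expects. A sketch, assuming the response follows the MCP shape of result.tools[].name (the sample payload and tool names below are made up for illustration):

```python
def missing_tools(tools_list_result, expected):
    """Return expected tool names absent from a tools/list result."""
    exposed = {t["name"] for t in tools_list_result.get("tools", [])}
    return sorted(set(expected) - exposed)

# Sample result in the shape defined by the MCP spec (contents are illustrative).
sample_result = {
    "tools": [
        {"name": "extract_video_metadata", "description": "..."},
        {"name": "fetch_transcript", "description": "..."},
    ]
}

print(missing_tools(sample_result, ["extract_video_metadata", "search_channel"]))
# A non-empty list means the server is not exposing everything the agent needs.
```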
Related MCP Servers
jadx-ai
Plugin for JADX that integrates an MCP server
sandbox
A Model Context Protocol (MCP) server that enables LLMs to run ANY code safely in isolated Docker containers.
mem0
✨ mem0 MCP Server: A memory system using mem0 for AI applications with Model Context Protocol (MCP) integration. Enables long-term memory for AI agents as a drop-in MCP server.
ummon
The semantic layer for software engineering: Connect code to meaning, build on understanding
asterisk
Asterisk Model Context Protocol (MCP) server.
mindbridge
MindBridge is an AI orchestration MCP server that lets any app talk to any LLM — OpenAI, Anthropic, DeepSeek, Ollama, and more — through a single unified API. Route queries, compare models, get second opinions, and build smarter multi-LLM workflows.