
agentic-mcp-client

A standalone agent runner that executes tasks using MCP (Model Context Protocol) tools via the Anthropic Claude, AWS Bedrock, and OpenAI APIs. It enables AI agents to run autonomously in cloud environments and interact with various systems securely.

Installation
Run this command in your terminal to add the MCP server to Claude Code:
claude mcp add --transport stdio peakmojo-agentic-mcp-client docker run -i -e MACOS_USERNAME=your_username -e MACOS_PASSWORD=your_password -e MACOS_HOST=your_host_ip --rm buryhuang/mcp-remote-macos-use:latest

How to use

Agentic MCP Client is a standalone agent runner designed to execute tasks by leveraging MCP tools through multiple model providers (including Anthropic Claude, AWS Bedrock, and OpenAI). The client orchestrates a task by initializing MCP-enabled tools, sending a task to a chosen language model, and processing the model’s responses. If the model decides to call a tool, the client executes the corresponding tool (e.g., a Docker-based MCP service) and feeds the results back to the model, continuing until the task completes or a maximum iteration limit is reached. This enables autonomous agents to operate across cloud environments and interact with systems securely.
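The orchestration loop described above can be sketched as follows. This is a minimal illustration of the pattern, not the project's actual API: the model client, tool executor, and message format here are hypothetical stand-ins.

```python
# Sketch of the agent loop: ask the model, run any requested tool,
# feed the result back, and stop on a final answer or the iteration cap.

def run_task(task, call_model, call_tool, max_iterations=10):
    """call_model(messages) -> reply dict; call_tool(name, args) -> result str.
    Both callables are placeholders for the real LLM and MCP integrations."""
    messages = [{"role": "user", "content": task}]
    for _ in range(max_iterations):
        reply = call_model(messages)          # ask the LLM what to do next
        messages.append(reply)
        if reply.get("tool_call") is None:    # no tool requested: task is done
            return reply["content"]
        name, args = reply["tool_call"]
        result = call_tool(name, args)        # run the MCP tool (e.g. a Docker container)
        messages.append({"role": "tool", "content": result})
    return None                               # iteration budget exhausted
```

The real client adds provider-specific request/response handling on top of this shape, but the control flow (model, tool, model, until done or `max_iterations`) is the core idea.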

How to install

Prerequisites:

  • Docker installed (for MCP tool containers) or a Python environment capable of running the client
  • Access to a machine with network access to your desired MCP tool containers and LLM provider

Installation steps:

  1. Clone the repository:
git clone https://github.com/peakmojo-agentic-mcp-client.git
cd peakmojo-agentic-mcp-client
  2. Install Python dependencies (if using a Python environment):
pip install -r requirements.txt
  3. Install/prepare the MCP client runtime (as described in the README):
  • If using uv (Python):
uv sync
  4. Prepare configuration files:
  • Create a config.json in the project root defining your inference server settings and available MCP tools (as shown in the README example).
  5. Run the agent worker:
uv run agentic_mcp_client/agent_worker/run.py
  6. Start the dashboard (optional):
cd dashboard
npm i
npm run dev
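A config.json along the following lines is what step 4 refers to. The field names here are assumptions drawn from the terms used elsewhere on this page (inference_server, base_url, max_iterations, the example Docker image); consult the repository's README for the project's verified schema.

```json
{
  "inference_server": {
    "provider": "anthropic",
    "base_url": "https://api.anthropic.com",
    "api_key": "${ANTHROPIC_API_KEY}",
    "model": "claude-3-5-sonnet-latest"
  },
  "max_iterations": 20,
  "mcp_servers": {
    "remote-macos-use": {
      "command": "docker",
      "args": [
        "run", "-i", "--rm",
        "-e", "MACOS_USERNAME", "-e", "MACOS_PASSWORD", "-e", "MACOS_HOST",
        "buryhuang/mcp-remote-macos-use:latest"
      ]
    }
  }
}
```

Referencing secrets via environment-variable placeholders, as shown for api_key, keeps credentials out of version control.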

Prerequisites recap:

  • Python and uv environment (for uv-based commands) or Docker for MCP tools
  • Access to LLM providers (Anthropic, OpenAI, or AWS Bedrock) as configured in config.json
  • Docker if using containerized MCP tools

Additional notes

Tips and common considerations:

  • The MCP tool definitions in config.json are typically Docker-based containers. Ensure the image names (e.g., buryhuang/mcp-remote-macos-use:latest) exist and are accessible from your environment.
  • When using Bedrock or other LLM providers, securely store API keys (e.g., in config.json or environment variables) and avoid committing secrets to version control.
  • The agent supports a configurable max_iterations to control how long the task runs; adjust this to balance thoroughness with resource usage.
  • If you encounter connection issues to LLM providers, verify network access, API keys, and base_url endpoints in inference_server.
  • The dashboard provides a UI for monitoring tasks and agent progress; ensure Node.js and dependencies are installed if you plan to use it.
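One common way to keep API keys out of config.json, as the notes above recommend, is to store ${VAR} placeholders in the file and expand them from environment variables at load time. The helper below is a hedged sketch of that pattern; it is not necessarily what agentic-mcp-client does internally.

```python
# Expand ${NAME} placeholders in a parsed config using environment
# variables, recursing into nested dicts and lists. Unset variables
# expand to an empty string.
import os
import re

def expand_env(value):
    if isinstance(value, str):
        return re.sub(r"\$\{(\w+)\}",
                      lambda m: os.environ.get(m.group(1), ""), value)
    if isinstance(value, dict):
        return {k: expand_env(v) for k, v in value.items()}
    if isinstance(value, list):
        return [expand_env(v) for v in value]
    return value  # numbers, booleans, None pass through unchanged
```

With this in place, config.json can be committed safely while the actual keys live only in the shell environment or a secrets manager.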
