agent-logging
Demonstrates structured logging of Agentic AI calls to MCP servers, building a Phishing Triage Assistant as an example
```shell
# AWS keys are needed for Bedrock interactions when using Bedrock via LangChain;
# the LangChain API key is only required if the LangChain provider in use needs one.
claude mcp add --transport stdio realm-security-agent-mcp-logging uvx run -- python mcp_server.py \
  --env LOG_LEVEL="INFO" \
  --env LANGUAGE_MODEL="default_llm_if_override_not_set_elsewhere" \
  --env AWS_ACCESS_KEY_ID="your_aws_access_key_id" \
  --env AWS_SECRET_ACCESS_KEY="your_aws_secret_access_key" \
  --env LANGCHAIN_API_KEY="your_langchain_api_key"
```
How to use
This MCP server provides structured logging capabilities for AI agents operating within an MCP ecosystem. The Python-based server (mcp_server.py) works in tandem with the agent client (agent_client.py) to capture logs in a consistent, machine-readable format suitable for ingestion into SIEM and SOAR workflows.

The setup targets phishing triage: an AI agent analyzes potential phishing indicators, logs tool usage, decisions, and outcomes, and emits these traces in a structured form for real-time monitoring and automated remediation pipelines.

To operate, start the MCP server using uv, then run the agent client in a separate terminal. The agent relies on LangGraph and LangChain integrations to access an LLM (defaulting to Anthropic Claude 3.7 Sonnet via AWS Bedrock in this configuration) and performs tasks such as URL analysis, content inspection, and tool invocation. Ensure your cloud credentials and API keys for the LLM provider are available to the LangChain stack.
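To illustrate what a structured, SIEM-ready trace of one agent tool call might look like, here is a minimal sketch. The field names and the `build_log_record` helper are illustrative assumptions, not the server's actual schema:

```python
import json
import uuid
from datetime import datetime, timezone

def build_log_record(tool_name, tool_input, verdict):
    """Build a flat, machine-readable record for one agent tool call.

    Field names here are illustrative; align them with your SIEM schema.
    """
    return {
        "event_id": str(uuid.uuid4()),                       # unique ID for correlation
        "timestamp": datetime.now(timezone.utc).isoformat(), # UTC, ISO 8601
        "event_type": "tool_call",
        "tool_name": tool_name,
        "tool_input": tool_input,
        "verdict": verdict,
    }

# Example: log one URL-analysis step of a phishing triage run.
record = build_log_record(
    tool_name="analyze_url",
    tool_input={"url": "http://suspicious.example/login"},
    verdict="likely_phishing",
)
print(json.dumps(record))
```

Emitting one JSON object per line like this keeps the output trivially parseable by most log shippers.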
How to install
Prerequisites:
- Python 3.10+ installed
- uv (the fast Python package and project manager) installed
- Access to a supported LLM provider (e.g., Anthropic/Bedrock) if using the default setup
Installation steps:
1. Create and activate a Python virtual environment:

   ```shell
   python -m venv venv
   source venv/bin/activate    # macOS/Linux
   .\venv\Scripts\activate     # Windows
   ```

2. Install uv (and any other required packages) via pip:

   ```shell
   pip install uv
   ```

   If the repository includes a requirements file, install its dependencies:

   ```shell
   pip install -r requirements.txt
   ```

3. Place the MCP server files (mcp_server.py and agent_client.py) in your project directory. Ensure you have the proper permissions to read/write logs as configured by your environment.

4. Prepare environment variables if needed (see mcp_config for placeholders). You may also configure your LLM credentials and LangChain settings as environment variables or within your code.

5. Run the MCP server using uv:

   ```shell
   uv run -- python mcp_server.py
   ```

6. In a separate terminal, run the AI agent:

   ```shell
   uv run -- python agent_client.py
   ```
Tip: If you modify the environment or credentials, restart both processes to ensure updates are picked up.
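As a sketch of the environment-variable step, the placeholders from the mcp_config above could be exported in each terminal before launching (all values below are placeholders to replace with real credentials):

```shell
# Placeholder values from the mcp_config; substitute your real credentials.
export LOG_LEVEL="INFO"
export AWS_ACCESS_KEY_ID="your_aws_access_key_id"
export AWS_SECRET_ACCESS_KEY="your_aws_secret_access_key"
export LANGCHAIN_API_KEY="your_langchain_api_key"

# Both processes read these at startup, so set them in each terminal
# (or a shared shell profile) before invoking uv.
echo "LOG_LEVEL is set to $LOG_LEVEL"
```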
Additional notes
Notes and tips:
- This setup relies on uv to launch Python modules. The server and agent communicate under the MCP protocol with structured logging for SIEM/SOAR pipelines.
- Ensure your LLM provider credentials are correctly configured and accessible to LangChain. The default path uses AWS Bedrock for Anthropic Claude 3.7 Sonnet integration; adjust according to your provider.
- If you encounter network or authentication errors, verify that environment variables (e.g., AWS keys, LangChain API keys) are set correctly and that your environment has outbound access to the LLM endpoints.
- For richer logging, customize the log fields emitted by the agent (e.g., tool usage, decision points, timestamps) to align with your SIEM schema.
- No npm package is published for this server, since it is Python-based rather than Node.js.
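One way to align the agent's log fields with a SIEM schema is a simple rename pass before shipping. The target names below loosely follow Elastic Common Schema (ECS) conventions as an assumed example; verify the exact field names against your own SIEM's documentation:

```python
import json

# Map agent-side field names to assumed SIEM-side names (ECS-style here).
FIELD_MAP = {
    "timestamp": "@timestamp",
    "tool_name": "event.action",
    "verdict": "event.outcome",
}

def to_siem_schema(record):
    """Rename known fields; pass unrecognized fields through unchanged."""
    return {FIELD_MAP.get(key, key): value for key, value in record.items()}

# Hypothetical record as the agent might emit it.
agent_record = {
    "timestamp": "2024-01-01T00:00:00+00:00",
    "tool_name": "analyze_url",
    "verdict": "likely_phishing",
    "session_id": "abc123",
}
print(json.dumps(to_siem_schema(agent_record)))
```

Keeping the mapping in one table makes it easy to extend as you add decision points, tool parameters, or other fields to the agent's logs.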
Related MCP Servers
pgmcp
An MCP server to query any Postgres database in natural language.
MCP-PostgreSQL-Ops
🔍Professional MCP server for PostgreSQL operations & monitoring: 30+ extension-independent tools for performance analysis, table bloat detection, autovacuum monitoring, schema introspection, and database management. Supports PostgreSQL 12-17.
mcp-js
MCP server that exposes YepCode processes as callable tools for AI platforms. Securely connect AI assistants to your YepCode workflows, APIs, and automations.
zerodha
Zerodha MCP Server & Client - AI Agent (w/Agno & w/Google ADK)
Common_Chronicle
Common Chronicle turns messy context into structured, sourced timelines.
symfony
A Symfony package designed for building secure servers based on the Model Context Protocol, utilizing Server-Sent Events (SSE) and/or StreamableHTTP for real-time communication. It offers a scalable tool system tailored for enterprise-grade applications.