omnicoreagent

OmniCoreAgent is a powerful Python framework for building autonomous AI agents that think, reason, and execute complex tasks. It produces production-ready agents that use tools, manage memory, coordinate workflows, and handle real-world business logic.

Installation
Run this command in your terminal to add the MCP server to Claude Code:
claude mcp add --transport stdio omnirexflora-labs-omnicoreagent python -m omnicoreagent \
  --env PYTHONWARNINGS="ignore" \
  --env OMNICORE_API_KEY="your-api-key (if required by provider)"

How to use

OmniCoreAgent is a production-ready AI agent framework with runtime memory backends, context management, and a suite of tools. It supports switching memory stores at runtime (Redis, MongoDB, PostgreSQL, SQLite, or in-memory) and offers built-in guardrails, observability, and workflow orchestration. You can register local Python tools via ToolRegistry and expose them to the agent for tasks such as data lookups or API calls. MCP client compatibility means OmniCoreAgent can connect to any MCP server (stdio, SSE, or HTTP with OAuth), enabling flexible integration into existing MCP ecosystems. To get started, install the Python package, initialize an agent with a memory backend, and define a small set of tools you want the agent to use. The example in the README demonstrates creating a weather tool, running the agent, and switching memory stores on the fly without restarting the process.
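Runtime memory-store switching follows a router pattern. The toy sketch below illustrates the idea only; it is not OmniCoreAgent's internal implementation, and the class and method names are hypothetical:

```python
# Illustrative sketch of runtime memory-store switching (NOT OmniCoreAgent's
# internals): a router holds a named backend and can swap it at runtime
# without restarting the process.

class InMemoryStore:
    """Toy backend; a real deployment would use Redis, MongoDB, etc."""
    def __init__(self):
        self._data = {}

    def save(self, session_id, message):
        self._data.setdefault(session_id, []).append(message)

    def load(self, session_id):
        return list(self._data.get(session_id, []))

class ToyMemoryRouter:
    def __init__(self, backend_name, backends):
        self._backends = backends            # name -> store instance
        self._active = backends[backend_name]

    def switch(self, backend_name):
        # Swap backends at runtime; later reads/writes hit the new store.
        self._active = self._backends[backend_name]

    def save(self, session_id, message):
        self._active.save(session_id, message)

    def load(self, session_id):
        return self._active.load(session_id)

router = ToyMemoryRouter("a", {"a": InMemoryStore(), "b": InMemoryStore()})
router.save("s1", "hello")
router.switch("b")                           # new writes go to backend "b"
router.save("s1", "world")
print(router.load("s1"))                     # backend "b" only holds "world"
```

The framework's MemoryRouter plays the same role, but persists sessions in the real backends listed above.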

In practice, you get three core capabilities: (1) memory and context management to maintain long-running conversations and session state; (2) tool offloading to save large tool outputs to files, reducing token usage; and (3) guardrails and observability to keep prompts safe and provide per-request metrics and tracing. When you run your agent, you get a response object with the agent's answer and, if desired, persistent memory across sessions. For production-scale deployments, OmniServe can turn your agent into a REST/SSE API endpoint, letting external clients query the agent over standard HTTP.
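Tool-output offloading is simple to picture. The sketch below shows the pattern, not OmniCoreAgent's own offloading mechanism (which is configured through the framework): large tool outputs are written to disk and the agent's context only carries a short reference, keeping token usage down.

```python
# Pattern behind tool-response offloading: inline small outputs, write
# large ones to a file and hand the agent a path plus a short preview.
import tempfile
from pathlib import Path

OFFLOAD_THRESHOLD = 200  # characters; tune to your model's token budget

def offload_if_large(tool_name: str, output: str) -> dict:
    if len(output) <= OFFLOAD_THRESHOLD:
        return {"tool": tool_name, "inline": output}
    path = Path(tempfile.mkdtemp()) / f"{tool_name}.txt"
    path.write_text(output)
    # The agent's context only sees the path and a preview, not 5000 chars.
    return {"tool": tool_name, "file": str(path), "preview": output[:80]}

small = offload_if_large("lookup", "ok")
big = offload_if_large("report", "x" * 5000)
print(small["inline"])   # short outputs stay inline
print(big["file"])       # large outputs become a file reference
```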

How to install

Prerequisites:

  • Python 3.10 or newer
  • pip (comes with Python)
  • Optional: a compatible LLM provider key and network access

Install the package from PyPI:

pip install omnicoreagent

(Optional) Install the client libraries for the memory backend you plan to use, e.g.:

pip install redis pymongo psycopg2-binary aiosqlite

Create a basic example script to run the agent:

import asyncio

from omnicoreagent import OmniCoreAgent, MemoryRouter, ToolRegistry

# Register local tools the agent can call while reasoning
tools = ToolRegistry()

@tools.register_tool("get_weather")
def get_weather(city: str) -> dict:
    """Return a stubbed weather report; replace with a real API call."""
    return {"city": city, "temp": "22°C", "condition": "Sunny"}

agent = OmniCoreAgent(
    name="my_agent",
    system_instruction="You are a helpful assistant.",
    model_config={"provider": "openai", "model": "gpt-4o"},
    local_tools=tools,
    memory_router=MemoryRouter("redis"),  # or another backend like "mongodb"
    agent_config={
        "context_management": {"enabled": True},
        "guardrail_config": {"strict_mode": True},
    },
)

# Run the agent, switching the memory store at runtime if needed
async def main():
    res = await agent.run("What's the weather in Tokyo?")
    print(res["response"])
    await agent.switch_memory_store("mongodb")
    res2 = await agent.run("How about Paris?")
    print(res2["response"])

asyncio.run(main())

If you plan to expose the agent as a web API, consider using OmniServe as described in the docs.
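OmniServe's actual API lives in the project docs; as a rough sketch of the same idea using only the standard library, any agent with an async run method can be fronted by a minimal HTTP endpoint. The StubAgent below is a hypothetical stand-in for a configured OmniCoreAgent:

```python
# Minimal HTTP front-end for an agent, stdlib only. OmniServe provides this
# (plus SSE, auth, etc.) out of the box; this only illustrates the shape.
import asyncio
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class StubAgent:
    """Hypothetical stand-in for a configured OmniCoreAgent."""
    async def run(self, query: str) -> dict:
        return {"response": f"echo: {query}"}

agent = StubAgent()

class AgentHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        result = asyncio.run(agent.run(body["query"]))
        payload = json.dumps(result).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, *args):
        pass  # keep request logging quiet

def make_server(port: int = 0) -> HTTPServer:
    """Port 0 lets the OS pick a free port; read it from server_address."""
    return HTTPServer(("127.0.0.1", port), AgentHandler)

# make_server(8000).serve_forever()  # blocks; run this in production
```

Clients then POST JSON such as {"query": "..."} and receive the agent's response object back.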

Additional notes

Tips and common issues:

  • Ensure your memory backend is reachable from the host running the MCP server; configure connection strings as needed per backend.
  • When switching memory stores at runtime, verify that the target backend is compatible with your data model to avoid schema issues.
  • Use the ToolRegistry approach to keep your tools modular and testable; include type hints for better integration with the agent’s reasoning.
  • Enable guardrails with a sensible strictness level to protect against prompt injection and unsafe tool usage.
  • If you see token usage spikes, enable Tool Response Offloading to write large tool outputs to files and reduce token expansion.
  • For production deployments, use observability features (metrics and tracing) to monitor latency and reliability of agent runs.
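To make the guardrail tip concrete, here is an illustrative input screen, not OmniCoreAgent's built-in implementation: a pattern check that rejects common prompt-injection phrases before the agent ever sees them. Real guardrails typically combine such pattern checks with model-based classification.

```python
# Toy input guardrail: reject inputs matching known injection phrasings.
import re

INJECTION_PATTERNS = [
    r"ignore (all|previous|prior) instructions",
    r"reveal (your|the) system prompt",
    r"disregard .* guardrails",
]

def screen_input(text: str) -> bool:
    """Return True if the input looks safe, False if it should be blocked."""
    lowered = text.lower()
    return not any(re.search(p, lowered) for p in INJECTION_PATTERNS)

print(screen_input("What's the weather in Tokyo?"))   # True
print(screen_input("Ignore previous instructions."))  # False
```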
