omnicoreagent
OmniCoreAgent is a powerful Python framework for building autonomous AI agents that think, reason, and execute complex tasks. It produces production-ready agents that use tools, manage memory, coordinate workflows, and handle real-world business logic.
claude mcp add --transport stdio omnirexflora-labs-omnicoreagent python -m omnicoreagent \
  --env PYTHONWARNINGS="ignore" \
  --env OMNICORE_API_KEY="your-api-key (if required by provider)"
How to use
OmniCoreAgent is a production-ready AI agent framework with runtime memory backends, context management, and a suite of tools. It supports switching memory stores at runtime (Redis, MongoDB, PostgreSQL, SQLite, or in-memory) and offers built-in guardrails, observability, and workflow orchestration. You can register local Python tools via ToolRegistry and expose them to the agent to perform tasks such as data lookups or API calls. MCP client compatibility means OmniCoreAgent can connect to any MCP server (stdio, SSE, or HTTP with OAuth), enabling flexible integration into existing MCP ecosystems. To get started, install the Python package, initialize an OmniCoreAgent with a memory backend, and define a small set of tools for the agent to use. The example below demonstrates creating a weather tool, running the agent, and switching memory stores on the fly without restarting the process.
In practice, you can harness three core capabilities: (1) memory and context management to maintain long-running conversations and session state; (2) tool offloading to save large tool outputs to files, reducing token usage; and (3) guardrails and observability to keep prompts safe and provide per-request metrics and tracing. When you run your agent, you get a response object with the agent's answer and, if desired, persistent memory across sessions. For production-scale deployments, OmniServe can turn your agent into a REST/SSE API endpoint, letting external clients query the agent over standard HTTP.
How to install
Prerequisites:
- Python 3.10 or newer
- pip (comes with Python)
- Optional: a compatible LLM provider key and network access
Install the package from PyPI:
pip install omnicoreagent
(Optional) Install the client library for your chosen memory backend (Redis, MongoDB, PostgreSQL, or SQLite). Install only what you plan to use, e.g.:
pip install redis pymongo psycopg2-binary aiosqlite
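Each backend needs a connection string, typically supplied via environment variables. The helper below is a hedged sketch of that pattern; the variable names, default URLs, and the idea that omnicoreagent reads them this way are assumptions for illustration.

```python
# Illustrative mapping of backend names to example connection strings.
# The exact configuration keys omnicoreagent expects may differ.
import os

DEFAULT_URLS = {
    "redis": "redis://localhost:6379/0",
    "mongodb": "mongodb://localhost:27017",
    "postgresql": "postgresql://user:pass@localhost:5432/agent_memory",
    "sqlite": "sqlite:///agent_memory.db",
}

def memory_url(backend: str) -> str:
    """Resolve a connection string from the environment, falling back to a local default."""
    return os.getenv(f"{backend.upper()}_URL", DEFAULT_URLS[backend])
```

Keeping the URLs in the environment rather than in code makes it easy to point the same agent at different stores per deployment.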
Create a basic example script to run the agent:
from omnicoreagent import OmniCoreAgent, MemoryRouter, ToolRegistry
import asyncio

# Define tools
tools = ToolRegistry()

@tools.register_tool("get_weather")
def get_weather(city: str) -> dict:
    return {"city": city, "temp": "22°C", "condition": "Sunny"}

agent = OmniCoreAgent(
    name="my_agent",
    system_instruction="You are a helpful assistant.",
    model_config={"provider": "openai", "model": "gpt-4o"},
    local_tools=tools,
    memory_router=MemoryRouter("redis"),  # or another backend like "mongodb"
    agent_config={
        "context_management": {"enabled": True},
        "guardrail_config": {"strict_mode": True},
    },
)

# Run the agent and switch memory stores at runtime if needed
async def main():
    res = await agent.run("What's the weather in Tokyo?")
    print(res["response"])
    await agent.switch_memory_store("mongodb")
    res2 = await agent.run("How about Paris?")
    print(res2["response"])

asyncio.run(main())
If you plan to expose the agent as a web API, consider using OmniServe as described in the docs.
Additional notes
Tips and common issues:
- Ensure your memory backend is reachable from the host running the MCP server; configure connection strings as needed per backend.
- When switching memory stores at runtime, verify that the target backend is compatible with your data model to avoid schema issues.
- Use the ToolRegistry approach to keep your tools modular and testable; include type hints for better integration with the agent’s reasoning.
- Enable guardrails with a sensible strictness level to protect against prompt injection and unsafe tool usage.
- If you see token usage spikes, enable Tool Response Offloading to write large tool outputs to files and reduce token expansion.
- For production deployments, use observability features (metrics and tracing) to monitor latency and reliability of agent runs.
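The offloading tip above can be sketched in a few lines: write any oversized tool result to a file and hand the model a short pointer instead of the full payload. This is a conceptual illustration, not omnicoreagent's built-in Tool Response Offloading; the threshold value and the return shape are assumptions.

```python
# Minimal sketch of tool-response offloading (illustrative only).
import json
import os
import tempfile
from pathlib import Path

OFFLOAD_THRESHOLD = 2_000  # characters; tune to your model's token budget

def maybe_offload(result: dict) -> dict:
    """Pass small results through; write large ones to disk and return a pointer."""
    serialized = json.dumps(result)
    if len(serialized) <= OFFLOAD_THRESHOLD:
        return result  # small enough to stay inline in the prompt
    fd, name = tempfile.mkstemp(suffix=".json")
    os.close(fd)
    Path(name).write_text(serialized)
    # The model sees only this short reference; a follow-up tool call can
    # read the file on demand instead of re-expanding the full payload.
    return {"offloaded": True, "path": name, "bytes": len(serialized)}
```

Whatever the framework's actual mechanism, the effect is the same: large payloads stop inflating every subsequent prompt while staying retrievable.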
Related MCP Servers
better-chatbot
Just a Better Chatbot. Powered by Agent & MCP & Workflows.
AgentChat
AgentChat is an LLM-based agent communication platform with a built-in default Agent and support for user-defined Agents. Through multi-turn dialogue and task collaboration, Agents can understand and help complete complex tasks. The project integrates LangChain, Function Call, the MCP protocol, RAG, Memory, Milvus, and ElasticSearch for efficient knowledge retrieval and tool calling, with a high-performance backend built on FastAPI.
skillz
An MCP server for loading skills (shim for non-claude clients).
mcp-toolbox-sdk-python
Python SDK for interacting with the MCP Toolbox for Databases.
python-client
An MCP server for querying the technical documentation of mainstream agent frameworks (supports both stdio and SSE transports), covering langchain, llama-index, autogen, agno, openai-agents-sdk, mcp-doc, camel-ai, and crew-ai.
supermcp
🚀 SuperMCP - Create multiple isolated MCP servers using a single connector. Build powerful Model Context Protocol integrations for databases (PostgreSQL, MSSQL) with FastAPI backend, React dashboard, and token-based auth. Perfect for multi-tenant apps and AI assistants.