phantom-neural-cortex
Professional multi-AI development environment with intelligent cost optimization (under $5/month)
claude mcp add --transport stdio leei1337-phantom-neural-cortex \
  --env MM_GATEWAY_HOST="localhost" \
  --env MM_GATEWAY_PORT="18789" \
  --env PYTHONUNBUFFERED="1" \
  -- python run_agent.py --config agents/<agent-name>/AGENT.yaml --gateway
How to use
Phantom Neural Cortex (PNC) is a unified AI employee system that orchestrates planning, security, tool execution, and team communication in one deployable package. It combines the PhantomAgent runtime, the HRM Controller for planning, NSS security layers, the echo_log tool execution environment, and a Mattermost bridge for team messaging. When started in gateway mode, the system exposes an API for task submission, routes tasks through NSS and planners, executes steps via local or cloud LLMs, and provides real-time updates to your Mattermost channel. This makes it suitable for automating complex workflows, risk-scored task execution, and collaborative task management with built-in kill-switch capabilities for safety.
To use it, start the agent with a configured AGENT.yaml, which defines the agent’s role, language models, and security posture. The gateway coordinates orchestration, while the agent handles task execution and reporting. You can submit tasks via the REST endpoint at the gateway (for example, http://localhost:18789/agent/<agent-name>/task) and monitor progress and results through Mattermost or the local TUI if you enable the Kommandozentrale component. The system also supports a docker-compose setup for full-stack deployment, including NSS services, databases, and the messaging bridge, so you can run a complete, production-like environment with a single command.
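As an illustration, a task can be submitted to the gateway's REST endpoint with a small script like the one below. This is a sketch under assumptions: the `{"task": ...}` JSON payload shape is not a documented schema, and `lisa01` is the sample agent name taken from the config/templates directory.

```python
import json
import urllib.request

GATEWAY = "http://localhost:18789"   # default gateway address from this README
AGENT = "lisa01"                     # sample agent from config/templates

def submit_task(description: str) -> urllib.request.Request:
    """Build a task-submission request for the gateway.

    NOTE: the {"task": ...} payload shape is an assumption, not a
    documented schema; check the gateway API for the exact format.
    """
    url = f"{GATEWAY}/agent/{AGENT}/task"
    body = json.dumps({"task": description}).encode()
    return urllib.request.Request(
        url,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = submit_task("summarize open issues in the backlog")
# Send with urllib.request.urlopen(req) once a gateway is actually running.
print(req.full_url)
```

Progress and results then appear in the connected Mattermost channel (or the local TUI, if enabled), as described above.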
How to install
Prerequisites:
- Git
- Python 3.8+ and pip
- Optional: Docker and Docker Compose for full-stack deployment
- Access to a compatible LLM (cloud or Ollama local) as configured in AGENT.yaml
Step 1: Clone the repository
git clone https://github.com/LEEI1337/phantom-neural-cortex
cd phantom-neural-cortex
Step 2: Create a virtual environment and install dependencies
python -m venv venv
source venv/bin/activate # on Unix/macOS
venv\Scripts\activate # on Windows
pip install -r requirements.txt
Step 3: Prepare an agent configuration
- Use the provided template in config/templates (e.g., lisa01.yaml) or create your own AGENT.yaml
- Ensure the AGENT.yaml defines:
- agent.name
- agent.role
- llm.planner / llm.executor
- security.nss_enabled and related settings
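Putting those required fields together, a minimal AGENT.yaml might look like the sketch below. All values are illustrative (the model identifiers and role are assumptions, not defaults shipped with the project); validate your own config against the schema, using config/templates as the reference.

```yaml
agent:
  name: lisa01                 # sample name from config/templates
  role: developer              # illustrative role

llm:
  planner: ollama/llama3       # model identifiers are examples only
  executor: ollama/llama3      # cloud or local Ollama models both work

security:
  nss_enabled: true            # enable the NSS defense-in-depth layers
```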
Step 4: Run the gateway with an agent configuration
# Example (adjust paths to your environment)
python run_agent.py --config agents/lisa01/AGENT.yaml --gateway
Step 5: (Optional) Deploy the full stack with Docker Compose
docker compose up -d
Step 6: Verify operation
- Access the gateway REST API at http://localhost:18789
- Connect Mattermost bridges if configured and monitor agent activity
Notes:
- If you need to customize the environment, edit AGENT.yaml and the gateway config to match your deployment.
- For local testing, you can run a single agent in gateway mode with a sample AGENT.yaml path that you provide.
Additional notes
Tips and common considerations:
- NSS layers (Sentinel, Mars, Vigil, Shield) provide defense-in-depth. If NSS is offline, the system can degrade safely to SAFE mode but with reduced protections.
- The 3-way killswitch can be triggered via terminal, Mattermost command, or REST API. Ensure proper ownership and access controls for your environment.
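For the REST trigger, a kill-switch call might be built like this. Note that the `/agent/<name>/kill` path and the payload are hypothetical placeholders, not documented endpoints of phantom-neural-cortex; consult your deployment's API before relying on them.

```python
import json
import urllib.request

def build_killswitch_request(gateway: str = "http://localhost:18789",
                             agent: str = "lisa01") -> urllib.request.Request:
    """Build (but do not send) a kill-switch request.

    NOTE: the /agent/<name>/kill path and the request body are hypothetical
    examples for illustration only.
    """
    url = f"{gateway}/agent/{agent}/kill"
    body = json.dumps({"reason": "manual stop"}).encode()
    return urllib.request.Request(
        url,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_killswitch_request()
print(req.full_url)
```

Whichever trigger you use, restrict it with the same access controls you apply to the gateway itself.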
- The 42 tools exposed through echo_log enable a wide range of actions; ensure you review and approve tool usage in your organization, especially for sensitive operations.
- When using the full Docker Compose stack, ensure your environment has sufficient resources (CPU, RAM, and database storage) to support the gateway, NSS services, and memory backends.
- The AGENT.yaml schema is validated; use config/templates as reference and validate your own configs against the schema to avoid deployment errors.
- Environment variables like MM_GATEWAY_HOST and MM_GATEWAY_PORT should reflect your actual gateway address when running in non-local environments.
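A small sketch of how agent-side code can resolve those variables, falling back to the local defaults used in the install command (the variable names come from this README; the fallback logic itself is an illustration):

```python
import os

# Resolve the gateway address from the environment, falling back to the
# local defaults (localhost:18789) when the variables are unset.
MM_GATEWAY_HOST = os.environ.get("MM_GATEWAY_HOST", "localhost")
MM_GATEWAY_PORT = int(os.environ.get("MM_GATEWAY_PORT", "18789"))
GATEWAY_URL = f"http://{MM_GATEWAY_HOST}:{MM_GATEWAY_PORT}"
print(GATEWAY_URL)
```

In non-local deployments, export both variables before starting the agent so the computed URL points at the real gateway host.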