context-engineering
🧠 Stop building AI that forgets. Master MCP (Model Context Protocol) with production-ready semantic memory, hybrid RAG, and the WARNERCO Schematica teaching app. FastMCP + LangGraph + Vector/Graph stores. Your AI assistant's long-term memory starts here.
claude mcp add --transport stdio timothywarner-org-context-engineering uv run warnerco-mcp
How to use
This MCP server implements the WARNERCO Schematica pattern used in the Context Engineering course. It runs a production-style MCP backend built with Python, FastAPI, FastMCP, and LangGraph to support a semantic memory layer for AI assistants. The server exposes an MCP-compatible interface that coordinates a 7-node hybrid RAG pipeline, memory stores (JSON, Graph, Scratchpad, and more), and the orchestration required for contextual reasoning and memory retrieval. You can interact with the server via Claude Desktop/Claude Code integrations, or test locally with the MCP Inspector to verify endpoints, prompts, and data flow.
To use the server, start it with the command provided in the MCP client configuration (uv run warnerco-mcp). Once running, you can connect Claude-based tools or the MCP Inspector to explore prompts, memory retrieval, and reasoning steps. The FastAPI + FastMCP combination handles HTTP/RPC communication, while LangGraph manages the prompt engineering and graph-based memory flow. You’ll typically test memory retrieval, scratchpad usage, and graph-based reasoning as part of your production workflow.
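The memory-layer concepts described above (a persistent JSON store plus an ephemeral scratchpad, queried for relevant entries) can be pictured with a small, stdlib-only sketch. This is an illustration of the idea, not WARNERCO Schematica's actual implementation; the class and method names below are hypothetical, and naive keyword overlap stands in for real vector similarity.

```python
import json
from pathlib import Path


class JsonMemoryStore:
    """Toy persistent memory: a JSON file of text entries (hypothetical API)."""

    def __init__(self, path: Path):
        self.path = path
        self.entries = json.loads(path.read_text()) if path.exists() else []

    def add(self, text: str) -> None:
        self.entries.append(text)
        self.path.write_text(json.dumps(self.entries))

    def retrieve(self, query: str, k: int = 3) -> list[str]:
        # Keyword-overlap scoring as a stand-in for vector similarity.
        q = set(query.lower().split())
        scored = sorted(self.entries,
                        key=lambda e: len(q & set(e.lower().split())),
                        reverse=True)
        return scored[:k]


class Scratchpad:
    """Ephemeral per-session working memory."""

    def __init__(self):
        self.notes: list[str] = []

    def jot(self, note: str) -> None:
        self.notes.append(note)


store = JsonMemoryStore(Path("memory.json"))
store.add("FastMCP exposes tools over the MCP protocol")
store.add("LangGraph orchestrates the 7-node hybrid RAG pipeline")
pad = Scratchpad()
pad.jot("user asked about memory retrieval")
print(store.retrieve("hybrid RAG pipeline", k=1))
```

In the real server, retrieval like this is exposed as MCP tools and coordinated by the LangGraph pipeline rather than called directly.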
How to install
Prerequisites:
- Python 3.11+ (3.12+ recommended for WARNERCO Schematica)
- Node.js 20+ (for Lab 01 and MCP Inspector)
- uv (Python package manager for the backend)
- Claude Desktop or Claude Code for MCP tooling
1. Clone the repository:
   git clone https://github.com/timothywarner-org/context-engineering.git
   cd context-engineering
2. Install and set up the server (Python/uv):
   - Ensure Python 3.11+ is installed and available on PATH.
   - Create a virtual environment and install dependencies (adjust paths if needed):
     python -m venv venv
     source venv/bin/activate      # macOS/Linux
     .\venv\Scripts\activate       # Windows
     pip install -r requirements.txt   # if a requirements file exists for the backend
   - Confirm that uv is available, and install the FastAPI/FastMCP-related packages if needed:
     pip install fastapi fastmcp langgraph
3. Run the MCP server (per CLAUDE.md guidance) from the backend directory (src/warnerco/backend):
   uv run warnerco-mcp
4. Optional: run MCP Inspector to test endpoints:
   npx @modelcontextprotocol/inspector uv run warnerco-mcp
5. If you will work with Lab 01 or other parts:
   - For Lab 01 (Hello MCP): install dependencies with npm and start the lab UI as described in the Quick Start section.
   - For other components, follow the repository's CLAUDE.md and lab-specific instructions.
A prerequisites recap is listed in the Quick Start section of the repo README. Adjust paths to your local environment as needed.
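Once installed, a Claude Desktop entry for this server typically takes a shape like the following. This is a sketch, not the project's authoritative config: the exact server name, the use of uv's --directory flag to set the working directory, and the clone path are assumptions — adapt them to your local setup and check CLAUDE.md.

```json
{
  "mcpServers": {
    "timothywarner-org-context-engineering": {
      "command": "uv",
      "args": ["run", "--directory", "/path/to/context-engineering/src/warnerco/backend", "warnerco-mcp"]
    }
  }
}
```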
Additional notes
Tips and considerations:
- The WARNERCO Schematica app is designed for production-style MCP usage with a 7-node hybrid RAG pipeline. Be mindful of environment-specific memory and latency requirements in production.
- If you see Python 3.12-specific warnings, pin to Python 3.11 or follow the project's guidance for 3.12+ in the WARNERCO context.
- The configuration shown in MCP client examples uses uv to run the backend. You can adapt the path (cwd) to your local clone if you move directories.
- For Claude Desktop integration, ensure the correct mcpServers entry matches the backend command and working directory so the tool can connect to the MCP endpoint.
- When testing with MCP Inspector, you can validate prompts, memory retrieval, and graph-based reasoning flows before deploying to production.
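The graph-based reasoning flows mentioned above can be pictured with a toy adjacency-map memory graph. This stdlib-only sketch is not the project's actual graph store; all names here are hypothetical, and breadth-first traversal stands in for the real graph reasoning.

```python
from collections import deque


class GraphMemory:
    """Toy graph store: entities as nodes, labeled relations as edges."""

    def __init__(self):
        self.edges: dict[str, list[tuple[str, str]]] = {}

    def relate(self, src: str, relation: str, dst: str) -> None:
        self.edges.setdefault(src, []).append((relation, dst))

    def neighbors(self, node: str) -> list[tuple[str, str]]:
        return self.edges.get(node, [])

    def reachable(self, start: str) -> set[str]:
        # Breadth-first traversal: everything reachable by following relations.
        seen, queue = {start}, deque([start])
        while queue:
            for _, nxt in self.neighbors(queue.popleft()):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        return seen


g = GraphMemory()
g.relate("warnerco-mcp", "built_with", "FastMCP")
g.relate("FastMCP", "runs_on", "FastAPI")
g.relate("warnerco-mcp", "orchestrated_by", "LangGraph")
print(sorted(g.reachable("warnerco-mcp")))
```

A query like "what does warnerco-mcp depend on?" then reduces to a traversal from one node, which is the kind of flow you can observe step by step in the MCP Inspector.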
Related MCP Servers
awesome-agent-skills
A curated list of skills, tools, tutorials, and capabilities for AI coding agents (Claude, Codex, Antigravity, Copilot, VS Code)
claude-emporium
🏛 [UNDER CONSTRUCTION] A (roman) claude plugin marketplace
codebase-context
Local-first Second brain for AI agents working on your codebase - detects your team coding conventions and patterns, brings in persistent memory, code-generation checks, and hybrid search with evidence scoring. Exposed through CLI and MCP server.
context-lens
Semantic search knowledge base for MCP-enabled AI assistants. Index local files or GitHub repos, query with natural language. Built on LanceDB vector storage. Works with Claude Desktop, Cursor, and other MCP clients.
automagik-tools
From API to AI in 30 Seconds - Transform any API into an intelligent MCP agent that learns, adapts, and speaks human
ctxvault
Local memory infrastructure for AI agents. Isolated vaults you compose, control, monitor and query — no cloud, no setup.