AI-Tracker
This repository collects core learning resources and a practical roadmap covering the underlying logic of large language models (LLMs), **Context Engineering**, and the **Model Context Protocol (MCP)**.
```shell
claude mcp add --transport stdio twwch-ai-tracker node server.js \
  --env LOG_LEVEL=info \
  --env MCP_CONFIG_PATH=path/to/mcp/config.json
```

Both environment variables are optional.
How to use
AI-Tracker is designed to act as an MCP server that coordinates tools, resources, and prompts to enable advanced context management and agent-like workflows. It leverages the MCP protocol to connect your LLM with external data sources and tooling, allowing you to fetch, store, and reason over data such as local files, remote assets, and scripted resources. The server is intended to run alongside your LLM client so that requests are mediated through the MCP layer, enabling structured tool invocation, resource retrieval, and prompt optimization across steps.
Once running, you can interact with AI-Tracker through its MCP-compliant endpoints to load and select relevant resources, issue tool calls, and manage long-running tasks. The three core MCP elements—Tools, Resources, and Prompts—are exposed to the LLM, which can request specific actions (e.g., read a file, query a remote asset, or execute a script) while the MCP layer handles context partitioning, caching, and isolation as needed. Use the inspector or debugging utilities described in the docs to monitor call lifecycles and ensure that prompts stay within the Goldilocks Zone for efficient token usage.
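Concretely, MCP traffic over the stdio transport is newline-delimited JSON-RPC 2.0. The sketch below builds the three requests a client typically issues in sequence; the method names follow the MCP specification, but the tool name `track_asset` and its arguments are hypothetical examples, not guaranteed parts of this repository:

```python
import json

def request(id_, method, params=None):
    """Serialize one JSON-RPC 2.0 request as a single line."""
    msg = {"jsonrpc": "2.0", "id": id_, "method": method}
    if params is not None:
        msg["params"] = params
    return json.dumps(msg)

# 1. Handshake: negotiate protocol version and capabilities.
init = request(1, "initialize", {
    "protocolVersion": "2024-11-05",
    "capabilities": {},
    "clientInfo": {"name": "smoke-test", "version": "0.1.0"},
})

# 2. Discover what the server exposes.
list_tools = request(2, "tools/list")

# 3. Invoke a (hypothetical) tool by name.
call = request(3, "tools/call", {
    "name": "track_asset",
    "arguments": {"path": "/absolute/path/to/asset"},
})

# Each line would be written to the server's stdin; responses
# arrive as matching JSON-RPC messages on its stdout.
for line in (init, list_tools, call):
    print(line)
```

Equivalent discovery requests exist for the other two MCP elements (`resources/list`, `prompts/list`).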
How to install
Prerequisites:
- Node.js installed (Recommended LTS version)
- NPM or PNPM for dependency management
- Access to the repository with MCP server code (or a ready-made server.js entry point)
Step-by-step:

1. Clone the repository

   ```shell
   git clone https://github.com/example/twwch-ai-tracker.git
   cd twwch-ai-tracker
   ```

2. Install dependencies

   ```shell
   npm install
   # or, if using PNPM:
   pnpm install
   ```

3. Configure the MCP server (optional)

   Create a config file at ./config/mcp.config.json if your setup requires explicit paths or environment toggles. Example:

   ```json
   {
     "mcpServers": {
       "AI-Tracker": {
         "command": "node",
         "args": ["server.js"],
         "env": {
           "MCP_CONFIG_PATH": "/absolute/path/to/mcp/config.json",
           "LOG_LEVEL": "info"
         }
       }
     }
   }
   ```

4. Start the server

   ```shell
   npm run start
   # or
   node server.js
   ```

5. Verify the server is running

   - Check the console output for a listening/ready message
   - Send a basic MCP request to ensure tooling and resources load correctly

6. Integrate with your LLM client

   - Point your MCP-enabled client at the server's endpoint and begin issuing Tool/Resource/Prompt interactions as described in the MCP docs.
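On the client side, MCP-enabled clients are typically pointed at a stdio server through an `mcpServers` entry in their own configuration file. A minimal sketch is below; the exact file name and location depend on your client (for example, Claude Desktop reads `claude_desktop_config.json`), and the paths shown are placeholders:

```json
{
  "mcpServers": {
    "twwch-ai-tracker": {
      "command": "node",
      "args": ["/absolute/path/to/twwch-ai-tracker/server.js"],
      "env": { "LOG_LEVEL": "info" }
    }
  }
}
```

After restarting the client, the server's Tools, Resources, and Prompts should appear in its MCP tooling list.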
Additional notes
Tips:
- If you encounter token overflows, leverage the MCP Select/Compress/Isolate strategies to reduce prompt size and cache frequently used resources.
- Use the MCP Inspector or equivalent debugging tool to trace the lifecycle of calls and identify bottlenecks in resource loading or tool execution.
- Environment variables can control logging verbosity and paths to assets; keep sensitive data out of logs by using appropriate log levels and secret management.
- Ensure that file-based resources use absolute paths or well-defined relative paths to avoid mismatch in runtime environments (dev vs prod).
- Regularly update dependencies and pin versions to avoid breaking changes in MCP tooling.
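As an illustration of the Select strategy mentioned above, a client can rank cached resources by relevance and keep only those that fit a token budget before assembling the prompt. The scoring and token-counting functions below are simplified stand-ins, not part of this repository:

```python
def select_resources(resources, relevance, token_cost, budget):
    """Greedy Select strategy: keep the most relevant resources that fit
    within the token budget, skipping any that would overflow it."""
    chosen, used = [], 0
    for res in sorted(resources, key=relevance, reverse=True):
        cost = token_cost(res)
        if used + cost <= budget:
            chosen.append(res)
            used += cost
    return chosen

# Toy usage: score by keyword hits, approximate tokens by word count.
docs = ["mcp tools reference", "project changelog", "mcp prompt caching guide"]
score = lambda d: d.count("mcp")
cost = lambda d: len(d.split())
print(select_resources(docs, score, cost, budget=6))
# → ['mcp tools reference', 'project changelog']
```

A real implementation would use the model's tokenizer for `token_cost` and an embedding or retrieval score for `relevance`, but the budget-guarded loop is the core of the strategy.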