ai-notebook-lab
Repository containing practical exercises and notebooks focused on AI application development and experimentation.
claude mcp add --transport stdio jmautone-ai-notebook-lab \
  --env MCP_LOG_LEVEL="INFO" \
  --env MCP_SERVER_PORT="5000" \
  -- python -m ai_notebook_lab.mcp.server
How to use
AI Notebook Lab provides a hands-on environment for exploring modern AI workflows, including LLM prompting, vector databases, RAG pipelines, agents and tools, and MCP-based distributed architectures. The MCP server component in this repository enables communication between the model runtime and external tools or services via a standardized protocol. You can use the local MCP server to run and orchestrate micro-tools defined for the labs, feed them with context from notebooks, and observe the JSON-RPC style exchanges, streaming events, and tool results as part of a larger demonstration of Model Context Protocol in practice. This server is designed to integrate with the lab modules, allowing you to experiment with tool invocation, tool chaining, and state management as you develop complex AI-assisted workflows.
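As a hedged illustration of the JSON-RPC style exchanges mentioned above, the sketch below shows what an MCP-convention `tools/call` request and its matching response might look like. The tool name and arguments are hypothetical, not actual lab definitions:

```python
import json

# Hypothetical JSON-RPC 2.0 request a client might send to invoke a lab tool.
# "tools/call" follows MCP conventions; the tool name and arguments below
# are illustrative placeholders.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "summarize_notes",  # hypothetical lab tool
        "arguments": {"text": "Lab 6 covers MCP servers."},
    },
}

# A matching success response carries the same id and a result payload.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {"content": [{"type": "text", "text": "A one-line summary."}]},
}

print(json.dumps(request, indent=2))
```

Watching the server logs while a lab notebook runs lets you match each request id to its response and to any streamed events in between.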
How to install
Prerequisites:
- Python 3.8 or newer
- Git
- Virtual environment support (optional but recommended)
Steps:
1. Clone the repository:
   - git clone https://github.com/your-org/ai-notebook-lab.git
   - cd ai-notebook-lab
2. Create and activate a virtual environment:
   - python -m venv venv
   - Linux/macOS: source venv/bin/activate
   - Windows: venv\Scripts\activate
3. Install dependencies:
   - pip install -r requirements.txt
4. Run the local MCP server (used in Lab 6):
   - python -m ai_notebook_lab.mcp.server
5. Verify the server is listening: you should see startup output indicating the MCP server is listening on its default port, 5000.
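If you prefer to confirm the port programmatically rather than reading the startup log, a small TCP probe works. This assumes the server binds a local TCP port (5000 by default); adjust the host and port to your configuration:

```python
import socket

def is_port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to (host, port) succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Default MCP server port used in this README; override via MCP_SERVER_PORT.
    print("listening" if is_port_open("127.0.0.1", 5000) else "not listening")
```

A refused or timed-out connection usually means the server is not running yet, is bound to a different port, or is blocked by a firewall.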
Additional notes
Tips and considerations:
- Environment variables:
- MCP_LOG_LEVEL controls verbosity (DEBUG, INFO, WARN, ERROR).
- MCP_SERVER_PORT can be changed if you have port conflicts.
- If you encounter missing dependencies, ensure your virtual environment is activated and that requirements.txt is up to date.
- The local MCP server integrates with the Lab 6 MCP components and expects the standard MCP tool definitions used across the labs; ensure the lab tools are correctly wired into the MCP configuration.
- When testing tool invocations, inspect the server logs for the incoming JSON-RPC requests, the tool calls they trigger, and any errors raised during tool execution.
- If running behind a firewall or proxy, make sure to expose the port used by the MCP server or configure SSE/JSON-RPC endpoints accordingly.
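The environment variables above can be read server-side along the following lines. The variable names come from this README; the parsing logic is a hypothetical sketch, not the repository's actual code:

```python
import logging
import os

# Read MCP configuration from the environment, falling back to the
# defaults described above (INFO logging, port 5000).
log_level = os.environ.get("MCP_LOG_LEVEL", "INFO").upper()
port = int(os.environ.get("MCP_SERVER_PORT", "5000"))

# Map WARN (as listed above) to Python's WARNING level name before lookup.
level = logging.getLevelName("WARNING" if log_level == "WARN" else log_level)
logging.basicConfig(level=level)
logging.getLogger(__name__).info("MCP server configured for port %d", port)
```

Because both values have defaults, the server starts without any environment setup; exporting the variables before launch is only needed to change verbosity or resolve a port conflict.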
Related MCP Servers
ai-guide
Programmer Yupi's comprehensive AI resource collection plus a beginner-friendly Vibe Coding tutorial, covering LLM selection guides (DeepSeek / GPT / Gemini / Claude), the latest AI news, a prompt library, an AI knowledge encyclopedia (RAG / MCP / A2A), AI programming tutorials, AI tool guides (Cursor / Claude Code / OpenClaw / TRAE / Lovable / Agent Skills), AI development framework tutorials (Spring AI / LangChain), and an AI product monetization guide, helping you quickly master AI and stay on the cutting edge. This project is the open-source documentation version and has been upgraded into the Yupi AI navigation website.
learn-ai-engineering
Learn AI and LLMs from scratch using free resources
sre
The SmythOS Runtime Environment (SRE) is an open-source, cloud-native runtime for agentic AI. Secure, modular, and production-ready, it lets developers build, run, and manage intelligent agents across local, cloud, and edge environments.
weam
Web app for teams of 20+ members. In-built connections to major LLMs via API. Share chats, prompts, and agents in team or private folders. Modern, fully responsive stack (Next.js, Node.js). Deploy your own vibe-coded AI apps, agents, or workflows—or use ready-made solutions from the library.
quarkus-workshop-langchain4j
Quarkus Langchain4J Workshop
inAI-wiki
🌍 The open-source Wikipedia of AI — 2M+ apps, agents, LLMs & datasets. Updated daily with tools, tutorials & news.