learn-ai-engineering
Learn AI and LLMs from scratch using free resources
```shell
# MCP_PORT defaults to 8080 inside the container; MCP_LOG_LEVEL accepts info, debug, warn, or error
claude mcp add --transport stdio ashishps1-learn-ai-engineering -- \
  docker run -i \
    --env MCP_PORT=8080 \
    --env MCP_LOG_LEVEL=info \
    ashishps1/learn-ai-engineering-mcp
```
How to use
The Learn AI Engineering MCP server provides a centralized interface to a curated collection of AI/ML learning resources. It uses the Model Context Protocol (MCP) to organize resources, prompts, and tools so a client can query, retrieve, and reason over the included content. You can request structured study paths, fetch relevant tutorials, and explore related topics, from mathematics foundations to deep learning frameworks, in a context-aware way.

The server exposes an MCP-compatible endpoint that accepts context-rich queries and returns responses designed to help you plan your learning journey, assemble a personalized curriculum, and discover connections between topics such as linear algebra foundations and practical deep learning implementations. The available tools typically include retrieval over the resource set, contextual prompts, and guided reasoning chains that help you synthesize the material into actionable steps.
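Because MCP is plain JSON-RPC on the wire, you can probe the advertised tool set directly when the container is run with an HTTP transport. This is only a sketch: it assumes the server exposes MCP's streamable HTTP transport at a `/mcp` path on port 8080, which the image documentation does not confirm; a stdio-only build will not answer it.

```shell
# tools/list is a standard MCP method; the /mcp endpoint path is an assumption.
MCP_URL="http://localhost:8080/mcp"

curl -s --max-time 5 -X POST "$MCP_URL" \
  -H 'Content-Type: application/json' \
  -H 'Accept: application/json, text/event-stream' \
  -d '{"jsonrpc":"2.0","id":1,"method":"tools/list","params":{}}' \
  || echo "no MCP HTTP endpoint reachable at $MCP_URL"
```

If the request succeeds, the response lists each tool's name, description, and input schema, which is a quick way to see what the server actually offers before wiring up a full client.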
How to install
Prerequisites:
- Docker installed and running on your system
- Basic command-line familiarity
Step 1: Pull and run the MCP server container
- Ensure Docker is running
- Start the MCP server in detached mode (adjust the image tag if needed):

```shell
docker pull ashishps1/learn-ai-engineering-mcp
docker run -d --name learn-ai-engineering-mcp -p 8080:8080 ashishps1/learn-ai-engineering-mcp
```
Step 2: Configure environment variables (optional)
- You can override defaults by passing environment variables to the container:

```shell
docker run -d --name learn-ai-engineering-mcp \
  -p 8080:8080 \
  -e MCP_PORT=8080 \
  -e MCP_LOG_LEVEL=info \
  ashishps1/learn-ai-engineering-mcp
```
Step 3: Verify the server is running
- Open a browser or use curl to test the endpoint (adjust host and port if you mapped them differently).
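A minimal verification sketch, assuming the container exposes a `/health` endpoint (the path is not confirmed by the image documentation, so adjust it if the server reports a different one):

```shell
# Assumed endpoint: /health may differ on your build of the image.
BASE_URL="http://localhost:8080"

# --max-time keeps the check from hanging if the server never started
if curl -s --max-time 5 "$BASE_URL/health"; then
  echo "server responded on $BASE_URL"
else
  echo "server not reachable on $BASE_URL (is the container running?)"
fi
```

If the check fails, `docker logs learn-ai-engineering-mcp` is the quickest way to see whether the container started cleanly.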
Step 4: Use MCP client tooling
- Use any MCP-compliant client to interact with the server. A typical flow sends a request with a structured prompt and context, and receives a context-aware response that references the provided resource set.
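For clients configured through a JSON file (Claude Desktop uses this shape), a stdio-based entry might look like the following sketch; the server key and env values here are illustrative, not taken from the image documentation:

```json
{
  "mcpServers": {
    "learn-ai-engineering": {
      "command": "docker",
      "args": ["run", "-i", "--rm", "ashishps1/learn-ai-engineering-mcp"],
      "env": { "MCP_LOG_LEVEL": "info" }
    }
  }
}
```

With a stdio entry like this, the client launches the container itself on demand, so no `-p` port mapping is needed.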
Additional notes
- If you don’t see expected resources, check logs from the container to ensure the resource index is loaded properly.
- The MCP server may expose endpoints for health, prompts, and context-based queries; use /health to verify uptime and /prompt to send queries.
- Environment variables can customize logging and port mapping; keep MCP_PORT aligned with your docker run port mapping.
- For production use, consider binding a domain and enabling TLS, plus setting a stable storage or mount for resource indexes if the container manages local data.
- If you encounter network issues, ensure Docker networking allows localhost access on the chosen port and that no other service is occupying the port.
- Update strategy: pull latest image periodically to incorporate new resources and improvements.
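The update strategy above can be sketched as a pull-and-recreate cycle. This assumes the container name and image from the run commands earlier; the guard makes the script degrade gracefully on machines without Docker:

```shell
# Pull-and-recreate update cycle for the MCP server container.
IMAGE="ashishps1/learn-ai-engineering-mcp"
NAME="learn-ai-engineering-mcp"

if command -v docker >/dev/null 2>&1; then
  docker pull "$IMAGE"
  # Stop and remove the old container if it exists; ignore errors if it doesn't
  docker stop "$NAME" >/dev/null 2>&1 || true
  docker rm "$NAME" >/dev/null 2>&1 || true
  docker run -d --name "$NAME" -p 8080:8080 "$IMAGE"
else
  echo "docker not found on PATH"
fi
```

Note that recreating the container discards any data stored inside it; if the image keeps a local resource index, mount a volume for it first, as suggested in the production note above.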
Related MCP Servers
git
Put an end to code hallucinations! GitMCP is a free, open-source, remote MCP server for any GitHub project
mcp-client-for-ollama
A text-based user interface (TUI) client for interacting with MCP servers using Ollama. Features include agent mode, multi-server, model switching, streaming responses, tool management, human-in-the-loop, thinking mode, model params config, MCP prompts, custom system prompt and saved preferences. Built for developers working with local LLMs.
sdk-typescript
A model-driven approach to building AI agents in just a few lines of code.
aser
Aser is a lightweight, self-assembling AI agent framework.
decipher-research-agent
Turn topics, links, and files into AI-generated research notebooks — summarize, explore, and ask anything.
neurolink
Universal AI development platform with MCP server integration, multi-provider support, and a professional CLI. Build, test, and deploy AI applications across multiple AI providers.