infobus
Model Context Protocol server enabling AI assistants to access transit information through standardized interfaces
claude mcp add --transport stdio simovilab-infobus-mcp node server.js \
  --env PORT=9000 \
  --env MCP_ENDPOINT=http://localhost:9000/mcp
Set PORT to the port configured for your environment; MCP_ENDPOINT (the URL or path of the MCP API) is optional.
How to use
The infobus MCP server provides a standardized interface for AI assistants and chatbots to access transit information. It exposes tools for trip planning, real-time status queries, schedules, and natural language transit assistance, enabling seamless integration with LLMs and conversational AI platforms. Once running, you can query capabilities such as route planning between stops, live vehicle positions, arrival predictions, and transit service alerts through the MCP protocol endpoints. The server is designed to be plug-and-play with common MCP tooling, allowing you to connect your AI agent to transit data without bespoke integration code.
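On the wire, these queries use the JSON-RPC 2.0 framing that MCP defines. The sketch below builds a tools/call request envelope; the tool name "plan_trip" and its arguments are illustrative assumptions, not this server's actual schema, so list the real tools with a tools/list request first.

```javascript
// Minimal sketch of the JSON-RPC 2.0 envelope used by MCP tool calls.
// The tool name "plan_trip" and its arguments are hypothetical examples.
function buildToolCall(id, name, args) {
  return {
    jsonrpc: "2.0",
    id,
    method: "tools/call",
    params: { name, arguments: args },
  };
}

const request = buildToolCall(1, "plan_trip", {
  origin: "Stop A",
  destination: "Stop B",
});

// Serialized, this is one message in the stdio transport's stream.
console.log(JSON.stringify(request));
```

An MCP client library normally builds and sends these envelopes for you; the sketch only shows what your agent framework exchanges with the server underneath.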
How to install
Prerequisites:
- Node.js (LTS recommended, >= 14)
- Git
- Access to the MCP server repository (clone or download)
Installation steps:
- Clone the repository:
  git clone <repository-url> simovilab-infobus-mcp
  cd simovilab-infobus-mcp
- Install dependencies:
  npm install
- Configure the environment (optional; provide only if your deployment needs it). Create a .env file or export the environment variables required by the server, for example:
  PORT=9000
  MCP_ENDPOINT=http://localhost:9000/mcp
- Run the MCP server:
  npm run start
- Verify the server is running:
  - Check the logs in the console
  - Send a test MCP request to the configured endpoint to confirm it responds
Note: If you prefer containerized deployment, you can build a Docker image from this repository and run it with appropriate port mappings and environment variables.
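For the containerized option, a minimal Dockerfile might look like the sketch below; the base image, file layout, and port are assumptions to adapt to your repository and .env settings.

```dockerfile
# Sketch of a containerized deployment — image tag, paths, and port
# are assumptions; adjust them to your repository layout.
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
ENV PORT=9000
EXPOSE 9000
CMD ["npm", "run", "start"]
```

Build and run it with, for example, docker build -t infobus-mcp . followed by docker run -p 9000:9000 --env-file .env infobus-mcp.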
Additional notes
Tips and common issues:
- Ensure the port you expose is not blocked by your firewall and is accessible by your AI agents.
- If you encounter missing dependencies, run npm install again or check Node.js version compatibility.
- If environment variables are required (e.g., MCP_ENDPOINT, API keys for transit data providers), document and load them via a .env file or your deployment platform's secret management.
- Review MCP protocol endpoints to confirm available tools (trip planning, real-time status, arrival predictions) and update your agent's tool usage accordingly.
- For production deployments, consider using a process manager (like PM2) and enabling proper logging and monitoring for uptime and debugging.
- If you switch to a containerized deployment, ensure the container has network access to any external transit data services and that volume mounts or config maps provide any necessary configuration files.