shippie
extendable code review and QA agent 🚢
To register Shippie as an MCP server in Claude Code:
claude mcp add --transport stdio mattzcarey-shippie npx -y shippie review
How to use
Shippie is a command-line tool that uses large language models to review code in your CI/CD pipeline; it can also run locally to review staged files. Through its MCP integration, Shippie exposes its review capabilities to external tools and environments that can orchestrate it. In a pipeline, Shippie automatically scans changes for exposed secrets, inefficient code paths, and potential bugs, helping you ship faster with automated review assistance. Locally, it analyzes your changes before you commit or push, acting as an LLM-powered code reviewer.
Key capabilities include running in CI/CD contexts, integrating into existing pipelines, and using a modular set of rules and AI provider configurations. The MCP integration lets external, MCP-compatible tools request Shippie's analysis or consume its results, with configurable AI providers and actions. See the project's setup and MCP documentation for configuring providers and rules, and for wiring Shippie into other tools that need external tool access or oversight during code review.
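As a concrete illustration, a CI step might look like the sketch below. The review subcommand comes from the install one-liner above; the fail-fast guard on the API key is an assumption, and exact flags should be checked against the project's docs.

```shell
# Hypothetical CI step: fail fast if the API key secret is missing,
# then run a shippie review over the current changes.
: "${OPENAI_API_KEY:?set OPENAI_API_KEY from your CI secret store}"
npx -y shippie review
```

This is a command fragment, not a full pipeline definition; your CI system's secret store should supply OPENAI_API_KEY.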
How to install
Prerequisites:
- Node.js (or Bun for faster startup) or an environment that can run npx/shippie
- Access to an OpenAI API key (or another AI provider as configured)
Installation steps:
- Clone the repository:
  git clone https://github.com/mattzcarey/shippie.git
  cd shippie
- Install dependencies (the project supports Bun, with npm or pnpm as a fallback):
  - If using Bun: bun i
  - If using npm/pnpm: npm install or pnpm install
- Set up your API key:
  - Copy or rename .env.example to .env
  - Insert your OPENAI_API_KEY value in the .env file
- Start the application:
  - If using Bun: bun start
  - If using npm/pnpm: npm run start or pnpm run start
- Verify the server is running, and consult package.json for the available scripts and commands.
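For reference, a minimal .env might look like the following. The OPENAI_API_KEY name comes from the steps above; the placeholder value is not a real key.

```shell
# .env — loaded at startup; never commit this file.
OPENAI_API_KEY=your-api-key-here
```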
Additional notes
Environment variables and configuration:
- OPENAI_API_KEY: Your OpenAI API key or credentials for the configured AI provider.
- Review provider configuration in docs/ai-provider-config.md to customize AI providers.
- Rules and MCP integration: see docs/mcp.md for how Shippie exposes MCP endpoints and how external tools can request analysis or access external capabilities.

Common issues:
- If you encounter API key errors, ensure .env is correctly named and loaded in your runtime environment.
- When running in CI, ensure the runner has network access to the AI provider and necessary environment variables.
- If using Bun, remember bun i (install) and bun start; for npm/pnpm, ensure dependencies are installed before starting.
MCP-related tips:
- Use the provided MCP docs to configure how Shippie participates with external tools, including how to pass context, results, and logs back to orchestrators.
- Keep rules files and AI provider configurations up to date to maximize review quality.
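The local review workflow mentioned earlier can be wrapped in a small helper script, sketched below under assumptions: the review subcommand is taken from the install one-liner, and the staged-changes guard is illustrative rather than part of Shippie itself.

```shell
#!/usr/bin/env sh
# Run a shippie review only when there is something staged to review.
if git diff --cached --quiet; then
  echo "nothing staged; skipping review"
else
  npx -y shippie review
fi
```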