Prompt-Optimizer
The prompt linter for LLM applications. Score, analyze, and standardize prompt quality. Open-source MCP server + Node.js API.
claude mcp add --transport stdio rishiatlan-prompt-optimizer-mcp npx -y pcp-engine \
  --env PCP_FREE_TIER="true" \
  --env PCP_LOG_LEVEL="info"
How to use
Prompt-Optimizer (PCP Engine) is an MCP server that analyzes, scores, enforces policy on, and provides a guided workflow for refining prompts. It can classify prompts, assess risk, route to appropriate models, and prepare a compiled, policy-compliant prompt along with a structured audit trail. The built-in commands cover the full cycle from quick quality checks to an end-to-end optimization pipeline. Typical usage is to run preflight to understand risks, then optimize to produce a PreviewPack for approval, and finally to enforce the finalized prompt through the chosen model route. This makes it suitable for teams that want deterministic prompt quality and governance without pushing that logic into the LLMs themselves.
To use the available tools, install the PCP Engine CLI (pcp-engine) and run commands such as preflight, optimize, check, score, cost, and benchmark. Preflight classifies the prompt, scores 5 dimensions, and surfaces blocking questions if needed. Optimize executes the full pipeline: analyze, compile, surface blocking questions, and produce a PreviewPack for approvals. The check command provides a quick quality score and top issues, while score gives a detailed 5-dimension breakdown. Cost estimates token usage across supported models, and benchmark runs regression tests to ensure stability. The workflow supports human-in-the-loop approvals, where blocking questions must be answered before finalizing the compiled prompt.
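The commands above can also be chained in a shell script. A minimal sketch of gating on a quality score follows; the JSON field names ("score", "issues") are assumptions for illustration, since the output schema is not documented here:

```shell
# Simulated output from `pcp check "..." --json`.
# The field names ("score", "issues") are assumptions, not the documented schema.
output='{"score": 82, "issues": ["missing output format"]}'

# In a real run you would capture the CLI output instead:
#   output=$(pcp check "Summarize the attached report" --json)

# Extract the numeric score without external tools like jq.
score=$(printf '%s' "$output" | sed -n 's/.*"score":[[:space:]]*\([0-9]*\).*/\1/p')

if [ "$score" -ge 70 ]; then
  echo "prompt quality OK ($score)"
else
  echo "prompt quality below threshold ($score)" >&2
  exit 1
fi
```

The same pattern works for any of the JSON-emitting commands: capture the output once, extract the fields you gate on, and fail the script on violations.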
In CI or GitHub Actions, you can wire PCP to run preflight or optimize as part of your quality gates, ensuring prompts meet your governance criteria before deployment or release.
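As a sketch, such a quality gate could look like this in GitHub Actions. The workflow structure is standard; the pcp invocation reuses the commands documented above, and the prompt file path is a hypothetical example:

```yaml
name: prompt-quality-gate
on: [pull_request]

jobs:
  preflight:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 18
      # Consider pinning the engine version to keep CI reproducible.
      - run: npm install -g pcp-engine
      # --json gives structured output you can parse in later steps.
      # prompts/summarize.txt is a hypothetical path in your repo.
      - run: pcp preflight "$(cat prompts/summarize.txt)" --json
```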
How to install
Prerequisites:
- Node.js 18+ and npm installed on your system
- Access to npm to install the PCP Engine CLI
Installation steps:
- Install the PCP Engine CLI globally (recommended for a quick start):
  npm install -g pcp-engine
- Verify the installation:
  pcp --version
  pcp preflight "Your prompt here" --json
- Optional: to run without a global installation, use npx directly:
  npx -y pcp-engine preflight "Your prompt here" --json
- Review and customize environment variables as needed (see Additional notes below for details):
- PCP_LOG_LEVEL: controls verbosity (debug|info|warn|error)
- PCP_FREE_TIER: enable/disable free tier features for testing
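For example, the variables might be set like this for a local debugging session (illustrative only; the values mirror the flags shown in the install command above):

```shell
# Raise log verbosity and enable the free tier before launching the CLI.
export PCP_LOG_LEVEL=debug
export PCP_FREE_TIER=true

# The engine reads these from its environment; confirm what it will see:
echo "log level: ${PCP_LOG_LEVEL}, free tier: ${PCP_FREE_TIER}"
```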
Additional notes
Tips and common considerations:
- The CLI commands mirror the MCP workflow: preflight, optimize, check, score, cost, benchmark. Use preflight for a quick risk assessment and optimize for full pipeline results.
- The free tier typically provides limited optimizations per month; monitor usage if you’re evaluating scale.
- If running in CI, consider using the --json flag to get structured output for parsing in scripts.
- Environment variables can tune behavior (logging, tier limits). In production, securely manage any sensitive values the MCP may reference.
- When integrating with GitHub Actions or CI, pin the PCP Engine version to avoid unexpected changes, and ensure actions/checkout is configured so repository files are available during prompt processing.
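One way to pin the version is to declare the CLI as a devDependency rather than installing it globally in CI (the version number below is a placeholder, not a known release):

```json
{
  "devDependencies": {
    "pcp-engine": "1.0.0"
  }
}
```

CI can then invoke the tool via npx (e.g. `npx pcp-engine preflight ... --json`) and resolve the same pinned version on every run.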