SCAPO
Reddit-powered AI optimization tips | Save your time and credits | Covers 380+ services | Real tips from real users
claude mcp add --transport stdio czero-cc-scapo python -m scapo \
  --env LLM_PROVIDER="openrouter" \
  --env OPENROUTER_MODEL="your_preferred_model_name" \
  --env OPENROUTER_API_KEY="your_api_key_here" \
  --env OPENROUTER_BASE_URL="https://openrouter.ai"
How to use
SCAPO is a community-driven knowledge base for AI service optimization. It analyzes Reddit discussions and transforms real-user experiences into actionable, service-specific tips and best practices. The project centers around two workflows: a Batch Processing via Service Discovery workflow that scrapes and caches service-related tips for downstream use, and a Legacy Sources Mode that relies on predefined sources to extract tips. Users typically deploy SCAPO to continuously gather and organize optimization guidance for services like ElevenLabs, HeyGen, and other AI providers, enabling teams to quickly access concrete, domain-specific techniques rather than generic prompts.
To use SCAPO, install the Python package and run the provided CLI subcommands. The most common commands are part of the scraping workflow:
- Discover services and cache them: scapo scrape discover --update
- Extract tips for a specific service: scapo scrape targeted --service "Service Name" --limit 20 --query-limit 20
- Batch process multiple services: scapo scrape batch --category <category> --limit 20 --batch-size 3
- Process all prioritized services: scapo scrape all --limit 20 --query-limit 20 --priority ultra
There is also a Legacy Sources mode driven by a sources.yaml configuration: scapo scrape run --sources reddit:LocalLLaMA --limit 10. You can browse extracted tips in the interactive TUI (scapo tui) or by inspecting the generated markdown files under models/, as shown in the repository structure. Scraping relies on browser automation (Playwright) for reliable data collection, so make sure the browser dependencies are installed.
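As a quick way to inspect the output outside the TUI, the generated markdown files can be walked with a few lines of Python. This is an illustrative sketch, not part of SCAPO itself; the models/&lt;category&gt;/&lt;service&gt;/*.md layout is assumed from the repository structure described above.

```python
from pathlib import Path

def list_tip_files(root: str = "models") -> list[str]:
    """Return relative paths of all generated tip markdown files under `root`."""
    base = Path(root)
    if not base.is_dir():
        return []
    return sorted(str(p.relative_to(base)) for p in base.rglob("*.md"))

if __name__ == "__main__":
    for path in list_tip_files():
        print(path)
```

Running it from the repository root prints one relative path per tip file (e.g. audio/eleven-labs/tips.md), which is convenient for piping into other tooling.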
How to install
Prerequisites:
- Python 3.12 or newer
- git
- (Optional) uv, Astral's fast Python package and environment manager, if you prefer uv-based workflows
Step-by-step installation:

1. Clone the repository (or install the package from PyPI if available):
   git clone https://github.com/czero-cc/scapo.git
   cd scapo

2. Set up a Python environment and install dependencies. If using uv:
   curl -LsSf https://astral.sh/uv/install.sh | sh
   uv venv
   source .venv/bin/activate   # On Windows: .venv\Scripts\activate
   Then install the package in editable mode (for local development):
   uv pip install -e .

3. Install browser automation dependencies if needed:
   uv run playwright install

4. Prepare configuration for your LLM provider:
   cp .env.example .env
   Edit .env to set:
   LLM_PROVIDER=openrouter
   OPENROUTER_API_KEY=your_api_key_here
   OPENROUTER_MODEL=your_preferred_model_name

5. Run the SCAPO CLI:
   python -m scapo   # or via uvx if you are using uv's runner
Note: The exact commands may vary slightly depending on whether you use a virtual environment manager (uv) or a standard Python setup. The key is to have Python 3.12+, install the package, configure your LLM provider, and install browser automation capabilities (Playwright) for scraping.
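Before the first scrape, it can be handy to sanity-check the interpreter version and provider configuration in one go. A minimal sketch, not part of SCAPO; the variable names simply mirror the .env entries shown above:

```python
import os
import sys

REQUIRED_VARS = ("LLM_PROVIDER", "OPENROUTER_API_KEY", "OPENROUTER_MODEL")

def check_setup(environ=os.environ) -> list[str]:
    """Return a list of setup problems; an empty list means the basics look OK."""
    problems = []
    if sys.version_info < (3, 12):
        problems.append(f"Python 3.12+ required, found {sys.version.split()[0]}")
    for var in REQUIRED_VARS:
        if not environ.get(var):
            problems.append(f"missing environment variable: {var}")
    return problems

if __name__ == "__main__":
    issues = check_setup()
    print("OK" if not issues else "\n".join(issues))
```

If it prints anything other than OK, fix the listed items before running the scraping commands.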
Additional notes
Tips and common issues:
- Ensure your LLM provider API keys and models are correctly configured in .env to avoid authentication errors.
- If you encounter token or context issues, run scapo update-context (for OpenRouter users) to refresh the token limits cache for better batch performance.
- The Legacy Sources mode requires a properly configured sources.yaml. If you start with Service Discovery, you can gradually integrate legacy sources.
- For browser automation, ensure the environment has Chromium/Playwright dependencies installed; use uv run playwright install if using uv.
- The output is organized under models/ with categories by service (e.g., models/audio/eleven-labs/). Review the generated *.md files for cost optimizations, pitfalls, and parameters.
- If you plan to deploy in production, consider setting environment variables explicitly in your deployment environment and enabling rate-limit safeguards for scraping workers.
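For the rate-limit point above, a simple client-side throttle is often enough for scraping workers. This is a hypothetical sketch, not a SCAPO API; the min_interval value and the fetch function are illustrative placeholders:

```python
import functools
import time

def throttled(min_interval: float):
    """Decorator enforcing at least `min_interval` seconds between calls."""
    def decorator(fn):
        last_call = [0.0]  # mutable cell holding the last call timestamp
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            wait = last_call[0] + min_interval - time.monotonic()
            if wait > 0:
                time.sleep(wait)
            last_call[0] = time.monotonic()
            return fn(*args, **kwargs)
        return wrapper
    return decorator

@throttled(min_interval=0.5)
def fetch(url: str) -> str:
    # placeholder for a real scraping call
    return f"fetched {url}"
```

Wrapping each worker's request function this way keeps the request rate predictable regardless of how fast the surrounding batch loop runs.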