
SCAPO

🧘 Reddit-powered AI optimization tips | Save your time and credits | Covers 380+ services | Real tips from real users

Installation
Run this command in your terminal to add the MCP server to Claude Code:
claude mcp add --transport stdio czero-cc-scapo python -m scapo \
  --env LLM_PROVIDER="openrouter" \
  --env OPENROUTER_MODEL="your_preferred_model_name" \
  --env OPENROUTER_API_KEY="your_api_key_here" \
  --env OPENROUTER_BASE_URL="https://openrouter.ai"

How to use

SCAPO is a community-driven knowledge base for AI service optimization. It analyzes Reddit discussions and turns real-user experience into actionable, service-specific tips and best practices. The project centers on two workflows: Batch Processing via Service Discovery, which scrapes and caches service-related tips for downstream use, and a Legacy Sources mode, which extracts tips from predefined sources. Users typically deploy SCAPO to continuously gather and organize optimization guidance for services such as ElevenLabs, HeyGen, and other AI providers, giving teams quick access to concrete, domain-specific techniques rather than generic prompts.

To use SCAPO, install the Python package and run the provided CLI subcommands. The most common commands are part of the scraping workflow:

  • Discover services and cache them: scapo scrape discover --update
  • Extract tips for a specific service: scapo scrape targeted --service "Service Name" --limit 20 --query-limit 20
  • Batch process multiple services: scapo scrape batch --category <category> --limit 20 --batch-size 3
  • Process all prioritized services: scapo scrape all --limit 20 --query-limit 20 --priority ultra
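The batch command's --batch-size flag suggests services are processed in fixed-size groups. As a minimal sketch of that chunking idea (hypothetical — SCAPO's actual internals are not shown on this page), grouping a service list into batches of 3 could look like:

```python
def chunk_services(services, batch_size=3):
    """Split a list of service names into batches of at most batch_size.

    Hypothetical helper illustrating what `scapo scrape batch --batch-size 3`
    might do internally; not SCAPO's actual code.
    """
    return [services[i:i + batch_size]
            for i in range(0, len(services), batch_size)]

# Example: five services processed as a batch of 3 and a batch of 2.
for batch in chunk_services(["ElevenLabs", "HeyGen", "Suno", "Runway", "Pika"]):
    print(batch)
```

Keeping batches small limits how many scraping and LLM calls are in flight at once, which matters when working against provider rate limits.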

There is also a Legacy Sources mode that uses a sources.yaml configuration: scapo scrape run --sources reddit:LocalLLaMA --limit 10. You can view extracted tips via the interactive TUI (scapo tui) or by inspecting the generated markdown files under models/ as shown in the repository structure. The tool uses browser automation (Playwright) for reliable data collection.
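The --sources argument takes a platform:community specifier like reddit:LocalLLaMA. A small sketch of how such a specifier could be parsed (the function name and error handling are assumptions for illustration, not SCAPO's actual parser):

```python
def parse_source(spec: str) -> tuple[str, str]:
    """Parse a 'platform:community' source specifier, e.g. 'reddit:LocalLLaMA'.

    Hypothetical parser for illustration; SCAPO's real handling may differ.
    """
    platform, _, community = spec.partition(":")
    if not platform or not community:
        raise ValueError(f"expected 'platform:community', got {spec!r}")
    return platform, community

print(parse_source("reddit:LocalLLaMA"))
```

Splitting on the first colon keeps community names containing colons intact, and failing loudly on a malformed specifier surfaces configuration mistakes early.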

How to install

Prerequisites:

  • Python 3.12 or newer
  • git
  • (Optional) uv, a fast Python package and environment manager, if you prefer uv-based workflows

Step-by-step installation:

  1. Clone the repository (or install the package from PyPI if available):

     git clone https://github.com/czero-cc/scapo.git
     cd scapo

  2. Set up a Python environment and install dependencies:

    • If using uv:

      curl -LsSf https://astral.sh/uv/install.sh | sh
      uv venv
      source .venv/bin/activate  # On Windows: .venv\Scripts\activate

    • Install the package in editable mode (for local development):

      uv pip install -e .

  3. Install browser automation dependencies if needed:

      uv run playwright install

  4. Prepare configuration for your LLM provider:

      cp .env.example .env

     Edit .env to set:

      LLM_PROVIDER=openrouter
      OPENROUTER_API_KEY=your_api_key_here
      OPENROUTER_MODEL=your_preferred_model_name

  5. Run the SCAPO CLI:

      python -m scapo  # or via uvx if you are using uv's runner

Note: The exact commands may vary slightly depending on whether you use a virtual environment manager (uv) or a standard Python setup. The key is to have Python 3.12+, install the package, configure your LLM provider, and install browser automation capabilities (Playwright) for scraping.
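Since the setup hinges on Python 3.12+ and a configured .env, a quick sanity check can catch misconfiguration before running the scraper. The variable names below are taken from the steps above; the check script itself is a suggestion, not part of SCAPO:

```python
import os
import sys

# Environment variables the OpenRouter configuration above expects.
REQUIRED_VARS = ["LLM_PROVIDER", "OPENROUTER_API_KEY", "OPENROUTER_MODEL"]

def check_setup(environ=os.environ, version=sys.version_info):
    """Return a list of human-readable problems with the local setup."""
    problems = []
    if tuple(version[:2]) < (3, 12):
        problems.append(f"Python 3.12+ required, found {version[0]}.{version[1]}")
    for var in REQUIRED_VARS:
        if not environ.get(var):
            problems.append(f"missing environment variable: {var}")
    return problems

if __name__ == "__main__":
    for problem in check_setup():
        print("warning:", problem)
```

Running this after editing .env (with the variables exported or loaded into the shell) gives an immediate answer to the most common "authentication error" and "unsupported Python" failures.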

Additional notes

Tips and common issues:

  • Ensure your LLM provider API keys and models are correctly configured in .env to avoid authentication errors.
  • If you encounter token or context issues, run scapo update-context (for OpenRouter users) to refresh the token limits cache for better batch performance.
  • The Legacy Sources mode requires a properly configured sources.yaml. If you start with Service Discovery, you can gradually integrate legacy sources.
  • For browser automation, ensure the environment has Chromium/Playwright dependencies installed; use uv run playwright install if using uv.
  • The output is organized under models/ with categories by service (e.g., models/audio/eleven-labs/). Review the generated *.md files for cost optimizations, pitfalls, and parameters.
  • If you plan to deploy in production, consider setting environment variables explicitly in your deployment environment and enabling rate-limit safeguards for scraping workers.
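The tip about the models/ layout can also be explored programmatically. A short sketch that maps each service directory to its generated tip files — the models/&lt;category&gt;/&lt;service&gt;/*.md layout is assumed from the eleven-labs example above:

```python
from pathlib import Path

def list_tip_files(root="models"):
    """Map 'category/service' -> sorted markdown tip files under root.

    Assumes the models/<category>/<service>/*.md layout described above.
    """
    tips = {}
    for md in sorted(Path(root).rglob("*.md")):
        service = md.parent.relative_to(root).as_posix()
        tips.setdefault(service, []).append(md.name)
    return tips
```

Calling list_tip_files() from the repository root gives a quick inventory of which services already have extracted tips, which is handy before kicking off another batch run.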
