# mcp-skills

```bash
npx machina-cli add skill bjornslib/mcp-to-uber-skills-converter/templates --openclaw
```

## MCP Skills Registry

Central directory for all MCP-derived skills. Each sub-skill wraps an MCP server with progressive disclosure.
## Available Skills

| Skill | Tools | Trigger Keywords |
|---|---|---|

See `index.json` for the machine-readable list.
## ⚠️ These Are Skill Wrappers, NOT Native MCP Tools

You CANNOT call `mcp__shadcn__*`, `mcp__github__*`, etc. directly - those don't exist.
These skills wrap MCP servers via a central `executor.py`.
## Usage

**Step 1:** Read the skill's `SKILL.md` (from the project root):

```bash
cat .claude/skills/mcp-skills/<skill-name>/SKILL.md

# Example: shadcn skill
cat .claude/skills/mcp-skills/shadcn/SKILL.md
```
**Step 2:** Use the central `executor.py` (from the project root):

```bash
# List available skills
python .claude/skills/mcp-skills/executor.py --skills

# List tools in a skill
python .claude/skills/mcp-skills/executor.py --skill github --list

# Get a tool's schema
python .claude/skills/mcp-skills/executor.py --skill github --describe create_issue

# Call a tool
python .claude/skills/mcp-skills/executor.py --skill github --call '{"tool": "create_issue", "arguments": {...}}'
```
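The payload passed to `--call` must be double-quoted JSON, not a Python dict literal. A minimal sketch of building it programmatically (the payload shape is taken from the example above; the `title` argument is illustrative, so check the tool's `--describe` output for real field names):

```python
import json

def build_call_payload(tool: str, **arguments) -> str:
    """Serialize a tool call into the {"tool": ..., "arguments": ...}
    shape that the central executor's --call flag expects."""
    return json.dumps({"tool": tool, "arguments": arguments})

# Hypothetical arguments for github's create_issue tool.
payload = build_call_payload("create_issue", title="Login fails on Safari")
print(payload)
# → {"tool": "create_issue", "arguments": {"title": "Login fails on Safari"}}
```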
## Context Efficiency
| Scenario | Native MCP (all servers) | This Registry | Savings |
|---|---|---|---|
| Idle | 40-100k tokens | ~150 tokens | 99%+ |
| Using 1 skill | 40-100k tokens | ~5k tokens | 90%+ |
| After execution | 40-100k tokens | ~150 tokens | 99%+ |
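The savings column follows directly from the token counts. A quick sanity check (a sketch; the native baseline spans 40-100k tokens, so the "Using 1 skill" row's 90%+ assumes a baseline of roughly 50k tokens or more):

```python
def savings(native_tokens: float, registry_tokens: float) -> float:
    """Fraction of context saved vs. loading all native MCP servers."""
    return 1 - registry_tokens / native_tokens

idle = savings(40_000, 150)         # ~0.996, the table's "99%+"
one_skill = savings(50_000, 5_000)  # 0.90, i.e. "90%+" from 50k upward
print(idle, one_skill)
```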
## How It Works

1. **Registry loads first** - this file (~150 tokens)
2. **User requests a tool** - e.g., "create a GitHub PR"
3. **Sub-skill loads** - only the relevant skill's SKILL.md (~4k tokens)
4. **Executor runs** - external process, 0 context tokens
5. **Result returned** - context drops back to the registry only
## Adding New Skills

Use the mcp-to-skill-converter skill:

```bash
cd .claude/skills/mcp-to-skill-converter
python mcp_to_skill.py --name <server-name>
# Outputs to .claude/skills/mcp-skills/<server-name>/
```
## Skill Structure

Each sub-skill contains:

```
.claude/skills/mcp-skills/<skill-name>/
├── SKILL.md         # Tool documentation
├── executor.py      # Async MCP client (legacy; use the central executor)
├── mcp-config.json  # Server config
└── package.json     # Dependencies
```
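A hypothetical `mcp-config.json` for a github sub-skill might follow the common MCP client launch-config convention (the `mcpServers`/`command`/`args`/`env` field names are the widespread convention, not confirmed output of the converter, and the server package name is illustrative):

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": { "GITHUB_PERSONAL_ACCESS_TOKEN": "${GITHUB_TOKEN}" }
    }
  }
}
```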
This registry enables progressive disclosure of MCP servers as Claude Skills.
## Source

Template: [registry-SKILL.md on GitHub](https://github.com/bjornslib/mcp-to-uber-skills-converter/blob/main/skills/mcp-to-skill-converter/templates/registry-SKILL.md)

## Overview
MCP-skills is the central directory for all MCP-derived skills. Each sub-skill wraps an MCP server through a central `executor.py`, enabling progressive disclosure and substantial context savings (90%+ vs. native MCP). It gives you access to MCP-derived tools such as github and assistant-ui without loading complete servers.
## How This Skill Works
The registry loads first (about 150 tokens). When you request a tool, the relevant sub-skill is loaded (around 4k tokens). The executor runs the external MCP process and returns results, while keeping the registry context available for subsequent requests.
## When to Use It

- You're asked about the GitHub tool or other MCP tools and want a quick, low-context answer.
- You need to explore MCP-derived servers like assistant-ui without loading all MCP servers.
- You want to list or describe a sub-skill's tool schema before calling it.
- You want to compare multiple MCP sub-skills with minimal context.
- You intend to call a specific tool via the central executor.
## Quick Start

- Step 1: Read the skill's SKILL.md (from the project root): `cat .claude/skills/mcp-skills/<skill-name>/SKILL.md`
- Step 2: Use the central executor.py to list skills: `python .claude/skills/mcp-skills/executor.py --skills`
- Step 3: Call a tool: `python .claude/skills/mcp-skills/executor.py --skill github --call '{"tool": "create_issue", "arguments": {...}}'`
## Best Practices

- Always read the sub-skill's SKILL.md before calling any tool.
- Use the central executor.py for all calls instead of direct MCP calls.
- List available skills first to understand what tools exist.
- Pass tool calls as a double-quoted JSON payload to --call (e.g., '{"tool": "create_issue", "arguments": {...}}').
- Rely on progressive disclosure to minimize context load; load only the sub-skills you need.
## Example Use Cases

- Ask the registry to create a GitHub issue via the github sub-skill.
- Explore MCP UI tooling via the assistant-ui sub-skill.
- List tools in a skill with `python .claude/skills/mcp-skills/executor.py --skill <skill-name> --list`.
- Describe a tool's schema with --describe, then call it with --call.
- Call a tool and observe reduced context usage thanks to progressive loading.