# llm-council

Install: `npx machina-cli add skill gcpdev/llm-council-skill/llm-council --openclaw`

## LLM Council

Consult multiple AI models (ChatGPT and Gemini) for their perspectives before presenting implementation plans to users.
## Workflow
Trigger this skill when the user requests consultation with other AI models using phrases like:
- "Consult with ChatGPT and Gemini about..."
- "Ask other AI models what they think about..."
- "Get perspectives from the council on..."
- "Consult the LLM council: [your question]"
Process:

1. **Query external LLMs**: Run `scripts/query_llms.py` with the user's prompt to get perspectives from both ChatGPT and Gemini
2. **Analyze responses**: Review what each model suggests, identifying valuable insights, alternative approaches, and potential concerns
3. **Synthesize plan**: Create an implementation plan that incorporates the best ideas from all three models (Claude's own analysis + ChatGPT + Gemini)
4. **Present to user**: Show the final plan along with a brief summary of key contributions from each model
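The script itself is not shown in this document; the following is a minimal sketch of what `scripts/query_llms.py` might look like. The SDK calls assume the official `openai` and `google-generativeai` packages, and the function names and JSON shape are illustrative, not the skill's actual implementation.

```python
"""Hypothetical sketch of scripts/query_llms.py (illustrative only)."""
import json
import os
import sys


def query_chatgpt(prompt: str) -> str:
    from openai import OpenAI  # lazy import: a missing SDK only breaks this model
    client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
    resp = client.chat.completions.create(
        model=os.environ.get("OPENAI_MODEL", "gpt-5-nano"),
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content


def query_gemini(prompt: str) -> str:
    import google.generativeai as genai
    genai.configure(api_key=os.environ["GEMINI_API_KEY"])
    model = genai.GenerativeModel(
        os.environ.get("GEMINI_MODEL", "gemini-3-flash-preview"))
    return model.generate_content(prompt).text


def consult(prompt: str) -> str:
    """Query each model, tolerating per-model failures (see Error Handling)."""
    responses = {}
    for name, fn in (("chatgpt", query_chatgpt), ("gemini", query_gemini)):
        try:
            responses[name] = fn(prompt)
        except Exception:
            responses[name] = None  # this model's perspective is unavailable
    return json.dumps({"prompt": prompt, "responses": responses}, indent=2)


if __name__ == "__main__" and len(sys.argv) > 1:
    print(consult(sys.argv[1]))
```

Failed API calls are recorded as `None` rather than aborting the run, which matches the error-handling behavior described later in this document.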
## Setup Requirements
The skill requires API keys and optional model configuration stored in a `.env` file in the working directory:

```
OPENAI_API_KEY=sk-...
GEMINI_API_KEY=...
# Optional: Specify which models to use (defaults shown below)
OPENAI_MODEL=gpt-5-nano
GEMINI_MODEL=gemini-3-flash-preview
```
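As a sketch, the skill could load this file with a minimal hand-rolled parser rather than a dotenv library; `load_env` and `missing_keys` are hypothetical helper names, and the defaults mirror the documented model defaults.

```python
# Defaults match the skill's documented model defaults.
DEFAULTS = {"OPENAI_MODEL": "gpt-5-nano", "GEMINI_MODEL": "gemini-3-flash-preview"}


def load_env(path=".env"):
    """Parse simple KEY=VALUE lines, skipping comments and blanks."""
    env = dict(DEFAULTS)
    try:
        with open(path) as f:
            for line in f:
                line = line.strip()
                if line and not line.startswith("#") and "=" in line:
                    key, _, value = line.partition("=")
                    env[key.strip()] = value.strip()
    except FileNotFoundError:
        pass  # no .env file: caller should surface setup instructions
    return env


def missing_keys(env):
    """Required API keys that are absent or empty."""
    return [k for k in ("OPENAI_API_KEY", "GEMINI_API_KEY") if not env.get(k)]
```

`missing_keys` drives the setup-instructions path: if it returns a non-empty list, the skill should stop and tell the user which keys to add.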
Default Models:

- ChatGPT: `gpt-5-nano` (fastest, most cost-efficient: $0.05/1M input, $0.40/1M output)
- Gemini: `gemini-3-flash-preview` (balanced speed and intelligence)
Upgrade Options for Better Collaboration:

OpenAI models (ordered by capability and cost):

- `gpt-5-nano` - Fastest, most cost-efficient ($0.05/1M in, $0.40/1M out) - DEFAULT
- `gpt-5-mini` - Faster, cost-efficient for well-defined tasks ($0.25/1M in, $2.00/1M out)
- `gpt-5.2` - Best for coding and agentic tasks ($1.75/1M in, $14.00/1M out)
- `gpt-5.2-pro` - Smarter, more precise for complex problems ($21.00/1M in, $168.00/1M out)
All models support reasoning tokens, 400K context window, and image input.
Gemini models (ordered by capability):

- `gemini-2.5-flash-lite` - Ultra-fast, optimized for throughput
- `gemini-2.5-flash` - Best price-performance, large-scale processing
- `gemini-3-flash-preview` - Balanced speed and frontier intelligence (default)
- `gemini-3-pro-preview` - Most intelligent multimodal model, best for complex reasoning
Higher-tier models provide more sophisticated analysis but cost more per API call.
If the `.env` file doesn't exist or keys are missing, inform the user and provide setup instructions.
## Usage Example
User input: "Consult the council: How should I architect a real-time data pipeline for IoT sensors?"
Claude's process:

1. Execute: `python3 scripts/query_llms.py "How should I architect a real-time data pipeline for IoT sensors?"`
2. Parse JSON responses from ChatGPT and Gemini
3. Analyze their suggestions (e.g., ChatGPT suggests Kafka, Gemini recommends considering edge computing)
4. Synthesize final plan incorporating valuable insights from all models
5. Present the adapted plan to user with attribution
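The exact JSON the script prints is not specified in this document. Assuming a shape like `{"responses": {"chatgpt": ..., "gemini": ...}}`, the parsing step might look like this (`parse_council_output` is a hypothetical helper):

```python
import json


def parse_council_output(raw: str) -> dict:
    """Extract per-model responses from the script's stdout.

    The {"responses": {...}} shape is an assumption; adapt it to whatever
    scripts/query_llms.py actually prints.
    """
    data = json.loads(raw)
    return data.get("responses", {})


# Example payload with the responses sketched in the usage example above.
raw = json.dumps({
    "prompt": "How should I architect a real-time data pipeline for IoT sensors?",
    "responses": {
        "chatgpt": "Consider Kafka for high-throughput ingestion.",
        "gemini": "Evaluate edge computing to preprocess sensor data.",
    },
})
perspectives = parse_council_output(raw)
```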
## Output Format
Present the final implementation plan naturally, mentioning key insights from other models inline where relevant. For example:
> Based on consultation with ChatGPT and Gemini, here's the recommended architecture:
>
> [Implementation plan with inline references like "ChatGPT highlighted the importance of..." or "Gemini suggested..."]
>
> Key contributions:
> - ChatGPT: [brief summary]
> - Gemini: [brief summary]
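The attribution summary can be assembled mechanically. As a sketch, `format_plan` below is a hypothetical helper (not part of the skill) that renders the plan in the format shown above:

```python
def format_plan(plan: str, contributions: dict) -> str:
    """Render the final plan followed by per-model contribution summaries."""
    lines = [
        "Based on consultation with ChatGPT and Gemini, "
        "here's the recommended architecture:",
        "",
        plan,
        "",
        "Key contributions:",
    ]
    lines += [f"- {model}: {summary}" for model, summary in contributions.items()]
    return "\n".join(lines)


report = format_plan(
    "Use Kafka for ingestion, with edge preprocessing on the sensors.",
    {"ChatGPT": "stream ingestion with Kafka",
     "Gemini": "edge computing to cut bandwidth"},
)
```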
## Error Handling
- If API keys are missing, inform user and provide setup instructions
- If an API call fails, note which model's perspective is unavailable and proceed with available responses
- If both APIs fail, inform user and offer to provide Claude's own analysis without external consultation
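The degradation rules above can be sketched as a small status check; the function name and message strings are illustrative, with `None` marking a model whose API call failed:

```python
def council_status(responses: dict) -> str:
    """Decide how to proceed given per-model responses (None = API call failed)."""
    available = [m for m, r in responses.items() if r is not None]
    failed = [m for m in responses if responses[m] is None]
    if not available:
        return ("Both external APIs failed; offering Claude's own analysis "
                "without external consultation.")
    if failed:
        return (f"No response from: {', '.join(failed)}. "
                f"Proceeding with: {', '.join(available)}.")
    return f"All council members responded: {', '.join(available)}."
```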
## Source

View on GitHub: https://github.com/gcpdev/llm-council-skill/blob/main/llm-council/SKILL.md

## Overview
LLM Council coordinates consultations with multiple AI models (such as ChatGPT and Gemini) before presenting an implementation plan. It queries external LLM APIs, synthesizes their perspectives, and delivers an adapted plan with attributions. Use this when you explicitly want diverse AI insights prior to a decision.
## How This Skill Works

On user request, it runs `scripts/query_llms.py` with the prompt to gather perspectives from the selected models. It analyzes each model's suggestions to identify valuable insights, alternative approaches, and potential concerns, then blends them with Claude's own analysis into a cohesive implementation plan. The final plan is shown to the user with concise attributions to each model.
## When to Use It
- The user explicitly requests consultation with multiple AI models before an implementation plan.
- Prompt includes phrases like "consult the council", "ask other models", or "get perspectives from other AIs".
- You need diverse reasoning for high-stakes architecture, strategy, or policy decisions.
- You want to surface alternative approaches or concerns that a single model might miss.
- You require an attribution-rich final plan showing contributions from each model.
## Quick Start
- Step 1: Confirm you want a council consultation and capture the user prompt.
- Step 2: Run `python3 scripts/query_llms.py '<your prompt>'` to fetch perspectives from ChatGPT and Gemini.
- Step 3: Synthesize insights and present the adapted plan with model attributions.
## Best Practices
- Require an explicit user request to trigger a council consultation, and capture the question verbatim.
- Ensure API keys and model configuration exist in a `.env` file; provide clear setup steps if missing.
- Document which models are queried and in what order to keep expectations clear.
- Review each model's input/output for biases, conflicts, or gaps, then reconcile in the synthesis.
- Present the final plan with inline attributions to each model and your own analysis.
## Example Use Cases
- Architect a real-time data pipeline after council consultation.
- Compare cloud-native vs on-prem data storage strategies with council input.
- Draft an AI governance framework by aggregating perspectives from multiple models.
- Plan a scalable microservices architecture with cross-model insights.
- Define an encryption and security policy after council consultation.