multi-llm-advisor
Install:

npx machina-cli add skill freitasp1/claude-code-skills/multi-llm-advisor --openclaw

Multi-LLM Advisor
This skill calls Codex 5.1 Pro and Gemini 3 Pro to provide additional perspectives.
When to Activate
- Architecture decisions (new features, refactoring)
- Code review (before commits, PRs)
- Debugging (complex errors, performance issues)
- On explicit request ("second opinion", "different perspective")
Usage
Use the multi-llm-advisor skill to get architecture feedback on [topic]
Use the multi-llm-advisor skill to review this code
Use the multi-llm-advisor skill to debug [issue]
Transparency Format
Every invocation displays the following:
+==============================================================+
| MULTI-LLM ADVISOR - [MODE: ARCHITECTURE|REVIEW|DEBUG] |
+==============================================================+
| CONTEXT SENT TO LLMs: |
| - Files: [list] |
| - Question: [prompt] |
| - Tokens: ~[count] |
+--------------------------------------------------------------+
| CODEX 5.1 PRO RESPONSE: |
| [response] |
+--------------------------------------------------------------+
| GEMINI 3 PRO RESPONSE: |
| [response] |
+--------------------------------------------------------------+
| SYNTHESIS (Claude's Recommendation): |
| [combined analysis] |
+==============================================================+
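The Transparency Format above can be sketched in TypeScript. This is an illustrative rendering helper only; the type and function names are assumptions, not the actual advisor.ts implementation, and long responses are not wrapped here.

```typescript
// Hypothetical sketch of rendering the transparency box.
// All names (TransparencyReport, renderReport, etc.) are illustrative.
type AdvisorMode = "ARCHITECTURE" | "REVIEW" | "DEBUG";

interface TransparencyReport {
  mode: AdvisorMode;
  files: string[];
  question: string;
  approxTokens: number;
  codexResponse: string;
  geminiResponse: string;
  synthesis: string;
}

const WIDTH = 62; // total box width, matching the template above

// Pad a content line so the closing "|" lines up at column WIDTH.
function boxLine(text: string): string {
  return `| ${text.padEnd(WIDTH - 3)}|`;
}

function renderReport(r: TransparencyReport): string {
  const rule = "+" + "=".repeat(WIDTH - 2) + "+";
  const thin = "+" + "-".repeat(WIDTH - 2) + "+";
  return [
    rule,
    boxLine(`MULTI-LLM ADVISOR - [MODE: ${r.mode}]`),
    rule,
    boxLine("CONTEXT SENT TO LLMs:"),
    boxLine(`- Files: ${r.files.join(", ")}`),
    boxLine(`- Question: ${r.question}`),
    boxLine(`- Tokens: ~${r.approxTokens}`),
    thin,
    boxLine("CODEX 5.1 PRO RESPONSE:"),
    boxLine(r.codexResponse),
    thin,
    boxLine("GEMINI 3 PRO RESPONSE:"),
    boxLine(r.geminiResponse),
    thin,
    boxLine("SYNTHESIS (Claude's Recommendation):"),
    boxLine(r.synthesis),
    rule,
  ].join("\n");
}
```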
Prompt Templates
Architecture Mode
You are a senior software architect. Analyze this architecture decision:
CONTEXT:
{context}
QUESTION:
{question}
CURRENT STACK:
{stack}
Provide:
1. Pros/Cons of the proposed approach
2. Alternative approaches (max 2)
3. Potential risks and mitigations
4. Recommendation with reasoning
Be concise. Max 300 words.
Review Mode
You are a senior code reviewer. Review this code:
CODE:
{code}
LANGUAGE: {language}
PROJECT TYPE: {project_type}
Focus on:
1. Security vulnerabilities (OWASP Top 10)
2. Performance issues
3. Maintainability concerns
4. TypeScript/type safety (if applicable)
Format: Bullet points, max 200 words.
Debug Mode
You are a debugging expert. Analyze this issue:
ERROR/SYMPTOM:
{error}
RELEVANT CODE:
{code}
CONTEXT:
{context}
Provide:
1. Root cause analysis (most likely)
2. 2-3 diagnostic steps
3. Suggested fix with code
Be specific and actionable. Max 250 words.
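All three templates use the same `{placeholder}` convention, so interpolation can be sketched with one helper. This is an assumed implementation, not the real advisor.ts code, and the template constant is abridged to the Architecture mode:

```typescript
// Illustrative sketch of filling the mode templates shown above.
// TEMPLATES and fillTemplate are assumed names; only the Architecture
// template is reproduced here, abridged from the document.
const TEMPLATES = {
  architecture: `You are a senior software architect. Analyze this architecture decision:
CONTEXT:
{context}
QUESTION:
{question}
CURRENT STACK:
{stack}`,
  // review and debug templates follow the same {placeholder} pattern
} as const;

function fillTemplate(template: string, vars: Record<string, string>): string {
  // Replace each {key} with its value; leave unknown placeholders untouched
  // so missing context is visible in the Transparency Format output.
  return template.replace(/\{(\w+)\}/g, (match, key) =>
    key in vars ? vars[key] : match
  );
}
```

Leaving unknown placeholders intact (rather than substituting an empty string) makes a missing `{context}` or `{stack}` obvious when the prompt is displayed.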
API Configuration
Environment variables (store in .env or system env):
- OPENAI_API_KEY - For Codex 5.1 Pro
- GOOGLE_AI_API_KEY - For Gemini 3 Pro (same as gemini-image-gen)
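A fail-fast check for these variables could look like the sketch below. The variable names come from this document; the helper itself (`requireEnv`, `loadKeys`) is an illustrative assumption, not the skill's actual code:

```typescript
// Sketch of validating the required API keys at startup.
// requireEnv/loadKeys are hypothetical helper names.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing environment variable: ${name} (set it in .env or the system env)`);
  }
  return value;
}

function loadKeys(): { openai: string; google: string } {
  return {
    openai: requireEnv("OPENAI_API_KEY"),    // Codex 5.1 Pro
    google: requireEnv("GOOGLE_AI_API_KEY"), // Gemini 3 Pro
  };
}
```

Failing at startup keeps a missing key from surfacing mid-invocation, after context has already been gathered and displayed.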
Script Location
~/.claude/skills/multi-llm-advisor/advisor.ts
Hook Integration
Triggered via multi-llm-advisor-hook.ts when:
- PreToolUse: Detects architecture/review/debug keywords
- Manual: User explicitly requests second opinion
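The PreToolUse keyword detection described above could be sketched as follows. The keyword lists, mode default for explicit requests, and function name are all assumptions for illustration, not the contents of multi-llm-advisor-hook.ts:

```typescript
// Hypothetical sketch of the hook's keyword detection.
// TRIGGER_KEYWORDS and detectMode are illustrative names; the real
// hook's keyword lists are not documented here.
const TRIGGER_KEYWORDS: Record<string, RegExp> = {
  ARCHITECTURE: /\b(architecture|refactor(ing)?|design decision)\b/i,
  REVIEW: /\b(code review|review this|before commit|pull request)\b/i,
  DEBUG: /\b(debug|stack trace|performance issue|root cause)\b/i,
};

function detectMode(prompt: string): string | null {
  for (const [mode, pattern] of Object.entries(TRIGGER_KEYWORDS)) {
    if (pattern.test(prompt)) return mode;
  }
  // Explicit second-opinion requests always trigger;
  // defaulting to REVIEW here is an arbitrary illustrative choice.
  if (/second opinion|different perspective/i.test(prompt)) return "REVIEW";
  return null; // no trigger: the hook stays silent
}
```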
Real-World Example
When building the Gemini API integration at fabrikIQ.com, this skill helped decide between:
- Vertex AI (enterprise, EU region support) vs Google AI Studio (simpler)
- Streaming vs batch responses for large manufacturing datasets
The synthesis recommended Vertex AI for GDPR compliance, which proved correct.
Source
https://github.com/freitasp1/claude-code-skills/blob/main/skills/multi-llm-advisor/SKILL.md
Overview
The Multi-LLM Advisor fetches perspectives from Codex 5.1 Pro and Gemini 3 Pro to strengthen architecture decisions, code reviews, and debugging. It transparently displays all LLM calls using a standardized Transparency Format, so inputs, responses, and synthesis are traceable.
How This Skill Works
When activated by architecture decisions, code review, debugging, or an explicit second-opinion request, the skill invokes Codex 5.1 Pro and Gemini 3 Pro with mode-specific prompts (Architecture, Review, Debug). It then presents each LLM's response alongside a final recommendation labeled as the synthesis.
When to Use It
- Architecture decisions (new features, refactoring)
- Code review requests before commits or PRs
- Debugging complex errors or performance issues
- On explicit request for a second opinion or different perspective
- When triangulating input across Codex and Gemini for architectural or design choices
Quick Start
- Step 1: Trigger the skill with a prompt that mentions an architecture decision, a code review, or a debugging task
- Step 2: Provide the relevant context, files, code snippets, or architecture description
- Step 3: Review the Transparency Format output and apply the synthesis recommendation
Best Practices
- Use the Transparency Format to review inputs and outputs from all LLMs
- Include relevant files and context sent to the LLMs for accuracy
- Run Architecture, Review, and Debug modes to triangulate advice
- Keep prompts concise to respect token limits and response quality
- Validate synthesis against project constraints and team standards
Example Use Cases
- Choosing between Vertex AI and Google AI Studio for a Gemini API integration at fabrikIQ (the synthesis recommended Vertex AI for GDPR compliance)
- Deciding between streaming and batch responses for large manufacturing datasets
- Using Architecture mode to evaluate a new feature refactor with cross-LLM input
- Using Debug mode to triangulate root cause on a complex error with second opinion