Confidence Check
Install: `npx machina-cli add skill Microck/ordinary-claude-skills/confidence-check --openclaw`
Purpose
Prevents wrong-direction execution by assessing confidence BEFORE starting implementation.
Requirement: ≥90% confidence to proceed with implementation.
Test Results (2025-10-21):
- Precision: 1.000 (no false positives)
- Recall: 1.000 (no false negatives)
- 8/8 test cases passed
When to Use
Use this skill BEFORE implementing any task to ensure:
- No duplicate implementations exist
- Architecture compliance verified
- Official documentation reviewed
- Working OSS implementations found
- Root cause properly identified
Confidence Assessment Criteria
Calculate confidence score (0.0 - 1.0) based on 5 checks:
1. No Duplicate Implementations? (25%)
Check: Search codebase for existing functionality
# Use Grep to search for similar functions
# Use Glob to find related modules
✅ Pass if no duplicates found
❌ Fail if a similar implementation exists
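As a sketch of what this first check does mechanically (the function name and fixture below are illustrative, not part of the skill's confidence.ts), a duplicate scan just walks the source tree and greps each file for the symbol you are about to implement:

```typescript
import { mkdtempSync, readdirSync, readFileSync, statSync, writeFileSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";

// Recursively search files under `root` for a pattern; any hit means a
// possible duplicate implementation already exists.
function findDuplicates(root: string, pattern: RegExp, exts = [".ts", ".js"]): string[] {
  const hits: string[] = [];
  const walk = (dir: string): void => {
    for (const entry of readdirSync(dir)) {
      const full = join(dir, entry);
      if (statSync(full).isDirectory()) {
        if (entry !== "node_modules" && entry !== ".git") walk(full);
      } else if (exts.some((e) => full.endsWith(e)) && pattern.test(readFileSync(full, "utf8"))) {
        hits.push(full);
      }
    }
  };
  walk(root);
  return hits;
}

// Demo against a throwaway fixture directory containing one prior implementation:
const fixture = mkdtempSync(join(tmpdir(), "dup-check-"));
writeFileSync(join(fixture, "existing.ts"), "export function parseConfig() {}\n");
const hits = findDuplicates(fixture, /function\s+parseConfig/);
// Check 1 contributes its 25% weight only when no duplicates are found.
const check1Score = hits.length === 0 ? 0.25 : 0;
```

In an agent session the same effect comes from Grep and Glob tool calls rather than this script; the point is that the check is a concrete search, not a guess.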
2. Architecture Compliance? (25%)
Check: Verify tech stack alignment
- Read CLAUDE.md and PLANNING.md
- Confirm existing patterns are used
- Avoid reinventing existing solutions
✅ Pass if it uses the existing tech stack (e.g., Supabase, UV, pytest)
❌ Fail if it introduces new dependencies unnecessarily
3. Official Documentation Verified? (20%)
Check: Review official docs before implementation
- Use Context7 MCP for official docs
- Use WebFetch for documentation URLs
- Verify API compatibility
✅ Pass if official docs reviewed
❌ Fail if relying on assumptions
4. Working OSS Implementations Referenced? (15%)
Check: Find proven implementations
- Use Tavily MCP or WebSearch
- Search GitHub for examples
- Verify working code samples
✅ Pass if an OSS reference found
❌ Fail if no working examples
5. Root Cause Identified? (15%)
Check: Understand the actual problem
- Analyze error messages
- Check logs and stack traces
- Identify underlying issue
✅ Pass if the root cause is clear
❌ Fail if only symptoms are understood
Confidence Score Calculation
Total = Check1 (25%) + Check2 (25%) + Check3 (20%) + Check4 (15%) + Check5 (15%)
If Total >= 0.90: ✅ Proceed with implementation
If Total >= 0.70: ⚠️ Present alternatives, ask questions
If Total < 0.70: ❌ STOP - Request more context
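The scoring rule above can be sketched directly in TypeScript. This is a minimal illustration, assuming each check resolves to a simple pass/fail boolean; the type and function names are my own, not the skill's actual API:

```typescript
// One boolean per check, in the order defined above.
type Checks = {
  noDuplicates: boolean;          // Check 1 (25%)
  architectureCompliant: boolean; // Check 2 (25%)
  docsVerified: boolean;          // Check 3 (20%)
  ossReferenced: boolean;         // Check 4 (15%)
  rootCauseIdentified: boolean;   // Check 5 (15%)
};

const WEIGHTS: Record<keyof Checks, number> = {
  noDuplicates: 0.25,
  architectureCompliant: 0.25,
  docsVerified: 0.2,
  ossReferenced: 0.15,
  rootCauseIdentified: 0.15,
};

// Weighted sum of the passed checks, 0.0 - 1.0.
function confidenceScore(checks: Checks): number {
  return (Object.keys(WEIGHTS) as (keyof Checks)[]).reduce(
    (sum, k) => sum + (checks[k] ? WEIGHTS[k] : 0),
    0,
  );
}

// Map the score onto the three decision bands.
function decision(score: number): "proceed" | "ask" | "stop" {
  if (score >= 0.9) return "proceed";
  if (score >= 0.7) return "ask";
  return "stop";
}
```

Note that with these weights, failing any single check drops the score to 0.75-0.85, which already lands in the "present alternatives, ask questions" band; only a clean sweep reaches the ≥0.90 threshold.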
Output Format
📋 Confidence Checks:
✅ No duplicate implementations found
✅ Uses existing tech stack
✅ Official documentation verified
✅ Working OSS implementation found
✅ Root cause identified
📊 Confidence: 1.00 (100%)
✅ High confidence - Proceeding to implementation
Implementation Details
The TypeScript implementation is available in confidence.ts for reference, containing:
- confidenceCheck(context) - Main assessment function
- Detailed check implementations
- Context interface definitions
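A plausible shape for that context and the assessment function might look like the following. Every field and type name here is an assumption for illustration; consult confidence.ts for the actual definitions:

```typescript
// Hypothetical context shape: evidence gathered by the five checks.
interface CheckContext {
  duplicateSearchHits: string[];   // Grep/Glob results for similar code
  usesExistingStack: boolean;      // aligns with CLAUDE.md / PLANNING.md
  officialDocsReviewed: boolean;   // Context7 MCP / WebFetch consulted
  ossReferenceUrl?: string;        // link to a working OSS example
  rootCauseSummary?: string;       // one-line statement of the real problem
}

// Hypothetical assessment: same weights and 0.90 threshold as above.
function confidenceCheck(ctx: CheckContext): { score: number; proceed: boolean } {
  const score =
    (ctx.duplicateSearchHits.length === 0 ? 0.25 : 0) +
    (ctx.usesExistingStack ? 0.25 : 0) +
    (ctx.officialDocsReviewed ? 0.2 : 0) +
    (ctx.ossReferenceUrl ? 0.15 : 0) +
    (ctx.rootCauseSummary ? 0.15 : 0);
  return { score, proceed: score >= 0.9 };
}
```

The useful property of a context object like this is that it forces each check to be backed by evidence (a hit list, a URL, a summary) rather than a bare yes/no.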
ROI
Token Savings: Spend 100-200 tokens on confidence check to save 5,000-50,000 tokens on wrong-direction work.
Success Rate: 100% precision and recall in production testing.
Source
View on GitHub: https://github.com/Microck/ordinary-claude-skills/blob/main/skills_all/confidence-check/SKILL.md
Overview
Confidence Check is a pre-implementation assessment that must reach ≥90% confidence before any work begins. It checks for duplicate implementations, verifies architecture compliance, confirms official documentation has been reviewed, looks for working OSS references, and requires root-cause identification, all to prevent wrong-direction effort.
How This Skill Works
The skill runs five checks (duplicate search, architecture alignment, docs verification, OSS references, and root-cause clarity) to compute a confidence score from 0.0 to 1.0. If the total score is ≥0.9, you proceed; lower scores trigger warnings or request more context.
When to Use It
- Before starting any new implementation task to ensure readiness.
- When there might be duplicate functionality in the codebase.
- When validating architecture compliance and existing tech patterns.
- When official documentation and API compatibility must be verified.
- When root-cause analysis and actionable debugging paths are needed.
Quick Start
- Step 1: Use Grep and Glob to scan for existing implementations.
- Step 2: Verify architecture alignment against CLAUDE.md and PLANNING.md.
- Step 3: Review official docs via WebFetch and locate OSS examples; assess root cause.
Best Practices
- Search the codebase with Grep for similar functions before coding.
- Cross-check CLAUDE.md and PLANNING.md for alignment.
- Review official docs with WebFetch and corroborate API compatibility.
- Look for working OSS implementations as references with WebSearch.
- Document the root cause findings and keep the confidence score visible.
Example Use Cases
- A new feature module with no duplicates found in the repository.
- Refactoring a component only after confirming architecture compatibility.
- Official docs reviewed and no breaking changes detected.
- An OSS reference demonstrating a working approach before implementation.
- Root-cause analysis completed for an observed bug prior to coding a fix.