convo-analysis
npx machina-cli add skill Vibe-Builders/claude-prime/convo-analysis --openclaw
Conversation Analysis
Analyze human-AI conversation flows to identify behavioral patterns, compliance gaps, and improvement opportunities.
Role
You are a conversation analyst. Your job is to EXTRACT and ANALYZE the conversation flow - not to judge, implement, or fix anything.
Context Assessment
Before starting, analyze conversation history:
- Empty conversation? → Report "Nothing to analyze"
- Start analysis from first message to this command trigger
Core Principles
- Balanced analysis - Evaluate BOTH human and AI behavior equally; neither party is presumed at fault
- Chronological preservation - Show conversation as it happened, turn by turn
- Behavioral focus - What happened, not blame assignment
- Contribution assessment - Quantify each party's contribution to any miscommunication
- Aggressive sanitization - Replace all specifics with placeholders
- Rule mapping - Check against orchestrator directives, _apply-all rules, command workflows, Holy Trinity
Analysis Process
1. Extract
Gather all messages from session start to this command:
- User messages (requests, clarifications, approvals)
- AI responses (reasoning, actions taken)
- Tools used
- Skills activated
- Agents spawned
- Commands invoked
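The extraction step above can be sketched as a small data model. This is a hypothetical schema (the `Turn` fields and `extract` helper are illustrative, not part of the skill itself) showing one way to flatten a session into ordered, analyzable records:

```python
from dataclasses import dataclass, field

@dataclass
class Turn:
    """One conversation turn captured during extraction (hypothetical schema)."""
    index: int
    role: str                                  # "user" or "ai"
    kind: str                                  # e.g. "direct-request", "clarification", "response"
    tools: list = field(default_factory=list)  # tool names only, e.g. ["Read", "Grep"]
    summary: str = ""                          # sanitized one-line description

def extract(raw_messages):
    """Flatten raw session messages into ordered Turn records, preserving chronology."""
    return [
        Turn(index=i, role=m["role"], kind=m.get("kind", "unknown"),
             tools=m.get("tools", []), summary=m.get("summary", ""))
        for i, m in enumerate(raw_messages)
    ]

session = extract([
    {"role": "user", "kind": "direct-request", "summary": "asked for [FEATURE_A]"},
    {"role": "ai", "kind": "response", "tools": ["Read"], "summary": "read [FILE_1]"},
])
print(len(session))  # 2
```

Keeping tool names but only sanitized summaries at extraction time makes the chronological-preservation and sanitization principles compatible from the start.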
2. Analyze
- Classify user messages - Identify type: direct-request, meta-request, mixed-content, clarification, feedback
- Scope determination - Is this analyzing THIS session or a REFERENCED session?
- Behavior mapping - Check both parties against expected patterns
- Contribution scoring - Assign percentages to understand root cause
- Improvement targeting - Identify specific fixes for user, AI, and system
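Contribution scoring could be as simple as normalizing weighted findings per party into percentages. The weighting scheme below is an assumption for illustration; the skill does not prescribe a specific formula:

```python
def contribution_scores(findings):
    """
    findings: list of (party, weight) tuples, e.g. [("user", 2), ("ai", 1)].
    Each tuple is one evidence-backed finding attributed to a party.
    Returns each party's percentage contribution to the miscommunication.
    """
    totals = {}
    for party, weight in findings:
        totals[party] = totals.get(party, 0) + weight
    grand = sum(totals.values()) or 1  # avoid division by zero on empty input
    return {p: round(100 * w / grand) for p, w in totals.items()}

print(contribution_scores([("user", 2), ("ai", 1), ("system", 1)]))
# {'user': 50, 'ai': 25, 'system': 25}
```

Because scores are derived from enumerated findings rather than assigned directly, the output stays evidence-based, which guards against the "100% blame to one party" pitfall listed later.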
3. Output
- Write report to docs/session-reports/{YYYYMMDDHHMMSS}-<short-title>.md
- Display brief summary to user
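The report path convention can be generated mechanically. A minimal sketch, assuming the short title should be slugified to lowercase-hyphenated form (the slug rule is an assumption; the skill only specifies the timestamp prefix):

```python
from datetime import datetime
import re

def report_path(short_title):
    """Build docs/session-reports/{YYYYMMDDHHMMSS}-<short-title>.md"""
    stamp = datetime.now().strftime("%Y%m%d%H%M%S")  # e.g. 20250101120000
    slug = re.sub(r"[^a-z0-9]+", "-", short_title.lower()).strip("-")
    return f"docs/session-reports/{stamp}-{slug}.md"

print(report_path("Rule Compliance Review"))
# e.g. docs/session-reports/20250101120000-rule-compliance-review.md
```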
Sanitization Rules
Replace with placeholders:
- File paths → [FILE_1], [FILE_2]
- Feature names → [FEATURE_A], [FEATURE_B]
- API endpoints → [ENDPOINT_X]
- Variable/function names → [CODE_REF]
- Business terms → [DOMAIN_TERM]
- Code blocks → [CODE_BLOCK]
Keep as-is:
- Tool names (Read, Grep, Task, etc.)
- Skill names (/cook, etc.)
- Agent names (the-mechanic, etc.)
- Generic actions (search, read, write, edit)
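The replace/keep rules above amount to an ordered pattern pass. The regexes below are illustrative assumptions (real sanitization would need project-tuned patterns and per-item numbering like [FILE_1], [FILE_2]), but they show the shape of the pass and why ordering matters — endpoints must be matched before generic path-like tokens:

```python
import re

# Ordered (pattern, placeholder) pairs — hypothetical, project-specific in practice.
PATTERNS = [
    (re.compile("`{3}.*?`{3}", re.S), "[CODE_BLOCK]"),          # fenced code blocks first
    (re.compile(r"\b(?:GET|POST)\s+/\S+"), "[ENDPOINT_X]"),     # endpoints before paths
    (re.compile(r"(?:/[\w.\-]+){2,}"), "[FILE_1]"),             # multi-segment path tokens
]

def sanitize(text):
    """Apply each placeholder rule in order; tool names like Read pass through untouched."""
    for pattern, placeholder in PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

print(sanitize("AI used Read on /src/app/billing.py"))
# AI used Read on [FILE_1]
```

Note that keep-as-is items (tool, skill, and agent names) need no explicit allow-list here: they survive simply because no replacement pattern matches them.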
Analysis Checklist
Output Format
Guardrails
Holy Trinity:
- YAGNI: Only analyze - don't suggest fixes inline
- KISS: Simple extraction, delegate complexity to skill
- DRY: Reuse existing references and templates
Communication:
- Report what happened, not who's "wrong"
- Neutral behavioral observations
- No blame assignment
Constraints:
- NO code snippets in output
- NO business logic exposure
- NO file paths or domain-specific terms
- Report must be shareable without editing
- Aggressive sanitization: replace specifics with placeholders
Common Pitfalls
| Pitfall | How to Avoid |
|---|---|
| Focusing only on AI rule violations | Always analyze user message clarity first |
| Analyzing quoted/pasted content as primary subject | Identify meta-requests and scope correctly |
| Assigning 100% blame to one party | Use contribution percentages based on evidence |
| Missing buried requests in mixed content | Parse each message for multiple intents |
| Skipping rule loading | MUST read all rules BEFORE analysis - see rules-checklist.md |
| Success bias (completed = good) | Check HOW it completed, not just that it completed |
| Surface-level analysis | Check principles (delegation, YAGNI), not just workflow steps |
Focus Area (Optional)
<focus>$ARGUMENTS</focus>
If provided, focus analysis on specific aspect (e.g., "rule compliance", "request clarity", "workflow gates").
Source
git clone https://github.com/Vibe-Builders/claude-prime
Skill definition: .claude/skills/convo-analysis/SKILL.md
Overview
Convo-analysis examines human-AI interactions to identify behavioral patterns, compliance gaps, and root causes of miscommunication. It extracts the conversation flow from session start to the trigger, preserves chronology, and replaces sensitive details with placeholders to produce safe, shareable reports.
How This Skill Works
As a conversation analyst, the skill collects all messages from session start to the trigger, classifies each user message and AI reply, and maps contributions to identify patterns. It applies aggressive sanitization to replace specifics with placeholders and generates a structured report plus a brief user-facing summary, stored in a standardized docs location for sharing.
When to Use It
- Debug AI compliance issues in a live or recorded conversation
- Review human request clarity and identify ambiguity in multi-turn sessions
- Identify root causes of human-AI miscommunication and misinterpretation
- Produce sanitized session reports for stakeholders without exposing sensitive data
- Audit alignment with orchestrator directives and rule workflows across sessions
Quick Start
- Step 1: Trigger convo-analysis on the target session to begin extraction
- Step 2: Let it classify messages, map behavior, and compute contribution scores with placeholders
- Step 3: Review the sanitized report and brief summary, then share if needed
Best Practices
- Always analyze from session start to the command trigger to preserve context
- Maintain strict chronological order of messages and responses
- Focus on behavioral observations rather than blame; quantify contributions
- Apply aggressive sanitization to all sensitive details using placeholders
- Use the generated report as input for governance and training, not for direct fixes
Example Use Cases
- Audit of a live chat to debug AI compliance gaps, with all PII replaced by placeholders
- Clarity review in a multi-turn user request where [USER_INPUT] and [AI_RESPONSE] are sanitized
- Root-cause analysis of miscommunication in a failed instruction session, summarized safely
- Escalation-flow review across agents with anonymized data for training purposes
- Cross-session pattern extraction to inform policy updates in [DOMAIN_TERM]-related conversations