# continuous-learning

```shell
npx machina-cli add skill a5c-ai/babysitter/continuous-learning --openclaw
```

## Overview

A continuous learning pipeline adapted from the Everything Claude Code methodology. It automatically extracts patterns from development sessions, evaluates them with confidence scoring, and converts high-quality patterns into reusable skills.
## Learning Pipeline

### 1. Pattern Extraction
- Analyze code changes and implementation approaches
- Identify recurring patterns and conventions
- Extract architectural decisions with rationale
- Capture error resolution strategies
- Record tool usage patterns
- Assign initial confidence scores (0-100)
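The output of this extraction step could be represented as a simple record. The field names below are illustrative, not part of the methodology; the only constraint taken from the text is the 0-100 initial confidence score.

```python
from dataclasses import dataclass


@dataclass
class Pattern:
    """One pattern extracted from a development session (hypothetical shape)."""
    name: str        # kebab-case identifier
    kind: str        # e.g. "convention", "architecture", "error-resolution", "tool-usage"
    rationale: str   # why the pattern was applied
    confidence: int  # initial confidence score, 0-100

    def __post_init__(self):
        if not 0 <= self.confidence <= 100:
            raise ValueError("confidence must be in 0-100")


# Example: a pattern captured from an error-resolution session
p = Pattern("retry-with-backoff", "error-resolution",
            "Transient API failures resolved by exponential backoff",
            confidence=60)
```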
### 2. Pattern Evaluation
- Score generalizability (0-100): cross-project applicability
- Score reliability (0-100): validation frequency
- Score impact (0-100): outcome improvement
- Composite: generalizability * 0.3 + reliability * 0.4 + impact * 0.3
- Filter out patterns scoring below the confidence threshold (default: 75)
- Merge similar patterns
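The evaluation step above can be sketched directly from the stated weights: reliability carries the most weight (0.4), and anything below the threshold is dropped. The example pattern scores are invented for illustration.

```python
def composite_score(generalizability: float, reliability: float, impact: float) -> float:
    """Weighted composite per the evaluation step: reliability counts most."""
    return generalizability * 0.3 + reliability * 0.4 + impact * 0.3


def filter_patterns(scored: dict, threshold: float = 75.0) -> dict:
    """Keep only patterns whose composite meets the confidence threshold."""
    kept = {}
    for name, (g, r, i) in scored.items():
        score = composite_score(g, r, i)
        if score >= threshold:
            kept[name] = score
    return kept


patterns = {
    "retry-with-backoff": (80, 90, 70),  # composite 81.0 -> kept
    "one-off-workaround": (40, 60, 50),  # composite 51.0 -> filtered out
}
print(filter_patterns(patterns))  # {'retry-with-backoff': 81.0}
```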
### 3. Skill Creation
- Convert high-confidence patterns to SKILL.md format
- Write clear instructions with phases
- Include when-to-use and when-not-to-use sections
- Add usage examples and agent references
- Follow kebab-case naming convention
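A minimal renderer for this step might look like the sketch below. The exact SKILL.md schema is not specified here, so the section set (Overview, When to Use, When Not to Use) mirrors only what the bullets above call for.

```python
def render_skill_md(name: str, overview: str,
                    when_to_use: list, when_not_to_use: list) -> str:
    """Render a minimal SKILL.md body with the sections this step calls for."""
    lines = [f"# {name}", "", "## Overview", overview, "", "## When to Use"]
    lines += [f"- {item}" for item in when_to_use]
    lines += ["", "## When Not to Use"]
    lines += [f"- {item}" for item in when_not_to_use]
    return "\n".join(lines) + "\n"


# Example: skill document for a hypothetical high-confidence pattern
doc = render_skill_md(
    "retry-with-backoff",
    "Retry transient failures with exponential backoff.",
    ["Flaky network calls"],
    ["Deterministic logic errors"],
)
```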
### 4. Organization
- Categorize: language-specific, domain, business, meta
- Resolve naming conflicts
- Update indexes and manifests
- Create dependency graphs
### 5. Version and Export
- Assign semantic versions by maturity
- Create portable export bundles
- Include usage examples and test cases
- Generate import instructions
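"Semantic versions by maturity" could map maturity tiers to starting versions, as in the sketch below. The tier names and version numbers are assumptions for illustration; the source only says versions track maturity.

```python
# Hypothetical maturity tiers -> starting semantic versions
MATURITY_VERSIONS = {
    "experimental": "0.1.0",  # newly extracted, few validations
    "validated":    "0.5.0",  # confirmed across several sessions
    "stable":       "1.0.0",  # cross-project, production-ready
}


def initial_version(maturity: str) -> str:
    """Assign a starting semver for a skill based on its maturity tier."""
    try:
        return MATURITY_VERSIONS[maturity]
    except KeyError:
        raise ValueError(f"unknown maturity level: {maturity}")
```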
## Strategic Compaction
- Analyze context token usage
- Identify low-value context for compression
- Archive completed phases to memory files
- Calculate token savings per suggestion
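One way to compute "token savings per suggestion" is to weight each context section's token count by how much of it is still needed, then rank the suggestions. The data shape and the per-section value scores are assumptions for illustration.

```python
def token_savings(sections: dict) -> list:
    """Estimate tokens saved by compressing or archiving each context section.

    `sections` maps a section name to (token_count, value), where value in [0, 1]
    is the fraction of the section still needed. Low-value sections yield the
    largest savings and are suggested first.
    """
    suggestions = [(name, round(tokens * (1 - value)))
                   for name, (tokens, value) in sections.items()]
    return sorted(suggestions, key=lambda s: s[1], reverse=True)


ctx = {
    "completed-phase-log": (4000, 0.1),  # mostly archivable -> ~3600 tokens saved
    "active-task-notes":   (1500, 0.9),  # still needed -> ~150 tokens saved
}
```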
## When to Use
- End of development sessions
- After significant code reviews
- After debugging sessions
- Periodically during long sessions
## Agents Used

- continuous-learning (custom agent for this skill)
- context-engineering (compaction analysis)
## Source

[View on GitHub](https://github.com/a5c-ai/babysitter/blob/main/plugins/babysitter/skills/babysit/process/methodologies/everything-claude-code/skills/continuous-learning/SKILL.md)

## Overview
Continuous learning automates pattern extraction from development sessions, evaluates them with confidence scoring, and converts high-quality patterns into reusable SKILL.md skills. It then organizes, versions, and exports portable bundles for cross-project reuse, following the Everything Claude Code methodology.
## How This Skill Works
The pipeline analyzes code changes to extract recurring patterns and decisions, then scores them for generalizability, reliability, and impact. A composite score (generalizability * 0.3 + reliability * 0.4 + impact * 0.3) filters out weak patterns (threshold 75) and merges duplicates. High-confidence patterns become SKILL.md-format skills, which are organized, versioned, and exported as portable bundles, with strategic compaction to save tokens.
## When to Use It
- End of development sessions
- After significant code reviews
- After debugging sessions
- Periodically during long sessions
- During major refactors
## Quick Start
- Step 1: Run Pattern Extraction on a development session and assign initial confidence scores
- Step 2: Evaluate patterns, compute the composite score, merge duplicates, and filter below 75
- Step 3: Convert high-confidence patterns into SKILL.md, add usage examples, and export a portable bundle
## Best Practices
- Capture context for each pattern (scope, rationale, tools used)
- Compute and review the composite confidence before merging similar patterns
- Write SKILL.md with phases, when-to-use/when-not-to-use, and examples
- Categorize patterns by language/domain and maintain indexes/manifests
- Version patterns semantically and generate portable export bundles
## Example Use Cases
- Convert a recurring error-resolution approach into a reusable skill
- Extract a cross-project API integration pattern and standardize it as a skill
- Document architecture decisions and rationale as reusable SKILL.md content
- Create portable export bundles for a multi-repo environment
- Apply context-engineering to compress context tokens during long sessions