modular-skills
npx machina-cli add skill athola/claude-night-market/modular-skills --openclaw
Modular Skills Design
Overview
This framework breaks complex skills into focused modules to keep token usage predictable and avoid monolithic files. We use progressive disclosure: starting with essentials and loading deeper technical details via @include or Load: statements only when needed. This approach prevents hitting context limits during long-running tasks.
Modular design keeps file sizes within recommended limits, typically under 150 lines. Shallow dependencies and clear boundaries simplify testing and maintenance. The hub-and-spoke model allows the project to grow without bloating primary skill files, making focused modules easier to verify in isolation and faster to parse.
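To make progressive disclosure concrete, the sketch below shows one way a loader could inline include directives on demand. It assumes a hypothetical `@include <path>` directive on its own line and is illustrative only, not the framework's actual implementation:

```python
from pathlib import Path

def resolve_includes(text: str, base: Path, max_depth: int = 3) -> str:
    """Inline `@include <path>` directives, recursing up to max_depth levels.

    A simplified sketch of progressive disclosure: the primary file stays
    small, and deeper detail is pulled in only when a directive is hit.
    """
    if max_depth == 0:
        return text
    out = []
    for line in text.splitlines():
        if line.startswith("@include "):
            target = base / line.removeprefix("@include ").strip()
            out.append(resolve_includes(target.read_text(), target.parent, max_depth - 1))
        else:
            out.append(line)
    return "\n".join(out)
```

The depth limit bounds how much detail can ever be pulled into context at once, which is the point of the pattern.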
Core Components
Three tools support modular skill development:
- skill-analyzer: checks complexity and suggests where to split code.
- token-estimator: forecasts token usage and suggests optimizations.
- module_validator: verifies that structure complies with project standards.
Design Principles
We design skills around single responsibility and loose coupling. Each module focuses on one task, minimizing dependencies to keep the architecture cohesive. Clear boundaries and well-defined interfaces prevent changes in one module from breaking others. This follows Anthropic's Agent Skills best practices: provide a high-level overview first, then surface details as needed to maintain context efficiency.
Module Ownership (IMPORTANT)
Deprecated: skills/shared/modules/ directories. This pattern caused orphaned references when shared modules were updated or removed.
Current pattern: Each skill owns its modules at skills/<skill-name>/modules/. When multiple skills need the same content, the primary owner holds the module and others reference it via relative path (e.g., ../skill-authoring/modules/anti-rationalization.md). The validator flags any remaining skills/shared/ directories.
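A minimal check for this ownership rule can be sketched as follows; this is an illustrative pass, not the actual validator logic:

```python
from pathlib import Path

def find_deprecated_shared_dirs(root: Path) -> list[Path]:
    """Return any skills/shared/ directories under root.

    Under the current ownership model these should not exist; each skill
    owns its modules at skills/<skill-name>/modules/ instead.
    """
    return [p for p in root.rglob("shared") if p.is_dir() and p.parent.name == "skills"]
```

An empty result means the repository has fully migrated away from the deprecated shared-module layout.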
Quick Start
Skill Analysis
Analyze modularity using scripts/analyze.py. Developers can set a custom line-count threshold to identify files that need splitting.
python scripts/analyze.py --threshold 100
From Python, use analyze_skill from abstract.skill_tools.
Token Usage Planning
Estimate token consumption to verify your skill stays within budget. Run this from the skill directory:
python scripts/tokens.py
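For a rough sanity check without running the script, a chars-to-tokens heuristic can approximate usage. The ~4 characters per token ratio below is an assumption for illustration, not necessarily the method scripts/tokens.py uses:

```python
from pathlib import Path

CHARS_PER_TOKEN = 4  # rough heuristic; real tokenizers vary by content

def estimate_tokens(skill_dir: Path) -> int:
    """Approximate the token count across all markdown files in a skill."""
    total_chars = sum(len(p.read_text()) for p in skill_dir.rglob("*.md"))
    return total_chars // CHARS_PER_TOKEN
```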
Module Validation
Check for structure and pattern compliance before deployment.
python scripts/abstract_validator.py --scan
Workflow and Tasks
Start by assessing complexity with skill_analyzer.py. If a skill exceeds 150 lines, break it into focused modules following the patterns in ../../docs/examples/modular-skills/. Use token_estimator.py to check efficiency and abstract_validator.py to verify the final structure. This iterative process maintains module maintainability and token efficiency.
Quality Checks
Identify modules needing attention by checking line counts and missing Table of Contents. Any module over 100 lines requires a TOC after the frontmatter to aid navigation.
# Find modules exceeding 100 lines
find modules -name "*.md" -exec wc -l {} + | awk '$1 > 100'
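The same check, extended with the missing-TOC rule, can be sketched in Python (assuming TOCs use a `## Table of Contents` heading, as in the TOC Template section):

```python
from pathlib import Path

def modules_needing_toc(modules_dir: Path, max_lines: int = 100) -> list[Path]:
    """Return modules over max_lines that lack a Table of Contents heading."""
    flagged = []
    for path in sorted(modules_dir.glob("*.md")):
        text = path.read_text()
        if len(text.splitlines()) > max_lines and "## Table of Contents" not in text:
            flagged.append(path)
    return flagged
```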
Standards Compliance
Our standards prioritize concrete examples and a consistent voice. Always provide actual commands in Quick Start sections instead of abstract descriptions. Use third-person perspective (e.g., "the project", "developers") rather than "you" or "your". Each code example should be followed by a validation command. For discoverability, descriptions must include at least five specific trigger phrases.
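As an illustration of the trigger-phrase rule, a hypothetical helper could check a description against a curated trigger list; both the list and the matching logic here are assumptions, not the project's actual discoverability check:

```python
def has_enough_triggers(description: str, known_triggers: list[str], minimum: int = 5) -> bool:
    """Check that a skill description contains at least `minimum` trigger phrases.

    `known_triggers` is a hypothetical curated list; the real validation
    criteria may differ.
    """
    hits = [t for t in known_triggers if t.lower() in description.lower()]
    return len(hits) >= minimum
```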
TOC Template
## Table of Contents
- [Section Name](#section-name)
- [Examples](#examples)
- [Troubleshooting](#troubleshooting)
Resources
Shared Modules: Cross-Skill Patterns
Standard patterns for triggers, enforcement language, and anti-rationalization:
- Trigger Patterns: See trigger-patterns.md
- Enforcement Language: See enforcement-language.md
- Anti-Rationalization: See anti-rationalization.md
Skill-Specific Modules
Detailed guides for implementation and maintenance:
- Enforcement Patterns: See modules/enforcement-patterns.md
- Core Workflow: See modules/core-workflow.md
- Implementation Patterns: See modules/implementation-patterns.md
- Migration Guide: See modules/antipatterns-and-migration.md
- Design Philosophy: See modules/design-philosophy.md
- Troubleshooting: See modules/troubleshooting.md
- Optimization Techniques: See modules/optimization-techniques.md for reducing large skill file sizes through externalization, consolidation, and progressive loading
Tools and Examples
- Tools: skill_analyzer.py, token_estimator.py, and abstract_validator.py in ../../scripts/.
- Examples: See ../../docs/examples/modular-skills/ for reference implementations.
Source
git clone https://github.com/athola/claude-night-market
The skill file lives at plugins/abstract/skills/modular-skills/SKILL.md within the repository.
Overview
Modular Skills Design breaks complex skills into focused modules to keep token usage predictable and avoid monolithic files. It uses progressive disclosure, loading deeper technical details via @include or Load: statements only when needed, and maintains primary files under ~150 lines. The hub-and-spoke pattern enables isolated testing and faster parsing as the project grows.
How This Skill Works
Skills are decomposed into modules owned by the skill at skills/<skill-name>/modules/, with deeper details loaded on demand using include or Load statements. The workflow is guided by three tools—skill-analyzer, token-estimator, and module_validator—to enforce complexity controls, token budgets, and structural conformance; shared content is discouraged and validators flag remaining skills/shared references.
When to Use It
- When creating a new skill expected to exceed ~150 lines and you need to avoid monoliths
- When planning a new architecture or refactoring to a hub-and-spoke structure
- When aiming to keep token usage predictable and within context limits
- When assessing modular boundaries and ensuring single-responsibility modules
- When validating structure and ownership using the modular pattern
Quick Start
- Step 1: python scripts/analyze.py --threshold 100
- Step 2: python scripts/tokens.py
- Step 3: python scripts/abstract_validator.py --scan
Best Practices
- Enforce single responsibility for each module to minimize dependencies
- Keep primary skill files under ~150 lines; split as needed
- Load deeper details only when required using Include/Load patterns
- Adopt the hub-and-spoke ownership model and use relative paths for shared content
- Run skill-analyzer, token-estimator, and module_validator during development
Example Use Cases
- Split a long skill into skills/<skill-name>/modules to reduce size
- Reference common content via relative paths like ../skill-authoring/modules/...
- Analyze modularity with scripts/analyze.py to identify candidates for splitting
- Estimate token usage with scripts/tokens.py to verify the skill stays within budget
- Validate structure with scripts/abstract_validator.py --scan before deployment