Modular Code Organization
Write modular Python code with files sized for maintainability and AI-assisted development.
File Size Guidelines
| Lines | Status | Action |
|---|---|---|
| 150-500 | Optimal | Sweet spot for AI code editors and human comprehension |
| 500-1000 | Large | Look for natural split points |
| 1000-2000 | Too large | Refactor into focused modules |
| 2000+ | Critical | Must split - causes tooling issues and cognitive overload |
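The thresholds in the table can be applied mechanically with a short script. This is a sketch, not part of the skill itself; the `scan` helper and its output format are illustrative, and files under 150 lines are simply reported as "Optimal" here.

```python
from pathlib import Path
import sys

# Thresholds from the table above, checked from largest to smallest.
THRESHOLDS = [(2000, "Critical"), (1000, "Too large"), (500, "Large")]

def classify(line_count: int) -> str:
    """Return the status label from the table for a file of this length."""
    for limit, label in THRESHOLDS:
        if line_count >= limit:
            return label
    return "Optimal"

def scan(root: str = ".") -> list[tuple[str, int, str]]:
    """List every .py file under root with its line count and status,
    largest first."""
    results = []
    for path in Path(root).rglob("*.py"):
        count = len(path.read_text(encoding="utf-8",
                                   errors="ignore").splitlines())
        results.append((str(path), count, classify(count)))
    return sorted(results, key=lambda r: -r[1])

if __name__ == "__main__":
    # Print only the files that warrant attention.
    for name, count, status in scan(sys.argv[1] if len(sys.argv) > 1 else "."):
        if status != "Optimal":
            print(f"{status:>10}  {count:>6}  {name}")
```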
When to Split
Split when ANY of these apply:
- File exceeds 500 lines
- Multiple unrelated concerns in same file
- Scroll fatigue finding functions
- Tests for the file are hard to organize
- AI tools truncate or miss context
How to Split
Natural Split Points
- By domain concept: auth.py → auth/login.py, auth/tokens.py, auth/permissions.py
- By abstraction layer: Separate interface from implementation
- By data type: Group operations on related data structures
- By I/O boundary: Isolate database, API, file operations
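The domain-concept split above can be demonstrated end to end. The sketch below materializes a minimal `auth/` package in a temporary directory (the function bodies are placeholders, not from the source repo) and shows that an `__init__.py` that only re-exports the public API keeps existing `from auth import login` callers working after the split.

```python
import sys
import tempfile
import textwrap
from pathlib import Path

# Build the auth/ package from the example above as real files.
pkg = Path(tempfile.mkdtemp()) / "auth"
pkg.mkdir()
(pkg / "login.py").write_text("def login(user): return f'logged in {user}'\n")
(pkg / "tokens.py").write_text("def issue_token(user): return f'token-{user}'\n")
(pkg / "permissions.py").write_text("def has_permission(user, perm): return True\n")
(pkg / "__init__.py").write_text(textwrap.dedent("""\
    # Keep __init__.py minimal: just re-export the public API.
    from auth.login import login
    from auth.tokens import issue_token
    from auth.permissions import has_permission
"""))

# Make the package importable, then use it exactly as before the split.
sys.path.insert(0, str(pkg.parent))
from auth import login, issue_token  # callers are unchanged by the split

print(login("alice"))        # logged in alice
print(issue_token("alice"))  # token-alice
```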
Package Structure
feature/
├── __init__.py # Keep minimal, just exports
├── core.py # Main logic (under 500 lines)
├── models.py # Data structures
├── handlers.py # I/O and side effects
└── utils.py # Pure helper functions
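To illustrate what belongs in each module of that layout, here is the structure collapsed into one file, with section comments marking where each piece would live. The `Item`/`receipt` names are invented for the example; in the real package each section would be its own module.

```python
from dataclasses import dataclass
import json

# models.py — data structures only, no logic
@dataclass
class Item:
    name: str
    price: float

# utils.py — pure helper functions: deterministic, no I/O, easy to test
def total(items: list[Item]) -> float:
    return sum(i.price for i in items)

# core.py — main logic, composed from models and utils
def receipt(items: list[Item]) -> dict:
    return {"count": len(items), "total": total(items)}

# handlers.py — I/O and side effects stay at the boundary
def write_receipt(items: list[Item], path: str) -> None:
    with open(path, "w") as f:
        json.dump(receipt(items), f)

print(receipt([Item("apple", 1.5), Item("pear", 2.0)]))  # {'count': 2, 'total': 3.5}
```

Keeping the side effects in one module means everything else can be unit-tested without touching the filesystem.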
DO
- Use meaningful module names (data_storage.py, not utils2.py)
- Keep __init__.py files minimal or empty
- Group related functions together
- Isolate pure functions from side effects
- Use snake_case for module names
DON'T
- Split files arbitrarily by line count alone
- Create single-function modules
- Over-modularize into "package hell"
- Use dots or special characters in module names
- Hide dependencies with "magic" imports
Refactoring Large Files
When splitting an existing large file:
1. Identify clusters: Find groups of related functions
2. Extract incrementally: Move one cluster at a time
3. Update imports: Fix all import statements
4. Run tests: Verify nothing broke after each move
5. Document: Update any references to old locations
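One way to extract incrementally without breaking callers mid-refactor is to leave a deprecation shim in the old module. The sketch below keeps both versions in one file for illustration; `parse_record` is an invented name, and in a real split the new function would live in the new module while only the shim stayed behind.

```python
import warnings

# New home of the function (in a real split this would live in the
# extracted module, e.g. patterns/parsing.py).
def parse_record(line: str) -> dict:
    """Parse a 'key = value' line into a one-entry dict."""
    key, _, value = line.partition("=")
    return {key.strip(): value.strip()}

# Shim left behind in the old module so existing imports keep working
# until every caller is updated, then it can be deleted.
def parse_record_legacy(line: str) -> dict:
    warnings.warn(
        "parse_record moved to the parsing module",
        DeprecationWarning,
        stacklevel=2,
    )
    return parse_record(line)

print(parse_record_legacy("name = alice"))  # {'name': 'alice'}
```

Running the test suite after each extracted cluster (step 4) catches any caller the shim does not cover.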
Current Codebase Candidates
Files over 2000 lines that need attention:
- Math compute modules (scipy, mpmath, numpy) - domain-specific, may be acceptable
- patterns.py - consider splitting by pattern type
- memory_backfill.py - consider splitting by operation type
Sources
https://github.com/parcadei/Continuous-Claude-v3/blob/main/.claude/skills/modular-code/SKILL.md
Overview
Guidelines for writing modular Python code with file sizes that suit both human maintenance and AI-assisted development. It covers optimal file sizes (150-500 lines), split strategies (by domain, abstraction layer, data type, and I/O boundary), plus practical DO/DON'Ts, refactoring steps, and real-world examples.
How This Skill Works
You assess current files, identify clusters of related functions, and move them into focused modules while updating imports and tests. It emphasizes minimal __init__.py, snake_case names, and isolating pure functions from side effects.
When to Use It
- A file exceeds 500 lines
- Multiple unrelated concerns in the same file
- Tests for the file are hard to organize
- AI tools truncate or miss context
- You want to split by domain concept or I/O boundary (e.g., auth/login, tokens, permissions; isolate database, API, and file operations)
Quick Start
- Step 1: Assess file size and concerns to determine if a split is warranted
- Step 2: Identify natural split points (domain concepts, abstraction layers, data types, I/O boundaries)
- Step 3: Move clusters to new modules, fix imports, adjust tests, and iterate
Best Practices
- Use meaningful module names (e.g., data_storage.py instead of utils2.py)
- Keep __init__.py files minimal or empty
- Group related functions together
- Isolate pure functions from side effects
- Use snake_case for module names
Example Use Cases
- Splitting auth.py into auth/login.py, auth/tokens.py, auth/permissions.py
- Separate interface from implementation (abstraction layer)
- Group operations by related data structures (models.py and related modules)
- Isolate I/O boundaries by separating database, API, and file operations
- Refactor large files by identifying clusters, extracting them incrementally, updating imports, running tests after each move, and documenting references