# tiling-tree

Install:

```bash
npx machina-cli add skill oaustegard/claude-skills/tiling-tree --openclaw
```
Implements the MIT Synthetic Neurobiology tiling tree method: recursively partition a problem space into non-overlapping, collectively exhaustive subsets until reaching actionable leaf ideas, then evaluate those leaves.
## Core Concept
The method's power comes from MECE splits forcing exploration of unfamiliar territory. A split is only valid when you can state precisely what each branch excludes — if you can't, the criterion is too vague and branches will overlap.
Key insight from the source method: always look for the "third option" that falls outside an obvious binary split. The bloodstream-secretion approach to neural recording only emerged because "wired vs. wireless" was defined precisely enough to reveal it covered neither case.
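The exclusion requirement above can be made concrete. The sketch below is purely illustrative (the `Branch` type and the exclusion strings are hypothetical, not the script's actual data model); it shows how stating what each branch excludes is what surfaces the "third option" beyond wired vs. wireless:

```python
from dataclasses import dataclass

@dataclass
class Branch:
    label: str
    excludes: str  # explicit statement of what this branch does NOT cover

# Hypothetical split of "how can we record neural activity?"
split = [
    Branch("wired", excludes="any recorder with no physical tether to the subject"),
    Branch("wireless", excludes="any recorder requiring a physical tether"),
    Branch("carrier-mediated", excludes="any self-contained device, tethered or not; "
           "covers recorders transported by a biological medium such as the bloodstream"),
]

# A split is only valid when every branch can state what it excludes.
assert all(b.excludes for b in split)
```

Writing the third branch's exclusion forces you to notice that "wired vs. wireless" assumed a self-contained device, which is exactly where the bloodstream-secretion idea hides.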
## When to Use
- "What are all the ways we could solve X?"
- "Apply the tiling tree method to Y"
- "Exhaustively map the solution space for Z"
- Any request for MECE decomposition of a problem domain
## Setup

Requires the orchestrating-agents skill to be installed. Load it first:
```python
import sys
sys.path.insert(0, '/mnt/skills/user/orchestrating-agents/scripts')
sys.path.insert(0, '/home/claude')  # for muninn_utils shim if needed
import claude_client as cc

# Apply the API key shim if credentials aren't in the ANTHROPIC_API_KEY env var:
# from muninn_utils.orchestrating_agents_shim import patch_orchestrating_agents
# patch_orchestrating_agents(cc)
```
## Running the Tiling Tree
```bash
# Basic usage
python3 /mnt/skills/user/tiling-tree/scripts/tiling_tree.py "Your problem here"

# With options
python3 /mnt/skills/user/tiling-tree/scripts/tiling_tree.py \
  "How can we record neural activity?" \
  --depth 3 \
  --criteria "impact,novelty,feasibility" \
  --output /mnt/user-data/outputs/neural_recording_tree.md
```
## Parameters
| Parameter | Default | Notes |
|---|---|---|
| `problem` | required | Natural-language problem statement |
| `--depth` | 2 | Max recursion depth. Depth 2 ≈ 16 leaves, depth 3 ≈ 64 leaves |
| `--criteria` | `impact,novelty,feasibility` | Comma-separated evaluation dimensions |
| `--output` | `tiling_tree.md` | Output markdown path |
Depth guidance: Start with depth 2 to validate the problem framing. Increase to 3 only when the domain genuinely warrants it — depth 3 generates ~64 leaves and ~40 API calls.
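The leaf counts quoted above can be sanity-checked with simple arithmetic, assuming a branching factor of 4 per split (consistent with "depth 2 ≈ 16 leaves"); total API-call counts depend on orchestration details like retries and evaluation, so only leaves are computed here:

```python
# Leaf count grows geometrically with depth.
def leaf_count(depth: int, branching: int = 4) -> int:
    return branching ** depth

print(leaf_count(2))  # 16
print(leaf_count(3))  # 64
```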
## Architecture
- Orchestrator (this script): builds the tree skeleton, dispatches parallel split jobs per level, merges results, detects gaps
- Branch agents (`invoke_parallel`): each receives one node to split and returns MECE branches with explicit exclusion statements
- Evaluator (`invoke_claude`): a single agent scores all leaves for cross-leaf consistency
Parallel splitting happens level-by-level (not node-by-node), so a depth-2 tree makes only 2 API round-trips for the splitting phase regardless of branching factor.
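The level-by-level scheme can be sketched as follows. This is a minimal illustration, not the script's implementation; the real `invoke_parallel` comes from orchestrating-agents, so a stand-in stub is used here:

```python
# Stand-in for orchestrating-agents' invoke_parallel: the real helper
# fans a batch of jobs out to concurrent branch agents in one round-trip.
def invoke_parallel(jobs):
    return [f"branches of {j}" for j in jobs]

def split_by_level(root, depth, branching=4):
    frontier = [root]
    for _ in range(depth):
        # One batched round-trip per level, regardless of how many
        # nodes the frontier currently holds.
        results = invoke_parallel(frontier)
        frontier = [f"{r}[{i}]" for r in results for i in range(branching)]
    return frontier  # the leaves

leaves = split_by_level("neural recording", depth=2)
print(len(leaves))  # 16 leaves after 2 batched round-trips
```

Batching the whole frontier per level is why the splitting phase costs `depth` round-trips rather than one per node.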
## Output
A markdown file containing:
- Full tree diagram with split criteria and evaluation scores at leaves
- Ranked leaf table sorted by overall score
## JSON Parsing Note
Branch agents return JSON, but Claude frequently wraps it in markdown fences despite instructions. The script handles this with `_parse_json()`. When issue #312 is resolved and `parse_json_response()` is added to orchestrating-agents, update the import accordingly.
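The fence-stripping involved looks roughly like the sketch below. This is an illustrative stand-in, not the script's actual `_parse_json()`, which may handle more edge cases:

```python
import json
import re

def parse_fenced_json(text: str):
    """Parse JSON that may arrive wrapped in a markdown code fence."""
    # Grab the body of a ``` or ```json fence if present; otherwise
    # fall back to treating the whole response as JSON.
    match = re.search(r"```(?:json)?\s*(.*?)\s*```", text, re.DOTALL)
    payload = match.group(1) if match else text.strip()
    return json.loads(payload)

parse_fenced_json('```json\n{"branches": ["wired", "wireless"]}\n```')
# -> {'branches': ['wired', 'wireless']}
```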
## Interpreting Results
Good trees have:
- Split criteria that are definitions, not questions ("energy source type" not "is it renewable?")
- Leaf exclusions that confirm non-overlap
- A "surprising" branch — something you wouldn't have thought of without the tree
If all leaves feel obvious, the split criteria were too coarse. Redo the tree with more precise definitions at the branch level where it went flat.
## Source

[View on GitHub](https://github.com/oaustegard/claude-skills/blob/main/tiling-tree/SKILL.md)

## Overview
tiling-tree partitions a problem into MECE (Mutually Exclusive, Collectively Exhaustive) subsets recursively using parallel subagents, then evaluates leaf ideas against specified criteria. This approach helps reveal all viable approaches beyond binary splits and surfaces the strongest options for action.
## How This Skill Works
An orchestrator builds a tree skeleton and dispatches parallel split jobs per level. Branch agents generate MECE branches with explicit exclusion statements, ensuring non-overlapping subsets, while an evaluator scores leaves for cross-leaf consistency. Depth control limits recursion and the final output is a markdown file with the full tree and a ranked leaf table.
## When to Use It
- What are all the ways we could solve X?
- Apply the tiling tree method to Y
- Exhaustively map the solution space for Z
- MECE decomposition of a problem domain
- Tile the solution space to reveal third options beyond obvious binary splits
## Quick Start
- Step 1: Ensure orchestrating-agents is installed and loaded.
- Step 2: Run the tiling-tree script with your natural-language problem, e.g., "Your problem here".
- Step 3: Review the Markdown output with the full tree, splits, and leaf scores.
## Best Practices
- Define precise MECE criteria for every split so branches are non-overlapping.
- Start with depth 2 to validate problem framing; increase to depth 3 only if the domain truly warrants it.
- Require explicit exclusion statements for each branch to ensure MECE validity.
- Run parallel splitting level-by-level to minimize API round-trips.
- Use the evaluator's scores to check cross-leaf consistency and prune weak leaves.
## Example Use Cases
- Decompose a product design problem into MECE design options and rank by feasibility and impact.
- Exhaustively map approaches for recording neural activity (wired, wireless, and intermediate methods) and compare criteria.
- MECE breakdown of a marketing strategy across channels, audiences, and messaging.
- Explore software architecture patterns for a new system, scoring scalability, maintainability, and cost.
- Break down regulatory compliance tasks into non-overlapping categories and prioritize actions.