
Install:
npx machina-cli add skill Sh3rd3n/megazord/cortex --openclaw

CORTEX: Adaptive Reasoning Engine

CORTEX classifies every task through Cynefin domains (Clear/Complicated/Complex/Chaotic) and applies appropriate mental models from untools.co. It is a protocol reference consumed by the executor during task execution when cortex_enabled: true.

CORTEX is NOT a manually invocable skill. It activates automatically as part of the executor flow before each task when the config flag is enabled.
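The flag lives in the executor configuration. The exact config shape depends on the executor; only the `cortex_enabled` key comes from the text above, and the surrounding structure is a hypothetical sketch:

```yaml
# Hypothetical executor config fragment -- only the cortex_enabled key
# is documented; the enclosing structure is illustrative.
executor:
  cortex_enabled: true
```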

Classification Heuristic Matrix

Before every task, assess the domain using these concrete heuristics. Each signal is evaluated independently.

| Signal | Clear | Complicated | Complex | Chaotic |
| --- | --- | --- | --- | --- |
| LOC estimate | <50 | 50-300 | >300 or new architecture | N/A (broken state) |
| Files affected | 1-2 | 3-5 | 6+ or new module structure | N/A |
| New APIs/interfaces | 0 | 1-2 internal | External API or new public interface | N/A |
| Module scope | Same module | 2-3 modules | 4+ modules or cross-system | N/A |
| Pattern familiarity | Well-known, done before | 2+ valid patterns to choose from | No clear pattern, novel territory | N/A |
| Existing test coverage | Tests exist, minor modification | Tests need modification | No existing test patterns | Tests failing on unrelated code |
| Side effects | None | Localized, predictable | Distributed, hard to trace | Cascading failures |
| External dependencies | None new | Internal packages | External services, 3rd-party APIs | External service down/corrupt |

Classification Algorithm

  1. Evaluate each signal independently
  2. Task domain = HIGHEST domain any signal triggers
  3. Chaotic ONLY triggered by crisis signals (build broken, tests failing on unrelated code, external service down, data corruption, security incident) -- never by quantitative thresholds alone
  4. When signals conflict, explain the override: CORTEX: Complicated -- 30 LOC but new external API integration elevates from Clear
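The algorithm above can be sketched in a few lines. This is an illustrative rendering, not the skill's actual implementation; the `Domain` enum, signal names, and `classify` signature are all assumptions:

```python
# Hypothetical sketch of the CORTEX classification algorithm:
# the task domain is the highest domain any signal triggers, but
# Chaotic is reachable only through explicit crisis signals.
from enum import IntEnum

class Domain(IntEnum):
    CLEAR = 0
    COMPLICATED = 1
    COMPLEX = 2
    CHAOTIC = 3

def classify(signal_domains: dict[str, Domain],
             crisis_signals: list[str]) -> Domain:
    # Crisis signals (build broken, unrelated test failures,
    # external service down, data corruption) alone trigger Chaotic.
    if crisis_signals:
        return Domain.CHAOTIC
    # Quantitative signals are capped at Complex -- thresholds
    # alone never produce a Chaotic classification.
    highest = max(signal_domains.values(), default=Domain.CLEAR)
    return min(highest, Domain.COMPLEX)

# The conflicting-signals example from rule 4: small LOC, but a
# new interface elevates the task above Clear.
signals = {
    "loc_estimate": Domain.CLEAR,       # ~30 LOC
    "new_apis": Domain.COMPLICATED,     # 1-2 new internal interfaces
}
print(classify(signals, crisis_signals=[]).name)  # COMPLICATED
```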

Classification Output Format

  • Clear: CORTEX -> Clear -- {2+ concrete signals} (NO visible output beyond this internal classification -- user sees nothing)
  • Complicated: CORTEX -> Complicated | {signals} | Applying: {framework list}
  • Complex: CORTEX -> Complex | {signals} | Applying: {framework list}
  • Chaotic: CORTEX: Chaotic -- {description}. Requesting user input. (STOP execution)

For Complicated+ tasks, the classification line is the <summary> of a collapsible <details> block.

Post-Classification Protocol

  • Clear: Execute directly. No framework output. No CORTEX output visible to user.
  • Complicated: Output challenge block inside <details>, then execute.
  • Complex: Output challenge block + complex analysis block inside <details>, select approach, proceed.
  • Chaotic: STOP and request user input.

Challenge Block Template (Complicated+ Tasks)

For every Complicated or Complex task, produce this block:

<details>
<summary>CORTEX -> {domain} | {signals} | Applying: {framework list}</summary>

<challenge domain="{complicated|complex}">
INVERSION (Pre-mortem):
  1. "This fails when {specific scenario 1}"
  2. "This fails when {specific scenario 2}"
  3. "This fails when {specific scenario 3}"

ASSUMPTIONS (Ladder of Inference):
  For each key assumption:
  - Data: {observable fact}
  - Interpretation: {what we read into the data}
  - Assumption: {inference we're making}
  - Status: {verified (checked in code/docs) | unverified (guessed)}

SECOND-ORDER (Consequence trace):
  If we do X:
  -> First-order: {immediate effect}
  -> Second-order: {what follows from that}
  -> Third-order: {what follows from that} (if relevant)

COUNTER: {strongest argument against this approach}
VERDICT: proceed | modify | reject
</challenge>

</details>

Challenge Block Rules

  • INVERSION: Exactly 3 pre-mortem failure scenarios, specific not vague. "This fails when..." not "This might fail"
  • ASSUMPTIONS: Trace chain from data through interpretation to assumption. Mark verified/unverified. Catches assumption jumps where conclusions skip evidence rungs.
  • SECOND-ORDER: Follow consequence chain at least 2 steps. "If we do X, then Y happens, and then Z follows." Surfaces cascading effects invisible at first glance.
  • COUNTER: Genuine attack, not a softball
  • VERDICT: Honest assessment. If modify: state changes. If reject: explain, propose alternative.
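Two of these rules are mechanically checkable: exactly three pre-mortem scenarios and a verdict from the allowed set. A hedged sketch of such a lint pass (the `validate_challenge` function is hypothetical, not part of the skill):

```python
import re

# Hypothetical lint pass over a challenge block: checks the two
# mechanically verifiable rules (3 scenarios, valid verdict).
def validate_challenge(block: str) -> list[str]:
    problems = []
    scenarios = re.findall(r'"This fails when .+?"', block)
    if len(scenarios) != 3:
        problems.append(
            f"expected exactly 3 pre-mortem scenarios, found {len(scenarios)}")
    verdict = re.search(r"VERDICT:\s*(\w+)", block)
    if not verdict or verdict.group(1) not in {"proceed", "modify", "reject"}:
        problems.append("VERDICT must be proceed | modify | reject")
    return problems
```

Specificity ("not a softball", "specific not vague") still requires human judgment; the lint only catches structural drift.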

Complex Analysis Block Template (Complex Tasks Only)

When a task is classified as Complex, produce this block AFTER the challenge block and BEFORE execution:

<details>
<summary>CORTEX Complex Analysis</summary>

<complex-analysis>
FIRST-PRINCIPLES:
  Irreducible truths about this problem:
  1. {fundamental truth}
  2. {fundamental truth}
  3. {fundamental truth}

  Decomposition method: {Five Whys | Socratic Questioning}
  {Show the chain of questions that reached these fundamentals}

ABSTRACTION-LADDERING:
  WHY (move up): {What's the real problem behind the stated problem?}
  REFRAMED: {The problem restated at a higher abstraction level}
  HOW (move down): {What specific approaches address the reframed problem?}

ALTERNATIVES:
  1. {approach} -- tradeoffs: {pro/con}
  2. {approach} -- tradeoffs: {pro/con}
  3. {approach} -- tradeoffs: {pro/con}

SELECTED: {N} -- {rationale with evidence}
</complex-analysis>

</details>

Complex Analysis Rules

  • FIRST-PRINCIPLES: Decompose to fundamentals -- what are the irreducible truths about this problem? Strip away conventions and assumptions to find bedrock facts. Use Five Whys or Socratic Questioning to reach them, and show the chain of questions.
  • ABSTRACTION-LADDERING: Move up (WHY) to find the real problem behind the stated problem, then reframe it, then move down (HOW) to find specific approaches that address the reframed problem. This prevents solving the wrong problem.
  • ALTERNATIVES: Generate at least 3 approaches with explicit tradeoffs (pro/con for each).
  • SELECTED: Document selection rationale with evidence, not just preference.

Iceberg Analysis Template (Recurring-Area Tasks)

Trigger condition: Task touches a module/area that was flagged as problematic in a prior SUMMARY.md (mentioned in "Deviations from Plan", "Issues Encountered", or "Deferred Issues" sections -- NOT merely listed as modified). This distinguishes recurring problems from normal development iteration.

Skip Iceberg Analysis on fresh tasks with no prior history.

<details>
<summary>CORTEX Iceberg Analysis: {area}</summary>

<iceberg area="{module/area name}">
EVENT: {What happened -- the surface symptom}
PATTERN: {Has this happened before? Evidence from SUMMARY.md, git history, or prior tasks}
STRUCTURE: {What system dynamics cause this pattern? Dependencies, coupling, tech debt, missing abstractions}
MENTAL-MODEL: {What assumption about this area keeps producing the pattern?}
LEVERAGE: {Where to intervene for a lasting fix, not just symptom treatment}
</iceberg>

</details>

Iceberg Model Rules

  • EVENT: Describe the surface symptom that triggered analysis
  • PATTERN: Provide evidence of recurrence (prior SUMMARY references, git log patterns, task history)
  • STRUCTURE: Identify systemic causes -- dependencies, coupling, tech debt, missing abstractions
  • MENTAL-MODEL: Surface the assumption that perpetuates the pattern
  • LEVERAGE: Identify the highest-leverage intervention point for a lasting fix, not just symptom treatment

Anti-Patterns

  • Vague classification: Always cite at least 2 concrete signals. "This seems complicated" is never acceptable.
  • Framework theater: Frameworks must produce insights, not boilerplate. If a framework section reads like a template with blanks filled in, it is not adding value.
  • Over-classifying: Do not inflate complexity to justify elaborate output. Simple tasks should stay Clear even if the executor could produce impressive-looking analysis.
  • Under-classifying: Do not downplay complexity to avoid work. If multiple signals point to Complicated, do not classify as Clear.
  • CORTEX noise on Simple/Clear tasks: Absolutely NO visible output. The user decision is explicit -- Simple/Clear tasks get no CORTEX output shown to the user.

Complementary Frameworks

When they fit naturally, CORTEX may supplement its core frameworks with:

  • 5 Whys -- During First Principles decomposition as a method for reaching irreducible truths
  • MECE -- During Issue Tree construction to ensure mutually exclusive, collectively exhaustive decomposition

No other additions -- keep the set focused.

Source

git clone https://github.com/Sh3rd3n/megazord
The skill file lives at skills/cortex/SKILL.md in the repository.

Overview

CORTEX classifies every task through Cynefin domains (Clear, Complicated, Complex, Chaotic) and applies mental models from untools.co. It acts as a protocol reference consumed by the executor and automatically activates before each task when cortex_enabled is true. It is designed as an internal guidance layer, not a manually invocable skill.

How This Skill Works

Before each task, CORTEX assesses signals such as LOC estimate, files affected, new APIs/interfaces, module scope, pattern familiarity, existing test coverage, side effects, and external dependencies, and assigns the highest domain any signal triggers. It then applies the corresponding mental models and, where needed, outputs a challenge block or stops for user input (in Chaotic cases). For Clear tasks the classification stays internal and the user sees no CORTEX output; for Complicated and Complex tasks the analysis appears in a collapsible block that guides the executor's next steps.

When to Use It

  • New public API integration across multiple modules with substantial LOC (>300).
  • External dependencies are changing and multiple valid patterns might apply.
  • Signals conflict about the best approach while test coverage is incomplete.
  • Cross-system changes affecting 4+ modules with potential side effects.
  • Crisis conditions such as a broken build, unrelated test failures, or an external service outage.

Quick Start

  1. Ensure cortex_enabled is true in the executor configuration.
  2. Let CORTEX classify before the task; review the detected domain and guidance.
  3. If the domain is Clear, proceed; if Complicated or Complex, review the challenge block; if Chaotic, stop and request input.

Best Practices

  • Gather and maintain up-to-date signal data before starting a task (LOC, files, APIs, scope, tests, dependencies).
  • Follow the domain output: Clear for direct proceed, Complicated/Complex require the challenge block, Chaotic halts and requests user input.
  • Provide clear override explanations when domain classification redirects task flow.
  • Maintain test coverage visibility and verify stability of external dependencies before proceeding.
  • Review potential cascading effects and cross-system impacts before coding.

Example Use Cases

  • Adding a new public API across 3 modules with changes in multiple interfaces.
  • Large feature rewrite (>300 LOC) with 1+ new interfaces across modules.
  • Coordinating changes across services with several dependencies (cross-system).
  • Crisis: build breaks or an external service becomes unavailable during task execution.
  • Ambiguous signals where patterns exist but tests are sparse or failing.
