
Gastown Migration

npx machina-cli add skill auge2u/lisa-helps-ralph-loops/gastown-migration --openclaw
Files (1): SKILL.md (7.4 KB)

Gastown Migration Skill

This skill analyzes existing projects and structures them for Gastown — Steve Yegge's multi-agent workspace manager.

Overview

The migration process transforms a project into Gastown-compatible structure:

project/
├── .gt/
│   ├── memory/
│   │   ├── semantic.json      # Permanent facts (tech stack, constraints)
│   │   ├── episodic.json      # Decisions with TTL (~30 days)
│   │   └── procedural.json    # Learned patterns
│   ├── beads/
│   │   └── gt-*.json          # Individual work items
│   └── convoys/
│       └── convoy-*.json      # Bundled work assignments
└── [existing project files]

Commands

| Command | Purpose |
| --- | --- |
| analyze | Scan project, generate semantic memory |
| beads | Extract work items from TODOs, issues, PRDs |
| convoy | Bundle related beads into work assignments |
| migrate | Full migration (analyze + beads + convoy) |

Phase 1: Analyze (Semantic Memory)

Discovery Procedure

Scan these locations in order:

  1. Package files (for tech stack detection)

    • package.json — Node.js runtime, framework, dependencies
    • Cargo.toml — Rust projects
    • go.mod — Go projects
    • requirements.txt, pyproject.toml — Python projects
    • Gemfile — Ruby projects
  2. Configuration files (for service detection)

    • .firebaserc, firebase.json — Firebase
    • wrangler.toml — Cloudflare Workers
    • vercel.json — Vercel deployment
    • docker-compose.yml, Dockerfile — Containerization
    • *.env.example — Environment variables hint
  3. Documentation (for project understanding)

    • README.md — Project description, setup instructions
    • docs/ — Architecture docs, ADRs, PRDs
    • CONTRIBUTING.md — Development workflow
    • CHANGELOG.md — Project history
  4. Source structure (for codebase understanding)

    • src/, lib/, app/ — Main code directories
    • tests/, __tests__/, spec/ — Test directories
    • schemas/, migrations/ — Database schemas
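As a rough illustration, step 1 of the scan (package-file detection) could look like the following sketch. The manifest-to-runtime mapping mirrors the list above, but the function and constant names are ours, not the skill's:

```python
from pathlib import Path

# Maps manifest files to the runtime they indicate (per the list above).
MANIFESTS = {
    "package.json": "Node.js",
    "Cargo.toml": "Rust",
    "go.mod": "Go",
    "requirements.txt": "Python",
    "pyproject.toml": "Python",
    "Gemfile": "Ruby",
}

def detect_runtimes(root="."):
    """Return the runtimes whose manifest file exists directly under root."""
    return [runtime for name, runtime in MANIFESTS.items()
            if (Path(root) / name).is_file()]
```

The same pattern extends naturally to step 2 (config files mapped to services such as Firebase or Cloudflare Workers).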

Output: semantic.json

{
  "$schema": "semantic-memory-v1",
  "project": {
    "name": "my-app",
    "type": "web-application",
    "primary_language": "TypeScript",
    "description": "A task management app for teams"
  },
  "tech_stack": {
    "runtime": "Node.js 20",
    "framework": "Next.js 14",
    "database": "Neon PostgreSQL",
    "auth": "Firebase Auth",
    "deployment": "Vercel",
    "styling": "Tailwind CSS",
    "testing": "Vitest"
  },
  "personas": [
    {"name": "Team Lead", "needs": ["assign tasks", "track progress"]},
    {"name": "Developer", "needs": ["see my tasks", "update status"]}
  ],
  "constraints": [
    "Must support offline mode",
    "GDPR compliant data handling"
  ],
  "non_goals": [
    "Mobile native app (web-only for MVP)",
    "Enterprise SSO (future phase)"
  ],
  "evidence": {
    "last_scan": "2026-01-27T10:00:00Z",
    "files_analyzed": ["package.json", "README.md", "docs/PRD.md"]
  }
}

Phase 2: Beads (Work Items)

Extraction Sources

Scan for work items in:

  1. TODO comments in source code

    • // TODO:, # TODO:, /* TODO */
    • // FIXME:, // HACK:, // XXX:
  2. GitHub Issues (if .git exists)

    • Open issues via gh issue list
    • Labels map to bead type/priority
  3. PRD documents

    • User stories with acceptance criteria
    • Feature requirements
  4. Backlog files

    • BACKLOG.md, TODO.md
    • docs/backlog/, docs/roadmap/
  5. Existing roadmap outputs

    • scopecraft/EPICS_AND_STORIES.md
    • scopecraft/OPEN_QUESTIONS.md
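A minimal sketch of source 1 (TODO-comment extraction): the marker set mirrors the list above, and the output shape follows the bead evidence fields, but the regex and function name are illustrative, not the skill's actual implementation:

```python
import re

# Match //, #, or /* comment openers followed by a TODO-style marker.
TODO_RE = re.compile(r"(?://|#|/\*)\s*(TODO|FIXME|HACK|XXX):?\s*(.*)")

def extract_todos(text, path="<unknown>"):
    """Return draft work items for each TODO-style comment in text."""
    items = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        m = TODO_RE.search(line)
        if m:
            items.append({
                "type": m.group(1),
                "title": m.group(2).rstrip("*/ ").strip(),
                "evidence": {"source": path, "line": lineno},
            })
    return items
```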

Bead Schema

{
  "$schema": "bead-v1",
  "id": "gt-abc12",
  "title": "Add user authentication",
  "type": "feature",
  "complexity": "L",
  "priority": "high",
  "status": "pending",
  "dependencies": [],
  "acceptance_criteria": [
    "User can sign up with email",
    "User can sign in with Google OAuth",
    "Session persists across page refresh"
  ],
  "evidence": {
    "source": "docs/PRD-auth.md",
    "line": 42,
    "extracted": "2026-01-27T10:00:00Z"
  },
  "metadata": {
    "epic": "User Management",
    "labels": ["auth", "security"]
  }
}

Bead Types

| Type | Description |
| --- | --- |
| feature | New functionality |
| bug | Defect fix |
| chore | Maintenance, refactoring |
| docs | Documentation |
| spike | Research/investigation |

Complexity Estimates

| Size | Description | Typical Duration |
| --- | --- | --- |
| XS | Trivial change | < 1 hour |
| S | Small task | 1-4 hours |
| M | Medium task | 1-2 days |
| L | Large task | 3-5 days |
| XL | Epic-sized | 1-2 weeks |

ID Generation

Bead IDs follow Gastown convention: gt-<5-char-alphanumeric>

Example: gt-abc12, gt-xyz99, gt-m4n5p
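A minimal sketch of the convention. Lowercase letters and digits are assumed here to match the beads_valid_ids gate (gt-[a-z0-9]{5}) under Quality Gates; the skill's actual generator may differ:

```python
import secrets
import string

# Lowercase alphanumerics, per the gt-[a-z0-9]{5} validation gate.
ALPHABET = string.ascii_lowercase + string.digits

def new_bead_id():
    """Return a random bead ID in the gt-<5-char-alphanumeric> form."""
    return "gt-" + "".join(secrets.choice(ALPHABET) for _ in range(5))
```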

Phase 3: Convoy (Work Bundles)

Convoy Creation Rules

  1. Size: 3-7 beads per convoy (optimal batch)
  2. Coherence: Related beads grouped together
  3. Dependencies: Respect bead dependency order
  4. Parallelization: Independent convoys can run concurrently

Convoy Schema

{
  "$schema": "convoy-v1",
  "id": "convoy-001",
  "name": "Authentication Sprint",
  "description": "Implement core user authentication features",
  "beads": ["gt-abc12", "gt-def34", "gt-ghi56"],
  "assigned_to": null,
  "status": "pending",
  "created": "2026-01-27T10:00:00Z",
  "metadata": {
    "epic": "User Management",
    "estimated_days": 5
  }
}

Bundling Strategies

  1. By Epic: Group all beads from same epic
  2. By Dependency Chain: Group dependent beads sequentially
  3. By Skill: Group beads requiring similar expertise
  4. By Size: Combine small beads, isolate large ones
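Strategy 1 (By Epic), combined with the size cap from the creation rules, might look like this hypothetical sketch. Field names follow the schemas in this document, but the helper itself is illustrative, and merging epics that fall below the 3-bead minimum is left out:

```python
def bundle_by_epic(beads, max_size=7):
    """Group bead IDs by epic, splitting any epic larger than max_size."""
    by_epic = {}
    for bead in beads:
        epic = bead.get("metadata", {}).get("epic", "unsorted")
        by_epic.setdefault(epic, []).append(bead["id"])
    convoys = []
    for epic, ids in by_epic.items():
        # Chunk oversized epics into convoys of at most max_size beads.
        for i in range(0, len(ids), max_size):
            convoys.append({
                "$schema": "convoy-v1",
                "id": f"convoy-{len(convoys) + 1:03d}",
                "name": epic,
                "beads": ids[i:i + max_size],
                "status": "pending",
            })
    return convoys
```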

Quality Gates

Analyze Gates

| Gate | Requirement |
| --- | --- |
| semantic_valid | semantic.json is valid JSON |
| project_identified | project.name is not null |
| tech_stack_detected | At least 2 tech_stack fields populated |
| evidence_recorded | files_analyzed has 1+ entries |

Beads Gates

| Gate | Requirement |
| --- | --- |
| beads_extracted | At least 1 bead created |
| beads_have_criteria | All beads have acceptance_criteria |
| beads_have_evidence | All beads have evidence.source |
| beads_valid_ids | All IDs match gt-[a-z0-9]{5} |

Convoy Gates

| Gate | Requirement |
| --- | --- |
| convoy_created | At least 1 convoy created |
| convoy_size_valid | All convoys have 3-7 beads |
| convoy_beads_exist | All referenced beads exist |

Validation

Run validation with:

# Python validator
python plugins/lisa-loops-memory/hooks/validate_gastown.py

# Check specific phase
python plugins/lisa-loops-memory/hooks/validate_gastown.py --phase analyze
python plugins/lisa-loops-memory/hooks/validate_gastown.py --phase beads
python plugins/lisa-loops-memory/hooks/validate_gastown.py --phase convoy
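For intuition, the beads-phase gates could be checked roughly as follows. This is a sketch of the gate table above, not the actual validate_gastown.py, which may check more or differently:

```python
import json
import re
from pathlib import Path

# ID pattern from the beads_valid_ids gate.
BEAD_ID = re.compile(r"gt-[a-z0-9]{5}")

def check_beads(root="."):
    """Return a list of gate violations for the beads phase (empty = pass)."""
    bead_dir = Path(root, ".gt", "beads")
    files = sorted(bead_dir.glob("gt-*.json")) if bead_dir.is_dir() else []
    beads = [json.loads(p.read_text()) for p in files]
    errors = []
    if not beads:
        errors.append("beads_extracted: no beads found")
    for b in beads:
        bid = b.get("id", "")
        if not b.get("acceptance_criteria"):
            errors.append(f"beads_have_criteria: {bid}")
        if not b.get("evidence", {}).get("source"):
            errors.append(f"beads_have_evidence: {bid}")
        if not BEAD_ID.fullmatch(bid):
            errors.append(f"beads_valid_ids: {bid}")
    return errors
```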

Integration with Gastown

After migration, Gastown Mayor can:

  1. Read Memory: Load .gt/memory/semantic.json for project context
  2. List Beads: Enumerate .gt/beads/*.json for available work
  3. Assign Convoys: Create/assign convoys to Polecats
  4. Track Progress: Update bead/convoy status as work completes
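Steps 1 and 2 above amount to plain JSON reads from the .gt/ tree. A sketch (the function name is an assumption, not Gastown's actual API):

```python
import json
from pathlib import Path

def load_workspace(root="."):
    """Return (semantic context, list of beads) for a migrated project."""
    gt = Path(root) / ".gt"
    context = json.loads((gt / "memory" / "semantic.json").read_text())
    bead_files = sorted((gt / "beads").glob("gt-*.json"))
    return context, [json.loads(p.read_text()) for p in bead_files]
```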

Examples

See examples/ directory for sample outputs:

  • examples/gastown/semantic.json
  • examples/gastown/beads/
  • examples/gastown/convoys/

Source

git clone https://github.com/auge2u/lisa-helps-ralph-loops.git
# Skill source: plugins/lisa-loops-memory/skills/gastown-migration/SKILL.md

Overview

The Gastown Migration skill analyzes an existing project and reorganizes it into Gastown's workspace structure. It creates a .gt folder with semantic memory, beads, and convoys to enable multi-agent collaboration, and provides commands for analyze, beads, convoy, and migrate.

How This Skill Works

The tool scans core project files (package.json, language manifests, docs, and src folders) to populate semantic.json with tech stack, constraints, and personas. It then extracts beads from TODOs, issues, and PRDs, groups related items into convoys, and writes them under project/.gt/. A full migrate runs analyze + beads + convoy to complete the Gastown setup.

When to Use It

  • Starting a new project to adopt a Gastown workflow
  • Refactoring a legacy repo into a Gastown-structured workspace
  • Migrating a monorepo into a unified Gastown project
  • Onboarding a team to multi-agent collaboration with Gastown
  • Preparing governance and memory for knowledge retention

Quick Start

  1. Run analyze from the project root to build semantic memory
  2. Run beads to extract work items from TODOs, issues, and PRDs
  3. Run convoy to bundle beads into assignments, or run migrate to perform all three phases in one pass

Best Practices

  • Run analyze after major changes to refresh semantic memory
  • Keep semantic.json, beads, and convoy data under version control
  • Standardize bead types, priorities, and naming conventions
  • Review convoys before execution to avoid conflicting work items
  • Limit and sanitize sensitive data in semantic memory to meet constraints

Example Use Cases

  • Migrate a Node.js web app to a Gastown-structured workspace
  • Structure a Python data-processing project with semantic memory
  • Convert a Rails monolith into beads and convoys for Gastown
  • Organize a multi-service microservice repo into a single Gastown project
  • Set up offline mode and GDPR-conscious memory in semantic.json
