# Gastown Migration Skill

Install:

```shell
npx machina-cli add skill auge2u/lisa-helps-ralph-loops/gastown-migration --openclaw
```

This skill analyzes existing projects and structures them for Gastown, Steve Yegge's multi-agent workspace manager.
## Overview

The migration process transforms a project into a Gastown-compatible structure:

```
project/
├── .gt/
│   ├── memory/
│   │   ├── semantic.json      # Permanent facts (tech stack, constraints)
│   │   ├── episodic.json      # Decisions with TTL (~30 days)
│   │   └── procedural.json    # Learned patterns
│   ├── beads/
│   │   └── gt-*.json          # Individual work items
│   └── convoys/
│       └── convoy-*.json      # Bundled work assignments
└── [existing project files]
```
## Commands

| Command | Purpose |
|---|---|
| `analyze` | Scan project, generate semantic memory |
| `beads` | Extract work items from TODOs, issues, PRDs |
| `convoy` | Bundle related beads into work assignments |
| `migrate` | Full migration (analyze + beads + convoy) |
## Phase 1: Analyze (Semantic Memory)

### Discovery Procedure

Scan these locations in order:

1. **Package files** (for tech stack detection)
   - `package.json` — Node.js runtime, framework, dependencies
   - `Cargo.toml` — Rust projects
   - `go.mod` — Go projects
   - `requirements.txt`, `pyproject.toml` — Python projects
   - `Gemfile` — Ruby projects
2. **Configuration files** (for service detection)
   - `.firebaserc`, `firebase.json` — Firebase
   - `wrangler.toml` — Cloudflare Workers
   - `vercel.json` — Vercel deployment
   - `docker-compose.yml`, `Dockerfile` — Containerization
   - `*.env.example` — Environment variables hint
3. **Documentation** (for project understanding)
   - `README.md` — Project description, setup instructions
   - `docs/` — Architecture docs, ADRs, PRDs
   - `CONTRIBUTING.md` — Development workflow
   - `CHANGELOG.md` — Project history
4. **Source structure** (for codebase understanding)
   - `src/`, `lib/`, `app/` — Main code directories
   - `tests/`, `__tests__/`, `spec/` — Test directories
   - `schemas/`, `migrations/` — Database schemas
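The package-file step of the scan can be sketched in a few lines. This is an illustrative sketch, not the skill's actual implementation: the `detect_tech_stack` helper and its return shape are assumptions; the file-to-runtime mapping comes from the list above.

```python
from pathlib import Path

# Package file -> inferred runtime/language, per the discovery list above.
PACKAGE_MARKERS = {
    "package.json": "Node.js",
    "Cargo.toml": "Rust",
    "go.mod": "Go",
    "requirements.txt": "Python",
    "pyproject.toml": "Python",
    "Gemfile": "Ruby",
}

def detect_tech_stack(project_root: str) -> dict:
    """Report which package markers exist at the root and what they imply."""
    root = Path(project_root)
    found = [name for name in PACKAGE_MARKERS if (root / name).is_file()]
    runtimes = sorted({PACKAGE_MARKERS[name] for name in found})
    return {"runtimes": runtimes, "markers": found}
```

A real scan would also parse the files it finds (e.g. read `dependencies` out of `package.json`) to fill in the framework and database fields of `semantic.json`.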
### Output: `semantic.json`

```json
{
  "$schema": "semantic-memory-v1",
  "project": {
    "name": "my-app",
    "type": "web-application",
    "primary_language": "TypeScript",
    "description": "A task management app for teams"
  },
  "tech_stack": {
    "runtime": "Node.js 20",
    "framework": "Next.js 14",
    "database": "Neon PostgreSQL",
    "auth": "Firebase Auth",
    "deployment": "Vercel",
    "styling": "Tailwind CSS",
    "testing": "Vitest"
  },
  "personas": [
    {"name": "Team Lead", "needs": ["assign tasks", "track progress"]},
    {"name": "Developer", "needs": ["see my tasks", "update status"]}
  ],
  "constraints": [
    "Must support offline mode",
    "GDPR compliant data handling"
  ],
  "non_goals": [
    "Mobile native app (web-only for MVP)",
    "Enterprise SSO (future phase)"
  ],
  "evidence": {
    "last_scan": "2026-01-27T10:00:00Z",
    "files_analyzed": ["package.json", "README.md", "docs/PRD.md"]
  }
}
```
## Phase 2: Beads (Work Items)

### Extraction Sources

Scan for work items in:

1. **TODO comments** in source code
   - `// TODO:`, `# TODO:`, `/* TODO */`
   - `// FIXME:`, `// HACK:`, `// XXX:`
2. **GitHub Issues** (if `.git` exists)
   - Open issues via `gh issue list`
   - Labels map to bead type/priority
3. **PRD documents**
   - User stories with acceptance criteria
   - Feature requirements
4. **Backlog files**
   - `BACKLOG.md`, `TODO.md`
   - `docs/backlog/`, `docs/roadmap/`
5. **Existing roadmap outputs**
   - `scopecraft/EPICS_AND_STORIES.md`
   - `scopecraft/OPEN_QUESTIONS.md`
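The TODO-comment source can be sketched with a single regex over each line. This is a hedged illustration, not the skill's code: the `extract_todos` helper and its marker-to-type mapping are assumptions; the comment markers are the ones listed above.

```python
import re

# Matches the markers above (TODO, FIXME, HACK, XXX) in //, #, or /* */ comments.
TODO_RE = re.compile(r"(?://|#|/\*)\s*(TODO|FIXME|HACK|XXX):?\s*(.*?)(?:\*/)?\s*$")

def extract_todos(source: str, path: str) -> list:
    """Turn comment markers into candidate beads with file/line evidence."""
    items = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        m = TODO_RE.search(line)
        if m:
            marker, title = m.group(1), m.group(2).strip()
            items.append({
                "title": title,
                # Illustrative mapping: FIXME -> bug, everything else -> chore.
                "type": {"FIXME": "bug"}.get(marker, "chore"),
                "evidence": {"source": path, "line": lineno},
            })
    return items
```

The recorded `evidence.source` and `evidence.line` feed directly into the bead schema below, so each extracted item stays traceable to its origin.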
### Bead Schema

```json
{
  "$schema": "bead-v1",
  "id": "gt-abc12",
  "title": "Add user authentication",
  "type": "feature",
  "complexity": "L",
  "priority": "high",
  "status": "pending",
  "dependencies": [],
  "acceptance_criteria": [
    "User can sign up with email",
    "User can sign in with Google OAuth",
    "Session persists across page refresh"
  ],
  "evidence": {
    "source": "docs/PRD-auth.md",
    "line": 42,
    "extracted": "2026-01-27T10:00:00Z"
  },
  "metadata": {
    "epic": "User Management",
    "labels": ["auth", "security"]
  }
}
```
### Bead Types

| Type | Description |
|---|---|
| `feature` | New functionality |
| `bug` | Defect fix |
| `chore` | Maintenance, refactoring |
| `docs` | Documentation |
| `spike` | Research/investigation |
### Complexity Estimates

| Size | Description | Typical Duration |
|---|---|---|
| XS | Trivial change | < 1 hour |
| S | Small task | 1-4 hours |
| M | Medium task | 1-2 days |
| L | Large task | 3-5 days |
| XL | Epic-sized | 1-2 weeks |
### ID Generation

Bead IDs follow the Gastown convention: `gt-<5-char-alphanumeric>`

Examples: `gt-abc12`, `gt-xyz99`, `gt-m4n5p`
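A generator for this convention is a one-liner plus a collision check. The `new_bead_id` helper below is an illustrative sketch, not part of the Gastown toolchain; the character set is assumed to be lowercase alphanumerics to match the `gt-[a-z0-9]{5}` gate later in this document.

```python
import re
import secrets
import string

ALPHABET = string.ascii_lowercase + string.digits
ID_PATTERN = re.compile(r"^gt-[a-z0-9]{5}$")

def new_bead_id(existing=None) -> str:
    """Generate a gt-xxxxx ID, retrying on collision with known IDs."""
    existing = existing or set()
    while True:
        bead_id = "gt-" + "".join(secrets.choice(ALPHABET) for _ in range(5))
        if bead_id not in existing:
            return bead_id
```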
## Phase 3: Convoy (Work Bundles)

### Convoy Creation Rules

- **Size**: 3-7 beads per convoy (optimal batch)
- **Coherence**: Related beads grouped together
- **Dependencies**: Respect bead dependency order
- **Parallelization**: Independent convoys can run concurrently
### Convoy Schema

```json
{
  "$schema": "convoy-v1",
  "id": "convoy-001",
  "name": "Authentication Sprint",
  "description": "Implement core user authentication features",
  "beads": ["gt-abc12", "gt-def34", "gt-ghi56"],
  "assigned_to": null,
  "status": "pending",
  "created": "2026-01-27T10:00:00Z",
  "metadata": {
    "epic": "User Management",
    "estimated_days": 5
  }
}
```
### Bundling Strategies

- **By Epic**: Group all beads from the same epic
- **By Dependency Chain**: Group dependent beads sequentially
- **By Skill**: Group beads requiring similar expertise
- **By Size**: Combine small beads, isolate large ones
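The "by epic" strategy combined with the size rule can be sketched as follows. This is an assumed implementation, not the skill's: `bundle_by_epic` is an illustrative name, and the splitting policy (chunk groups at 7) is one possible reading of the 3-7 rule.

```python
from itertools import count

MAX_SIZE = 7  # upper bound from the convoy creation rules above

def bundle_by_epic(beads: list) -> list:
    """Group beads by metadata.epic, splitting any group larger than MAX_SIZE."""
    groups = {}
    for bead in beads:
        epic = bead.get("metadata", {}).get("epic", "unassigned")
        groups.setdefault(epic, []).append(bead["id"])
    convoys, seq = [], count(1)
    for epic, ids in groups.items():
        for i in range(0, len(ids), MAX_SIZE):
            convoys.append({
                "$schema": "convoy-v1",
                "id": f"convoy-{next(seq):03d}",
                "name": epic,
                "beads": ids[i:i + MAX_SIZE],
                "status": "pending",
            })
    return convoys
```

A fuller implementation would also merge trailing chunks smaller than 3 into a neighbor and topologically order convoys by bead dependencies.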
## Quality Gates

### Analyze Gates

| Gate | Requirement |
|---|---|
| `semantic_valid` | `semantic.json` is valid JSON |
| `project_identified` | `project.name` is not null |
| `tech_stack_detected` | At least 2 `tech_stack` fields populated |
| `evidence_recorded` | `files_analyzed` has 1+ entries |
### Beads Gates

| Gate | Requirement |
|---|---|
| `beads_extracted` | At least 1 bead created |
| `beads_have_criteria` | All beads have `acceptance_criteria` |
| `beads_have_evidence` | All beads have `evidence.source` |
| `beads_valid_ids` | All IDs match `gt-[a-z0-9]{5}` |
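The per-bead gates above translate directly into a small check. This sketch is not the project's `validate_gastown.py`; the `check_bead` helper is an illustrative assumption that returns the names of any gates a bead fails.

```python
import re

ID_RE = re.compile(r"^gt-[a-z0-9]{5}$")  # beads_valid_ids pattern from the table

def check_bead(bead: dict) -> list:
    """Return the names of beads gates this bead fails (empty list = passes)."""
    failures = []
    if not ID_RE.match(bead.get("id", "")):
        failures.append("beads_valid_ids")
    if not bead.get("acceptance_criteria"):
        failures.append("beads_have_criteria")
    if not bead.get("evidence", {}).get("source"):
        failures.append("beads_have_evidence")
    return failures
```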
### Convoy Gates

| Gate | Requirement |
|---|---|
| `convoy_created` | At least 1 convoy created |
| `convoy_size_valid` | All convoys have 3-7 beads |
| `convoy_beads_exist` | All referenced beads exist |
## Validation

Run validation with:

```shell
# Python validator
python plugins/lisa-loops-memory/hooks/validate_gastown.py

# Check a specific phase
python plugins/lisa-loops-memory/hooks/validate_gastown.py --phase analyze
python plugins/lisa-loops-memory/hooks/validate_gastown.py --phase beads
python plugins/lisa-loops-memory/hooks/validate_gastown.py --phase convoy
```
## Integration with Gastown

After migration, the Gastown Mayor can:

- **Read Memory**: Load `.gt/memory/semantic.json` for project context
- **List Beads**: Enumerate `.gt/beads/*.json` for available work
- **Assign Convoys**: Create and assign convoys to Polecats
- **Track Progress**: Update bead/convoy status as work completes
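The read side of this integration reduces to loading the `.gt/` layout shown in the Overview. This is a minimal sketch under that assumed layout; `load_workspace` is an illustrative helper, not part of Gastown itself.

```python
import json
from pathlib import Path

def load_workspace(project_root: str) -> dict:
    """Read semantic memory, beads, and convoys from a migrated .gt/ tree."""
    gt = Path(project_root) / ".gt"
    semantic = json.loads((gt / "memory" / "semantic.json").read_text())
    beads = [json.loads(p.read_text()) for p in sorted(gt.glob("beads/gt-*.json"))]
    convoys = [json.loads(p.read_text()) for p in sorted(gt.glob("convoys/convoy-*.json"))]
    return {"semantic": semantic, "beads": beads, "convoys": convoys}
```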
## Examples

See the `examples/` directory for sample outputs:

- `examples/gastown/semantic.json`
- `examples/gastown/beads/`
- `examples/gastown/convoys/`
## Source

View on GitHub: https://github.com/auge2u/lisa-helps-ralph-loops/blob/main/plugins/lisa-loops-memory/skills/gastown-migration/SKILL.md

## Overview
The Gastown Migration skill analyzes an existing project and reorganizes it into Gastown's workspace structure. It creates a .gt folder with semantic memory, beads, and convoys to enable multi-agent collaboration, and provides commands for analyze, beads, convoy, and migrate.
## How This Skill Works
The tool scans core project files (package.json, language manifests, docs, and src folders) to populate semantic.json with tech stack, constraints, and personas. It then extracts beads from TODOs, issues, and PRDs, groups related items into convoys, and writes them under project/.gt/. A full migrate runs analyze + beads + convoy to complete the Gastown setup.
## When to Use It
- Starting a new project to adopt a Gastown workflow
- Refactoring a legacy repo into a Gastown-structured workspace
- Migrating a monorepo into a unified Gastown project
- Onboarding a team to multi-agent collaboration with Gastown
- Preparing governance and memory for knowledge retention
## Quick Start
- Step 1: Run analyze from the project root to build semantic memory
- Step 2: Run beads to extract work items from TODOs, issues, and PRDs
- Step 3: Run migrate to complete the Gastown structure under .gt/
## Best Practices
- Run analyze after major changes to refresh semantic memory
- Keep semantic.json, beads, and convoy data under version control
- Standardize bead types, priorities, and naming conventions
- Review convoys before execution to avoid conflicting work items
- Limit and sanitize sensitive data in semantic memory to meet constraints
## Example Use Cases
- Migrate a Node.js web app to a Gastown-structured workspace
- Structure a Python data-processing project with semantic memory
- Convert a Rails monolith into beads and convoys for Gastown
- Organize a multi-service microservice repo into a single Gastown project
- Set up offline mode and GDPR-conscious memory in semantic.json