smart-routing
npx machina-cli add skill a5c-ai/babysitter/smart-routing --openclaw

Smart Routing
Overview
Intelligent task routing using Q-Learning to select optimal execution paths. Simple tasks route to Agent Booster (WASM, <1ms, $0), medium tasks to efficient models, and complex tasks to Opus + multi-agent swarms.
When to Use
- Optimizing cost vs. quality tradeoffs for diverse task types
- When tasks range from simple transforms to complex multi-file changes
- Reducing latency for common code transformations
- Learning from routing history to improve future decisions
Routing Tiers
| Tier | Target | Latency | Cost |
|---|---|---|---|
| Agent Booster | Simple transforms (var-to-const, add-types) | <1ms | $0 |
| Medium | Standard coding tasks | ~500ms | Low |
| Complex | Multi-agent swarm coordination | 2-5s | Higher |
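The tier table above can be sketched as a simple selection function. This is a hedged illustration only: the `TaskFeatures` shape and thresholds are assumptions, not the real router's feature set.

```typescript
// Hypothetical tier selection matching the table above.
// Field names and thresholds are illustrative assumptions.
type Tier = "agent-booster" | "medium" | "complex";

interface TaskFeatures {
  filesTouched: number;
  isKnownTransform: boolean; // e.g. var-to-const, add-types
}

function selectTier(t: TaskFeatures): Tier {
  // Known single-file transforms take the WASM fast path (<1ms, $0).
  if (t.isKnownTransform && t.filesTouched <= 1) return "agent-booster";
  // Standard coding tasks go to efficient models (~500ms).
  if (t.filesTouched <= 3) return "medium";
  // Large multi-file work goes to Opus + multi-agent swarms (2-5s).
  return "complex";
}
```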
Agent Booster Transforms
- var-to-const - Variable declaration modernization
- add-types - TypeScript type annotation insertion
- add-error-handling - Try/catch wrapper insertion
- async-await - Promise chain to async/await conversion
- extract-function - Code block extraction to named functions
- add-jsdoc - Documentation generation
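To illustrate what one of these transforms does, here is a toy single-line version of var-to-const. The real transform runs in WASM and performs proper analysis; this string-rewrite sketch is only a demonstration of the before/after effect.

```typescript
// Toy sketch of the var-to-const transform's visible effect.
// The real Agent Booster transform additionally verifies the
// variable is never reassigned before rewriting it.
function varToConst(line: string): string {
  return line.replace(/^(\s*)var\b/, "$1const");
}
```

For example, `varToConst("var total = 0;")` yields `"const total = 0;"`.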
Agents Used
- agents/optimizer/ - Performance and cost optimization
- agents/architect/ - Complex task decomposition
Tool Use
Invoke via babysitter process: methodologies/ruflo/ruflo-task-routing
Source
git clone https://github.com/a5c-ai/babysitter
Skill definition: plugins/babysitter/skills/babysit/process/methodologies/ruflo/skills/smart-routing/SKILL.md
Overview
Smart Routing uses Q-Learning to select optimal execution paths based on task complexity. Simple tasks go to Agent Booster (WASM, <1ms, $0), medium tasks use efficient models, and complex tasks go to Opus with multi-agent swarms, guided by Mixture-of-Experts model selection.
How This Skill Works
A Q-Learning controller analyzes task features and routing history to decide which tier to route to. The system employs three routing tiers (Agent Booster for simple transforms, Medium for standard coding tasks, Complex for multi-agent coordination) and leverages Agent Booster Transforms and dedicated agents to optimize latency and cost. Invocation is performed via the babysitter process at the path methodologies/ruflo/ruflo-task-routing.
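The Q-Learning controller described above can be sketched as an epsilon-greedy bandit over the three tiers. This is a minimal illustration, not the production controller: the state key, learning rate, and single-step (no discount) formulation are all assumptions.

```typescript
// Minimal Q-learning sketch for tier routing (illustrative assumptions).
type Tier = "agent-booster" | "medium" | "complex";
const TIERS: Tier[] = ["agent-booster", "medium", "complex"];

const q = new Map<string, number>(); // key: `${state}|${tier}`
const alpha = 0.1; // learning rate

function key(state: string, tier: Tier): string {
  return `${state}|${tier}`;
}

// Epsilon-greedy: mostly exploit the best-known tier, sometimes explore.
function chooseTier(state: string, epsilon = 0.1): Tier {
  if (Math.random() < epsilon) {
    return TIERS[Math.floor(Math.random() * TIERS.length)];
  }
  return TIERS.reduce((best, t) =>
    (q.get(key(state, t)) ?? 0) > (q.get(key(state, best)) ?? 0) ? t : best
  );
}

// Single-step update: each routing decision is one episode,
// so the discounted future term is omitted.
function updateQ(state: string, tier: Tier, reward: number): void {
  const k = key(state, tier);
  const old = q.get(k) ?? 0;
  q.set(k, old + alpha * (reward - old));
}
```

After enough feedback, `chooseTier` converges on the tier with the best observed reward for a given task class.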
When to Use It
- Optimizing cost vs. quality tradeoffs for diverse task types
- When tasks range from simple transforms to complex multi-file changes
- Reducing latency for common code transformations
- Learning from routing history to improve future decisions
- Balancing latency and compute resources by tiering to Agent Booster, Medium, or Complex paths
Quick Start
- Step 1: Classify the task complexity (simple, medium, or complex) based on transformation scope and files touched
- Step 2: Invoke routing via the babysitter path: methodologies/ruflo/ruflo-task-routing to select Agent Booster, Medium, or Complex
- Step 3: Collect results and feed performance data back into the routing model to improve future decisions
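Step 3's feedback loop can be sketched as a record plus a reward-shaping function. The field names and weights below are assumptions for illustration; the real system defines its own feedback schema.

```typescript
// Hypothetical feedback record fed back into the routing model.
interface RoutingFeedback {
  taskId: string;
  tier: "agent-booster" | "medium" | "complex";
  latencyMs: number;
  costUsd: number;
  success: boolean;
}

// Illustrative reward shaping: prefer fast, cheap, successful runs.
// Failures are penalized outright; latency and cost each deduct up to 0.5.
function reward(f: RoutingFeedback): number {
  if (!f.success) return -1;
  const latencyPenalty = Math.min(f.latencyMs / 5000, 1) * 0.5;
  const costPenalty = Math.min(f.costUsd, 1) * 0.5;
  return 1 - latencyPenalty - costPenalty;
}
```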
Best Practices
- Map task types to the appropriate routing tier (Agent Booster, Medium, Complex) to minimize latency and cost
- Prioritize simple transforms for the Agent Booster WASM path (<1ms, $0) whenever possible
- Capture routing feedback and latency data to continually train the Q-Learning model
- Monitor tier utilization to avoid bottlenecks in Medium or Complex paths
- Familiarize teams with the supported Agent Booster Transforms to leverage fast-path options
Example Use Cases
- A project automatically routes var-to-const and add-types transformations to Agent Booster for near-zero latency
- Standard coding tasks that fall outside the supported fast-path transforms are handled by the Medium tier using efficient models
- Large-scale refactors involving multiple files are coordinated through the Complex tier (Opus + multi-agent swarms)
- CI pipelines route common code transformations to reduce latency and keep up with rapid iterations
- Routing history is analyzed to refine Q-Learning decisions and reduce average task completion time