# Vector Robot

Control an Anki Vector robot running wire-pod.

Install the skill with:

```bash
npx machina-cli add skill @bogorman/vector-robot --openclaw
```
## Prerequisites

- Anki Vector robot with escape pod firmware
- wire-pod running (https://github.com/kercre123/wire-pod)
- OpenClaw proxy server for voice input (optional)
## Quick Reference

All API calls require a `serial=SERIAL` parameter (default: `00501a68`).

```bash
SERIAL="00501a68"
WIREPOD="http://127.0.0.1:8080"
```
## Speech Output

```bash
# Make Vector speak (URL-encode the text)
curl -s -X POST "$WIREPOD/api-sdk/assume_behavior_control?priority=high&serial=$SERIAL"
curl -s -X POST "$WIREPOD/api-sdk/say_text?text=Hello%20world&serial=$SERIAL"
curl -s -X POST "$WIREPOD/api-sdk/release_behavior_control?serial=$SERIAL"
```

Or use the helper script: `scripts/vector-say.sh "Hello world"`
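The three calls above can be wrapped in a single helper. This is a sketch of what a script like `scripts/vector-say.sh` might contain, not the shipped script itself; the `urlencode` and `vector_say` function names are illustrative.

```bash
#!/usr/bin/env bash
SERIAL="${SERIAL:-00501a68}"
WIREPOD="${WIREPOD:-http://127.0.0.1:8080}"

# Percent-encode text for the ?text= query parameter.
urlencode() {
  local s="$1" out="" c i
  for ((i = 0; i < ${#s}; i++)); do
    c="${s:i:1}"
    case "$c" in
      [a-zA-Z0-9.~_-]) out+="$c" ;;
      *) out+=$(printf '%%%02X' "'$c") ;;   # char -> %HH
    esac
  done
  printf '%s' "$out"
}

# Assume control, speak, release control.
vector_say() {
  local text; text=$(urlencode "$1")
  curl -s -X POST "$WIREPOD/api-sdk/assume_behavior_control?priority=high&serial=$SERIAL"
  curl -s -X POST "$WIREPOD/api-sdk/say_text?text=$text&serial=$SERIAL"
  curl -s -X POST "$WIREPOD/api-sdk/release_behavior_control?serial=$SERIAL"
}

# vector_say "Hello world"
```

Encoding before interpolation matters: spaces or `&` in the spoken text would otherwise corrupt the query string.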
## Camera

```bash
# Capture a short burst of the MJPEG stream
timeout 2 curl -s "$WIREPOD/cam-stream?serial=$SERIAL" > /tmp/stream.mjpeg

# Extract a JPEG frame with Python (see scripts/vector-see.sh)
```
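One way to pull a still frame out of the captured stream is to cut out the bytes between the first JPEG start-of-image (`FF D8`) and end-of-image (`FF D9`) markers. This is a sketch of the approach, not the actual `scripts/vector-see.sh`; the `extract_first_jpeg` name is illustrative and `python3` is assumed to be available.

```bash
extract_first_jpeg() {  # usage: extract_first_jpeg stream.mjpeg frame.jpg
  python3 - "$1" "$2" <<'PY'
import sys

data = open(sys.argv[1], "rb").read()
start = data.find(b"\xff\xd8")            # JPEG start-of-image marker
end = data.find(b"\xff\xd9", start + 2)   # matching end-of-image marker
if start < 0 or end < 0:
    sys.exit("no complete JPEG frame in stream")
open(sys.argv[2], "wb").write(data[start:end + 2])
PY
}

# extract_first_jpeg /tmp/stream.mjpeg /tmp/frame.jpg
```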
## Movement

⚠️ SAFETY: Cliff sensors are DISABLED during behavior control. Be careful with wheel movements!

```bash
# Head: speed -2 to 2
curl -s -X POST "$WIREPOD/api-sdk/move_head?speed=2&serial=$SERIAL"    # up
curl -s -X POST "$WIREPOD/api-sdk/move_head?speed=-2&serial=$SERIAL"   # down
curl -s -X POST "$WIREPOD/api-sdk/move_head?speed=0&serial=$SERIAL"    # stop

# Lift: speed -2 to 2
curl -s -X POST "$WIREPOD/api-sdk/move_lift?speed=2&serial=$SERIAL"    # up
curl -s -X POST "$WIREPOD/api-sdk/move_lift?speed=-2&serial=$SERIAL"   # down

# Wheels: lw/rw -200 to 200 (USE WITH CAUTION)
curl -s -X POST "$WIREPOD/api-sdk/move_wheels?lw=100&rw=100&serial=$SERIAL"  # forward
curl -s -X POST "$WIREPOD/api-sdk/move_wheels?lw=-50&rw=50&serial=$SERIAL"   # turn left
curl -s -X POST "$WIREPOD/api-sdk/move_wheels?lw=0&rw=0&serial=$SERIAL"      # stop
```
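Because cliff sensors are off under behavior control, it is worth guaranteeing that every drive command is followed by a stop, even if the script is interrupted. A minimal sketch (the `move_for`/`stop_wheels` names and timing values are illustrative):

```bash
SERIAL="${SERIAL:-00501a68}"
WIREPOD="${WIREPOD:-http://127.0.0.1:8080}"

stop_wheels() {
  curl -s -X POST "$WIREPOD/api-sdk/move_wheels?lw=0&rw=0&serial=$SERIAL"
}

move_for() {  # usage: move_for LEFT_SPEED RIGHT_SPEED SECONDS
  trap stop_wheels EXIT INT TERM   # always send a stop, even on Ctrl-C
  curl -s -X POST "$WIREPOD/api-sdk/move_wheels?lw=$1&rw=$2&serial=$SERIAL"
  sleep "$3"
  stop_wheels
  trap - EXIT INT TERM
}

# move_for 100 100 1   # drive forward for ~1 second, then stop
```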
## Settings

```bash
# Volume: 0-5
curl -s -X POST "$WIREPOD/api-sdk/volume?volume=5&serial=$SERIAL"

# Eye color: 0-6
curl -s -X POST "$WIREPOD/api-sdk/eye_color?color=4&serial=$SERIAL"

# Battery status
curl -s "$WIREPOD/api-sdk/get_battery?serial=$SERIAL"
```
## Actions/Intents

```bash
curl -s -X POST "$WIREPOD/api-sdk/cloud_intent?intent=intent_imperative_dance&serial=$SERIAL"
```

Available intents: `intent_imperative_dance`, `intent_system_sleep`, `intent_system_charger`, `intent_imperative_fetchcube`, `explore_start`
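Since a mistyped intent name typically does nothing rather than erroring, a wrapper can validate against the documented set before sending. A sketch (the `vector_intent` name is illustrative):

```bash
SERIAL="${SERIAL:-00501a68}"
WIREPOD="${WIREPOD:-http://127.0.0.1:8080}"

vector_intent() {  # usage: vector_intent INTENT_NAME
  case "$1" in
    intent_imperative_dance|intent_system_sleep|intent_system_charger|intent_imperative_fetchcube|explore_start)
      curl -s -X POST "$WIREPOD/api-sdk/cloud_intent?intent=$1&serial=$SERIAL"
      ;;
    *)
      echo "unknown intent: $1" >&2
      return 1
      ;;
  esac
}

# vector_intent intent_imperative_dance
```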
## Voice Input (OpenClaw Integration)

To receive voice commands from Vector, run the proxy server:

```bash
node scripts/proxy-server.js
```

Configure the wire-pod Knowledge Graph (http://127.0.0.1:8080 → Server Settings):

- Provider: Custom
- API Key: openclaw
- Endpoint: http://localhost:11435/v1
- Model: openclaw

The proxy writes incoming questions to request.json. Respond by writing to response.json:

```json
{"timestamp": 1234567890000, "answer": "Your response here"}
```
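A minimal responder can be sketched as a shell function that writes a reply in the format above. The `respond` name and the millisecond-timestamp convention are assumptions based on the example payload; the sketch does not escape quotes inside the answer, so keep answers simple.

```bash
respond() {  # usage: respond "answer text" [directory]
  local dir="${2:-.}" ts
  ts=$(($(date +%s) * 1000))   # whole-second precision, expressed in ms
  printf '{"timestamp": %s, "answer": "%s"}\n' "$ts" "$1" > "$dir/response.json"
}

# respond "Hello from Vector"
```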
## LaunchAgent (Auto-start on macOS)

Install to `~/Library/LaunchAgents/com.openclaw.vector-proxy.plist` for auto-start. See scripts/install-launchagent.sh.
## API Reference

See references/api.md for complete endpoint documentation.
## Overview

Vector Robot Control lets you operate an Anki Vector through wire-pod. You can speak through Vector; view its camera; move its head, lift, and wheels; change eye colors; and trigger built-in animations. This enables hands-free demos, interactive tasks, and controlled experiments with a physical Vector.
## How This Skill Works

The skill communicates with a local wire-pod server over HTTP, using a `serial` parameter (default `00501a68`) to identify the Vector. Commands such as `say_text`, `move_head`, `move_lift`, `move_wheels`, and `eye_color` are POSTed to the wire-pod API. Optional OpenClaw integration provides voice input via a proxy server.
## When to Use It
- You want Vector to speak a response or narration for a demo or chatbot.
- You need to move Vector's head, lift, or wheels to interact with objects or people.
- You want to view Vector's camera feed or capture frames for analysis.
- You want to change Vector's eye color or trigger an animation to convey status or mood.
- You are integrating voice input with OpenClaw proxy for hands-free control.
## Quick Start

- Step 1: Set the wire-pod server variables: `SERIAL=00501a68` and `WIREPOD=http://127.0.0.1:8080`
- Step 2: Test basic actions (quote the URLs, since an unquoted `&` backgrounds the command): `curl -s -X POST "$WIREPOD/api-sdk/say_text?text=Hello%20world&serial=$SERIAL"` and `curl -s -X POST "$WIREPOD/api-sdk/move_head?speed=2&serial=$SERIAL"`
- Step 3: (Optional) Start the OpenClaw proxy server and configure the wire-pod endpoint for voice input
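Before a demo, the steps above can be folded into one connectivity check: confirm wire-pod answers on the battery endpoint, then speak a test phrase. A sketch (the `smoke_test` name is illustrative):

```bash
SERIAL="${SERIAL:-00501a68}"
WIREPOD="${WIREPOD:-http://127.0.0.1:8080}"

smoke_test() {
  # -f makes curl fail on HTTP errors, so an unreachable or
  # misconfigured wire-pod is caught before the demo starts.
  if curl -sf "$WIREPOD/api-sdk/get_battery?serial=$SERIAL" > /dev/null; then
    curl -s -X POST "$WIREPOD/api-sdk/say_text?text=Hello%20world&serial=$SERIAL"
  else
    echo "wire-pod not reachable at $WIREPOD" >&2
    return 1
  fi
}

# smoke_test && echo "ready for demo"
```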
## Best Practices
- Always include the serial parameter in every API call.
- Be aware cliff sensors are disabled during behavior control; test in a safe space.
- Stay within the documented speed ranges for the head, lift, and wheels, and avoid sudden large moves.
- Leverage helper scripts (e.g., vector-say.sh, vector-see.sh) for repeatable tasks.
- Check eye_color (0-6) and volume (0-5) before demos; monitor battery status with get_battery.
## Example Use Cases

- Vector says: `curl -s -X POST "$WIREPOD/api-sdk/assume_behavior_control?serial=$SERIAL"; curl -s -X POST "$WIREPOD/api-sdk/say_text?text=Hello%20world&serial=$SERIAL"`
- Move Vector: `curl -s -X POST "$WIREPOD/api-sdk/move_head?speed=2&serial=$SERIAL"; curl -s -X POST "$WIREPOD/api-sdk/move_wheels?lw=100&rw=100&serial=$SERIAL"`
- Camera frame: `curl -s "$WIREPOD/cam-stream?serial=$SERIAL" > /tmp/stream.mjpeg`
- Eye color: `curl -s -X POST "$WIREPOD/api-sdk/eye_color?color=4&serial=$SERIAL"`
- Dance: `curl -s -X POST "$WIREPOD/api-sdk/cloud_intent?intent=intent_imperative_dance&serial=$SERIAL"`