
reachy-mini-sdk

npx machina-cli add skill jjmartres/reachy-mini-sdk-skill/skill --openclaw

Reachy Mini SDK

Programming guide for Reachy Mini - an open-source desktop humanoid robot with 6-DOF head, expressive antennas, and AI integration.

Hardware

  • Head: 6-DOF Stewart platform (X,Y,Z + roll,pitch,yaw)
  • Antennas: 2 servos
  • Body: 360° yaw rotation
  • Sensors: Camera, microphone, IMU (Wireless only)

Daemon: FastAPI on port 8000 (REST + WebSocket)

Quick Start

Installation

See references/installation.md for complete setup (uv/pip, platform-specific configs, permissions).

Basic Connection

from reachy_mini import ReachyMini

# Local
with ReachyMini() as mini:
    pass

# Remote (Wireless)
with ReachyMini(localhost_only=False) as mini:
    pass

Movement

See references/movement_control.md for complete guide (450+ lines with all patterns).

goto_target (Smooth Interpolation)

from reachy_mini.utils import create_head_pose
import numpy as np

mini.goto_target(
    head=create_head_pose(z=10, roll=15, degrees=True, mm=True),
    antennas=np.deg2rad([45, 45]),
    body_yaw=np.deg2rad(30),
    duration=2.0,
    method="minjerk"  # also: "linear", "ease", "cartoon"
)
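The `"minjerk"` method presumably follows the classic minimum-jerk easing profile (zero velocity and acceleration at both endpoints); a quick sketch in plain Python of that standard polynomial, not taken from the SDK source:

```python
def minjerk(t: float) -> float:
    """Classic minimum-jerk easing: maps t in [0, 1] to [0, 1] smoothly."""
    return 10 * t**3 - 15 * t**4 + 6 * t**5

def interpolate(start: float, end: float, t: float) -> float:
    """Position at normalized time t between start and end."""
    return start + (end - start) * minjerk(t)

# Sample a move from 0 to 30 degrees at five evenly spaced times
samples = [round(interpolate(0.0, 30.0, i / 4), 2) for i in range(5)]
```

The midpoint of the profile is exactly halfway, but the motion starts and ends gently, which is why it looks more natural than linear interpolation.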

set_target (Direct Control)

For high-frequency control (>30Hz):

mini.set_target(
    head=create_head_pose(z=5, mm=True),
    antennas=[0.5, -0.5]
)
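A high-frequency loop typically computes a fresh target every tick. A sketch of a 50 Hz antenna wiggle; the hardware call is commented out so the snippet runs without a robot, and the rate and amplitude are illustrative:

```python
import math

RATE_HZ = 50
DURATION_S = 2.0
AMPLITUDE_RAD = 0.5  # well inside the ±1.5 rad antenna range

targets = []
for i in range(int(RATE_HZ * DURATION_S)):
    t = i / RATE_HZ
    angle = AMPLITUDE_RAD * math.sin(2 * math.pi * 1.0 * t)  # 1 Hz wiggle
    targets.append([angle, -angle])  # antennas move in opposition
    # mini.set_target(antennas=targets[-1])  # on real hardware
    # time.sleep(1 / RATE_HZ)               # crude pacing; a real loop should track drift
```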

Coordinates

  • Head: Position in meters, orientation in radians
  • Antennas: Radians (±1.5)
  • Body: Radians (full 360°)
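Small conversion helpers keep the unit conventions above out of application code; a sketch (the helper names are ours, and the clamp uses the ±1.5 rad antenna range noted above):

```python
import math

ANTENNA_LIMIT_RAD = 1.5  # ±1.5 rad, per the coordinate conventions

def mm_to_m(mm: float) -> float:
    """Head positions are in meters; convert from millimeters."""
    return mm / 1000.0

def clamp_antenna(rad: float) -> float:
    """Keep an antenna target inside the ±1.5 rad range."""
    return max(-ANTENNA_LIMIT_RAD, min(ANTENNA_LIMIT_RAD, rad))

z_m = mm_to_m(10)                        # 10 mm -> 0.01 m
left = clamp_antenna(math.radians(120))  # ~2.09 rad clamps to 1.5
```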

Sensors

See references/sensors.md for camera, audio, IMU details.

# Camera (BGR numpy array)
frame = mini.media.get_frame()

# Audio (16kHz stereo)
samples = mini.media.get_audio_sample()
mini.media.push_audio_sample(samples)  # Non-blocking

# IMU (Wireless only)
if hasattr(mini, 'imu'):
    data = mini.imu.get_data()
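Given the 16 kHz stereo stream, buffer sizes for fixed-length chunks follow directly from the rate; a quick calculation (16-bit samples are an assumption here, check references/sensors.md for the actual dtype):

```python
SAMPLE_RATE_HZ = 16_000
CHANNELS = 2          # stereo
BYTES_PER_SAMPLE = 2  # assuming int16

def chunk_bytes(ms: int) -> int:
    """Byte size of an audio chunk of the given duration in milliseconds."""
    frames = SAMPLE_RATE_HZ * ms // 1000
    return frames * CHANNELS * BYTES_PER_SAMPLE

hundred_ms = chunk_bytes(100)  # 1600 frames * 2 channels * 2 bytes
```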

Motion Recording

mini.start_recording()
# Move robot
motion = mini.stop_recording()
motion.save("demo.pkl")

# Replay
motion.play()
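The .pkl extension suggests the recording is persisted via Python pickling. A stand-in sketch of that save/load round trip; the dict layout is hypothetical, not the SDK's actual motion format:

```python
import os
import pickle
import tempfile

# Hypothetical recording: a sample rate plus timestamped per-joint targets
motion = {
    "rate_hz": 50,
    "frames": [
        {"t": 0.00, "antennas": [0.0, 0.0]},
        {"t": 0.02, "antennas": [0.1, -0.1]},
    ],
}

path = os.path.join(tempfile.mkdtemp(), "demo.pkl")
with open(path, "wb") as f:
    pickle.dump(motion, f)      # what motion.save("demo.pkl") plausibly does
with open(path, "rb") as f:
    replayed = pickle.load(f)   # what loading before motion.play() plausibly does
```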

REST API

See references/daemon_api.md for all 25+ endpoints. See references/openapi_usage.md for client generation.

Direct HTTP Control

import requests

# Move via API
requests.post("http://localhost:8000/api/goto", json={
    "head_pose": {"x": 0, "y": 0, "z": 0.01, "roll": 0, "pitch": 0, "yaw": 0},
    "duration": 2.0,
    "interpolation": "minjerk"
})

# Get state
state = requests.get("http://localhost:8000/api/state/full-state").json()
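The same call works with the standard library alone, which is handy on machines without requests installed. A sketch using urllib; the request itself is commented out so the snippet runs without a daemon, and the payload mirrors the /api/goto body above:

```python
import json
import urllib.request

payload = {
    "head_pose": {"x": 0, "y": 0, "z": 0.01, "roll": 0, "pitch": 0, "yaw": 0},
    "duration": 2.0,
    "interpolation": "minjerk",
}
req = urllib.request.Request(
    "http://localhost:8000/api/goto",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# with urllib.request.urlopen(req) as resp:  # with the daemon running
#     print(resp.status)
```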

Generate Clients

See references/openapi_schema.json for OpenAPI v3.1.0 spec.

# Python
openapi-generator-cli generate -i openapi_schema.json -g python -o client/

# TypeScript
openapi-typescript openapi_schema.json -o types.ts

# Go, Rust, Java, etc. (50+ languages supported)

App Management

# Install app
requests.post("http://localhost:8000/api/apps/install", json={
    "name": "hand_tracker",
    "source": "hf_space",
    "space_id": "pollen-robotics/hand_tracker_v2"
})

# Start app
requests.post("http://localhost:8000/api/apps/start-app/hand_tracker")

Motor Modes

# Compliant (manual movement)
requests.post("http://localhost:8000/api/motors/set-mode", 
              json={"mode": "disabled"})

# Active control
requests.post("http://localhost:8000/api/motors/set-mode", 
              json={"mode": "enabled"})

# Gravity compensation
requests.post("http://localhost:8000/api/motors/set-mode", 
              json={"mode": "gravity_compensation"})
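A thin wrapper that validates the mode string before posting avoids silent typos reaching the daemon; a sketch (the three modes come from the section above, the helper name is ours):

```python
import json

VALID_MODES = {"enabled", "disabled", "gravity_compensation"}

def motor_mode_payload(mode: str) -> bytes:
    """Build the JSON body for /api/motors/set-mode, rejecting unknown modes."""
    if mode not in VALID_MODES:
        raise ValueError(f"unknown motor mode: {mode!r}")
    return json.dumps({"mode": mode}).encode()

body = motor_mode_payload("gravity_compensation")
```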

AI Integration

See references/ai_integration.md for LLM patterns, vision models, multimodal apps, and HuggingFace deployment.

Example: Object Detection

from transformers import pipeline
from PIL import Image

detector = pipeline("object-detection")
frame = mini.media.get_frame()  # BGR numpy array
results = detector(Image.fromarray(frame[:, :, ::-1]))  # convert BGR -> RGB for the model

# React to detections
for obj in results:
    if obj['label'] == 'person':
        mini.goto_target(antennas=np.deg2rad([45, 45]), duration=0.5)

Common Patterns

Greeting Sequence

def greet():
    mini.goto_target(head=create_head_pose(z=5, mm=True), duration=0.5)
    mini.goto_target(antennas=np.deg2rad([45, 45]), duration=0.5)
    for _ in range(2):
        mini.goto_target(head=create_head_pose(pitch=-10, degrees=True), duration=0.3)
        mini.goto_target(head=create_head_pose(pitch=10, degrees=True), duration=0.3)

Scanning Motion

for angle in [-60, -30, 0, 30, 60]:
    mini.goto_target(
        body_yaw=np.deg2rad(angle),
        head=create_head_pose(z=5, mm=True),
        duration=1.0
    )

Reference Files

  • installation.md - Setup for Wireless/Lite/Simulation
  • movement_control.md - Complete movement guide (450+ lines)
  • sensors.md - Camera, microphone, IMU access
  • ai_integration.md - AI models, LLMs, apps, deployment
  • daemon_api.md - REST API reference (500+ lines, 25+ endpoints)
  • openapi_schema.json - OpenAPI v3.1.0 spec for client generation
  • openapi_usage.md - Using OpenAPI for automation
  • api_quick_reference.md - Quick reference card

Platform Notes

  • Wireless: Raspberry Pi, WiFi, includes IMU, use localhost_only=False from PC
  • Lite: USB connection, no IMU, use localhost_only=True
  • Simulation: MuJoCo-based, no hardware needed

Safety

  • SDK enforces limits automatically
  • Test in simulation first
  • Use appropriate durations (0.5-2.0s typically)
  • Always use context managers (with ReachyMini())

Version

SDK v1.2.6, OpenAPI v3.1.0

Source: https://github.com/pollen-robotics/reachy_mini/tree/1.2.6

Source

git clone https://github.com/jjmartres/reachy-mini-sdk-skill

Overview

Reachy Mini SDK provides a Python interface and a REST daemon to control Reachy Mini’s head, antennas, and body, access sensors, and record motions. It supports app management, coordinate systems, interpolation options, and OpenAPI client generation for rapid AI integrations, including Hugging Face deployment.

How This Skill Works

The SDK connects to the local or remote Reachy Mini via Python APIs (e.g., ReachyMini class) and the FastAPI daemon on port 8000. It exposes high-level motion commands such as goto_target (smooth interpolation) and set_target (high-frequency control), plus sensor access and motion recording; clients can be generated from the OpenAPI spec.

When to Use It

  • Prototype AI-powered interactions by scripting head/antenna/body motions while streaming camera or IMU data.
  • Develop and test robot apps locally, then deploy to Hugging Face Spaces or hubs.
  • Record, save, and replay motion sequences for demos or teaching robotics.
  • Control Reachy Mini remotely over the daemon REST API for automation or remote debugging.
  • Install, start, and manage apps (e.g., hand tracking) via the App Management endpoints.

Quick Start

  1. Install dependencies and set up platform configs as documented in references/installation.md.
  2. Connect to Reachy Mini locally or over wireless using ReachyMini().
  3. Try a basic movement, e.g., mini.goto_target(head=create_head_pose(z=0.01), duration=2.0, method='minjerk'), or call the REST endpoint /api/goto with a simple payload.

Best Practices

  • Prefer goto_target for smooth paths using interpolation (minjerk, linear, ease, cartoon) to achieve natural motions.
  • Follow the coordinate conventions: Head position in meters with orientation in radians; Antennas in radians; Body yaw in radians.
  • Leverage the daemon REST API on port 8000 for automation and, when needed, WebSocket streaming for live updates.
  • Generate language-specific API clients from the OpenAPI schema before full integration (Python, TypeScript, etc.).
  • Validate motor states first (enabled/disabled/gravity_compensation) before running complex motions to avoid unexpected behavior.

Example Use Cases

  • Create a coordinated motion by calling mini.goto_target with head, antennas, and body_yaw, using create_head_pose and a 2-second duration.
  • Install a Hugging Face AI app via the App Management endpoints and start it to run in the Reachy Mini environment.
  • Record a motion with mini.start_recording(), stop and save as demo.pkl, then replay with motion.play().
  • Fetch a live camera frame with mini.media.get_frame() and feed it into an AI vision pipeline.
  • Generate a Python or TypeScript client from openapi_schema.json using OpenAPI Generator and integrate into an external app.
