
Speak

@ivangdavila

npx machina-cli add skill @ivangdavila/speak --openclaw
Files (1): SKILL.md (909 B)

Voice Output Adaptation

This skill auto-evolves. Learn how the user wants to be spoken to and configure TTS accordingly.

Rules:

  • Detect patterns from user feedback on voice output
  • Mirror user's communication style when generating spoken text
  • Confirm preferences after 2+ consistent signals
  • Keep entries ultra-compact
  • Check config.md for OpenClaw TTS setup, criteria.md for format

Voice

<!-- Preferred voice/provider. Format: "provider: voice" -->

Style

<!-- How they want to be spoken to. Format: "trait" -->

Spoken Text

<!-- Formatting for TTS output. Format: "rule" -->

Avoid

<!-- What doesn't work for them spoken -->

Empty sections = no preference yet. Observe and fill.
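For illustration only, a partially filled file might look like this (the provider, voice, and traits below are invented examples, not defaults shipped with the skill):

```markdown
Voice

elevenlabs: Rachel

Style

concise
no filler phrases

Spoken Text

expand abbreviations before synthesis

Avoid

reading URLs aloud
```

Entries stay ultra-compact, per the rules above: one trait or rule per line, no commentary.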

Source

git clone https://clawhub.ai/ivangdavila/speak

Overview

Speak configures TTS in OpenClaw and adapts output to user preferences. It learns how the user wants to be spoken to by analyzing feedback, mirrors their communication style, and confirms preferences after 2+ consistent signals. It relies on config.md for setup and criteria.md for formatting.

How This Skill Works

The skill monitors user feedback on voice output, updates Voice/Style/Spoken Text rules, and stores preferences to tailor future TTS. It uses OpenClaw's TTS config from config.md and adheres to the preferred format in criteria.md.
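The "confirm after 2+ consistent signals" loop can be sketched in Python; the class and method names here are illustrative, not part of the skill or OpenClaw's API:

```python
from collections import Counter

class PreferenceTracker:
    """Counts consistent feedback signals per trait and flags a trait
    for user confirmation once it has been seen 2+ times."""

    CONFIRM_THRESHOLD = 2  # "2+ consistent signals" from the skill rules

    def __init__(self):
        self.signals = Counter()
        self.confirmed = set()

    def record(self, trait: str) -> bool:
        """Record one feedback signal; return True the first time the
        trait crosses the confirmation threshold."""
        self.signals[trait] += 1
        if trait not in self.confirmed and self.signals[trait] >= self.CONFIRM_THRESHOLD:
            self.confirmed.add(trait)
            return True  # time to ask the user to confirm
        return False

tracker = PreferenceTracker()
tracker.record("formal tone")             # first signal: keep observing
should_ask = tracker.record("formal tone")  # second signal: confirm with user
```

Until a trait is confirmed, the corresponding section in the skill file stays empty, matching "Empty sections = no preference yet."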

When to Use It

  • Onboard a new user to learn speaking preferences
  • User provides initial feedback on voice output quality
  • There are 2+ consistent signals to confirm preferences
  • You need to mirror user communication style for clearer TTS
  • Review or update OpenClaw TTS setup against config.md/criteria.md

Quick Start

  1. Review the OpenClaw TTS setup in config.md
  2. Observe initial user feedback to identify Voice and Style
  3. Apply preferences and verify with the user after 2+ consistent signals
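Applying preferences in step 3 can include normalizing text before synthesis. As a minimal sketch of one hypothetical "Spoken Text" rule (not a rule the skill ships with), stripping markdown markup so the TTS engine reads words rather than symbols:

```python
import re

def prepare_for_tts(text: str) -> str:
    """Hypothetical 'Spoken Text' rule: strip markdown markup so the
    TTS engine speaks plain words, not formatting symbols."""
    text = re.sub(r"[*_`#]+", "", text)                    # drop emphasis/heading marks
    text = re.sub(r"\[([^\]]+)\]\([^)]*\)", r"\1", text)   # keep link text, drop URL
    return re.sub(r"\s+", " ", text).strip()               # collapse whitespace

print(prepare_for_tts("**Done.** See the [docs](https://example.com)."))
# -> Done. See the docs.
```

Rules like this would live under the Spoken Text section and be applied to every response before it is handed to the TTS engine.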

Best Practices

  • Log and respond to every feedback signal to refine Voice/Style/Spoken Text
  • Mirror user tone, pace, and level of formality
  • Keep TTS output ultra-compact as per rules
  • Ask for confirmation after 2+ consistent signals
  • Regularly verify config.md and criteria.md alignment

Example Use Cases

  • A user prefers a formal, concise voice; adjust accordingly and confirm
  • User wants lower pitch and crisp enunciation; apply and verify
  • User provides corrections twice; lock in preferences
  • You adapt across sessions to maintain consistency
  • An accessibility need triggers a slower, clearer TTS mode
