
learning-opportunities

Install: npx machina-cli add skill tech-leads-club/agent-skills/learning-opportunities --openclaw

Files: SKILL.md (4.9 KB)

Learning Opportunities

Facilitate deliberate skill development during AI-assisted coding sessions. Offer short, optional exercises that counteract passive consumption of AI-generated code.

When adapting techniques or making judgment calls about learning approaches, consult references/PRINCIPLES.md for the underlying learning science.

When to offer exercises

Offer an optional 10-15 minute exercise after:

  • Creating new files or modules
  • Database schema changes
  • Architectural decisions or refactors
  • Implementing unfamiliar patterns
  • Any work where the user asked "why" questions during development

Always ask before starting: "Would you like to do a quick learning exercise on [topic]? About 10-15 minutes."

When NOT to offer

  • User declined an exercise this session
  • User already completed 2 exercises this session
  • User signals urgency ("fix this quick", "just ship it", "deploy now")
  • Pure debugging/hotfix context

Keep offers to one short sentence. Do not repeat or insist.
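Taken together, the offer and no-offer rules above amount to a small gating check. A minimal TypeScript sketch, assuming a hypothetical `SessionState` shape and `Trigger` list that are not part of the skill itself:

```typescript
// Hypothetical session state — the fields mirror the rules above,
// but this shape is an illustrative assumption.
interface SessionState {
  declinedThisSession: boolean;   // user declined an exercise this session
  exercisesCompleted: number;     // cap is 2 per session
  userSignaledUrgency: boolean;   // "fix this quick", "just ship it", "deploy now"
  isHotfixContext: boolean;       // pure debugging/hotfix work
}

// Situations that qualify for an offer, per "When to offer exercises".
type Trigger =
  | "new-module"
  | "schema-change"
  | "architecture"
  | "unfamiliar-pattern"
  | "why-question";

function shouldOfferExercise(state: SessionState, trigger: Trigger | null): boolean {
  if (trigger === null) return false;              // nothing exercise-worthy happened
  if (state.declinedThisSession) return false;     // don't repeat or insist
  if (state.exercisesCompleted >= 2) return false; // at most 2 per session
  if (state.userSignaledUrgency || state.isHotfixContext) return false;
  return true; // offer once, in one short sentence, and ask permission
}
```

Even when this returns true, the skill still requires asking permission before starting the exercise.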

Core principle: Pause for input

This is the most important rule. After posing a question:

  1. Stop generating immediately. Do not continue after the question mark.
  2. Do not generate suggested answers, hints, examples, or encouragement like "Think about..." or "Consider..."
  3. Wait for the user's actual response before continuing.

Allowed after the question:

  • "(Take your best guess — wrong predictions are useful data.)"
  • "(Or we can skip this one.)"

After their response:

  • If correct: confirm and deepen ("Exactly. And what would change if [variation]?")
  • If wrong: be direct about what's incorrect, then explore the gap — this is high-value learning

Exercise types

1. Predict then observe

Ask what they predict will happen in a specific scenario before revealing the answer.

Example:
User just created auth middleware.

Agent: "Would you like a quick learning exercise on middleware patterns?"
User: "sure"
Agent: "What do you think happens when a request hits your middleware with an expired token?

(Take your best guess — wrong predictions are useful data.)"

[STOP — wait for response]

After response:
Agent: "Actually [correct behavior]. The key difference is [explanation].
What surprised you about that?"

[STOP — wait for response]

2. Generate then compare

Ask them to sketch an approach before showing the actual implementation.

Example:
Agent: "Before I show you — how would you structure the error handling for this API endpoint?"

[STOP — wait for response]

After response:
Agent: "Here's what we went with: [implementation].
What's similar to your approach? What's different, and why do you think we chose this direction?"

[STOP — wait for response]

3. Teach it back

Ask them to explain a component as if to a new team member.

Example:
Agent: "Explain how the caching layer we just built works, as if I'm a new developer joining the project."

[STOP — wait for response]

After response:
Agent: "You nailed [specific part]. One thing to refine: [specific gap]."

Hands-on code exploration

Prefer directing users to files over showing code snippets. Having learners locate code themselves builds codebase familiarity.

Adjust guidance based on demonstrated familiarity:

  • Early: "Open src/middleware/auth.ts, around line 45. What does validateToken return?"
  • Later: "Find where we handle token refresh."
  • Eventually: "Where would you look to change how session expiry works?"
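The three levels above can be read as a tiered prompt selector. A hedged TypeScript sketch — the level names and example wording come from the bullets, but the function itself is illustrative, not part of the skill:

```typescript
// Guidance levels from the bullets above, in order of increasing familiarity.
type Familiarity = "early" | "later" | "eventually";

// Picks a navigation prompt matching the learner's demonstrated familiarity.
// `topic` is a behavior to locate, e.g. "token refresh" or "session expiry".
function locationPrompt(level: Familiarity, topic: string): string {
  if (level === "early") {
    // Exact file and line, plus a focused question (example from the doc).
    return "Open src/middleware/auth.ts, around line 45. What does validateToken return?";
  }
  if (level === "later") {
    // Name the behavior; let them find it.
    return `Find where we handle ${topic}.`;
  }
  // Eventually: a pure navigation challenge.
  return `Where would you look to change how ${topic} works?`;
}
```

The point of the tiers is to withdraw scaffolding gradually: concrete locations first, then behaviors to find, then open navigation questions.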

After they locate code, prompt self-explanation:

"You found it. Before I say anything — what do you think this line does?"

Techniques to weave in naturally

  • "Why" questions: "Why did we use a Map here instead of an object?"
  • Transfer prompts: "This is the strategy pattern. Where else in this codebase might it apply?"
  • Varied context: "We used this for auth — how would you apply it to API rate limiting?"
  • Error analysis: "Here's a bug someone might introduce — what would go wrong and why?"

Anti-patterns to avoid

  • Dumping multiple questions at once
  • Softening wrong answers into ambiguity ("well, that's partially right...")
  • Offering exercises more than twice per session
  • Making exercises feel like tests rather than exploration
  • Continuing to generate after posing a question

Source

https://github.com/tech-leads-club/agent-skills/blob/main/packages/skills-catalog/skills/(learning)/learning-opportunities/SKILL.md (view on GitHub)

Overview

Learning Opportunities delivers short, optional exercises to promote deliberate skill development during AI-assisted coding. It activates after architectural work, new files or modules, or design-oriented questions, helping users understand code better and practice essential patterns.

How This Skill Works

After completing a relevant task, the agent offers a 10-15 minute learning exercise and asks for permission. It follows the core pause-for-input principle: pose the prompt, stop at the question, and wait for the user's response before continuing.

When to Use It

  • After creating new files or modules
  • After database schema changes
  • After architectural decisions or refactors
  • When implementing unfamiliar patterns
  • When the user asks 'why' questions during development

Quick Start

  1. Identify a recent architectural change, new file/module, or a 'why' question from the user.
  2. Offer a 10-15 minute exercise with the prompt: 'Would you like to do a quick learning exercise on [topic]? About 10-15 minutes.'
  3. Pause for the user's response and continue only after they answer.

Best Practices

  • Ask before starting: 'Would you like to do a quick learning exercise on [topic]? About 10-15 minutes.'
  • Keep exercises concise: 10-15 minutes, optional, and non-disruptive
  • Offer in one short sentence; avoid repetition or pressure
  • Pause for user input and wait for a response before continuing
  • Do not offer during urgent debugging, hotfixes, or 'just ship it' scenarios

Example Use Cases

  • After adding a new module, trigger a brief exercise to explore its design decisions
  • Following a database schema change, propose a quick exercise to review data flow
  • Post-refactor, offer a learning prompt to reinforce understanding
  • When a user asks 'why does this work', propose a short learning exercise
  • During the implementation of an unfamiliar pattern, initiate a 10-15 minute hands-on exploration
