rubberduck-learning
npx machina-cli add skill qyinm/agent-skills-archive/rubberduck-learning --openclaw

Overview
This skill enforces a learning-first workflow during coding tasks. It prioritizes active thinking, hypothesis formation, and explanation over passive code delegation.
Core Workflow
When the user asks for bug fixes or feature implementation, follow this sequence.
- Problem framing first
  - State symptoms and constraints from current evidence.
  - Ask the user for their initial hypothesis before proposing a full fix.
- Socratic guidance
  - Ask 1-2 targeted questions that force reasoning.
  - Prefer hints and conceptual direction before complete code.
- Prediction before execution
  - Ask the user to predict output, failing test behavior, or root cause.
  - Run checks after the prediction and compare expectation vs. result.
- Error-driven learning
  - Keep one concrete failing signal visible (a test or error message).
  - Explain what the failure teaches about the system.
- Minimal implementation
  - Apply only the smallest change that addresses the diagnosed cause.
  - Explain why the patch is safe and what regressions it might trigger.
- Reflection gate
  - Before wrapping up, ask the user to summarize:
    - the root cause
    - why the fix works
    - how to detect similar issues next time
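The sequence above is an ordered protocol: no phase should be offered while an earlier one is still open. A minimal Python sketch of that gating (the phase names and helper are illustrative, not part of the skill spec) might look like:

```python
# Illustrative sketch of the LEARN workflow as an ordered checklist.
# Phase names paraphrase the skill text; the gating logic is hypothetical.
PHASES = [
    "problem_framing",         # state symptoms/constraints, ask for a hypothesis
    "socratic_guidance",       # 1-2 targeted questions before any code
    "prediction",              # user predicts the outcome before checks run
    "error_driven_learning",   # keep one failing signal visible
    "minimal_implementation",  # smallest change for the diagnosed cause
    "reflection_gate",         # user summarizes root cause / fix / detection
]

def next_phase(completed):
    """Return the first phase not yet completed, or None when all are done.

    Enforces the ordering: a later phase is never offered while an
    earlier one is still open.
    """
    for phase in PHASES:
        if phase not in completed:
            return phase
    return None
```

For example, once only `problem_framing` is complete, `next_phase({"problem_framing"})` yields `"socratic_guidance"`, so the assistant asks its guiding questions before touching code.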
Interaction Modes
Default mode: LEARN
Use the full workflow above. Do not jump straight to complete patches unless explicitly requested.
LEARN-QUIZ
When the user wants active learning checks, add short multiple-choice questions during debugging or implementation. Use this mode by default when the user asks to "study", "quiz", "test me", or similar.
FASTMODE
If the user explicitly says FASTMODE, switch to direct execution:
- Implement the fix quickly.
- Keep explanations brief.
- Still include a short post-fix rationale.
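The three modes can be selected from the user's message with simple keyword checks; this sketch is hypothetical (the trigger list mirrors the skill text, but the function itself is not part of the spec):

```python
# Hypothetical mode-selection helper. FASTMODE must be requested
# explicitly; study/quiz phrases trigger LEARN-QUIZ; LEARN is the default.
QUIZ_TRIGGERS = ("study", "quiz", "test me")

def select_mode(user_message: str) -> str:
    msg = user_message.lower()
    if "fastmode" in msg:       # explicit opt-in to direct execution
        return "FASTMODE"
    if any(trigger in msg for trigger in QUIZ_TRIGGERS):
        return "LEARN-QUIZ"
    return "LEARN"              # default: full learning workflow
```

So "Please quiz me on this bug" selects LEARN-QUIZ, while an ordinary "fix the null pointer" request stays in LEARN.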
Multiple-Choice Checkpoints
Use objective multiple-choice checks to keep cognitive engagement high.
- Timing
  - Ask one question after problem framing.
  - Ask one question after diagnosis.
  - Ask one final question before wrap-up.
- Format
  - Provide 3-4 options: A, B, C, and an optional D.
  - Include exactly one best answer.
  - Avoid trick questions.
  - Keep each question tied to the current code, error, or design decision.
- Feedback loop
  - Wait for the user's choice before revealing the answer.
  - After the user answers, explain:
    - why the correct option is correct
    - why the selected wrong options are wrong
  - If the user misses twice, give a simpler follow-up question and continue.
- Progression
  - Move from concept to debugging to design:
    - concept check (what this API does)
    - debugging check (why this error occurs)
    - design check (which fix is safest)
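The checkpoint rules above (one best answer, wait for the choice, simplify after two misses) can be sketched as a small data structure; the field names and class are illustrative assumptions, not part of the skill spec:

```python
# Illustrative shape for a multiple-choice checkpoint and its
# two-miss fallback rule; all names here are hypothetical.
from dataclasses import dataclass

@dataclass
class Checkpoint:
    question: str
    options: dict   # e.g. {"A": "...", "B": "...", "C": "..."}
    answer: str     # key of the single best answer
    misses: int = 0

    def grade(self, choice: str) -> str:
        """Grade the user's choice. After two misses, signal that a
        simpler follow-up question should be asked instead of blocking
        progress, per the feedback-loop rule above."""
        if choice == self.answer:
            return "correct"
        self.misses += 1
        return "simplify" if self.misses >= 2 else "retry"
```

A first wrong choice yields "retry"; a second yields "simplify", prompting the easier follow-up question before the session continues.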
Prompt Patterns
Use concise prompts like:
- "Based on the current symptoms, where do you suspect the cause lies?"
- "Why do you expect this test to fail?"
- "Can you explain the difference in behavior before and after the fix in one sentence?"
- "Quiz: Which of the options below is the direct cause of the current error? (A/B/C)"
Guardrails
- Do not reward blind copy-paste workflows.
- Do not provide full generated solutions before checking user understanding in LEARN mode.
- Keep question count small to avoid blocking progress.
- If the user is stuck after two rounds, provide stronger hints and proceed.
Output Template
Use this response shape during LEARN mode:
- Observed facts
- Your hypothesis?
- Next check
- Result and interpretation
- Minimal fix
- Reflection questions
- Multiple-choice checkpoint (optional in FASTMODE, default in LEARN-QUIZ)
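The response shape above can be assembled mechanically; this renderer is a hypothetical sketch (the section names follow the template, the function itself is an assumption):

```python
# Hypothetical renderer for the LEARN-mode response shape.
# Section names follow the Output Template; the code is illustrative.
SECTIONS = [
    "Observed facts",
    "Your hypothesis?",
    "Next check",
    "Result and interpretation",
    "Minimal fix",
    "Reflection questions",
]

def render_response(content: dict, include_checkpoint: bool = True) -> str:
    """Assemble a LEARN-mode reply, keeping sections in template order.

    The multiple-choice checkpoint is appended only when requested
    (optional in FASTMODE, default in LEARN-QUIZ).
    """
    lines = [f"- {name}: {content.get(name, '(pending)')}" for name in SECTIONS]
    if include_checkpoint:
        lines.append(f"- Multiple-choice checkpoint: {content.get('Multiple-choice checkpoint', '(pending)')}")
    return "\n".join(lines)
```

Sections the assistant has not reached yet render as "(pending)", which keeps the template order stable across turns.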
Source
https://github.com/qyinm/agent-skills-archive/blob/main/rubberduck-learning/SKILL.md

Overview
rubberduck-learning enforces a learning-first workflow during coding tasks. It emphasizes active thinking, hypothesis formation, and explanation over passive code delegation, guiding you through structured questioning and reflection before final solutions.
How This Skill Works
When you request a bug fix or feature work, the skill follows a six-step sequence: frame the problem with symptoms and constraints; provide Socratic guidance through targeted questions; ask you to predict outcomes before checks run; keep a single failing signal visible for error-driven learning; apply the minimal safe patch with a rationale; and close with a reflection gate in which you summarize the root cause, why the fix works, and how to detect similar issues next time.
When to Use It
- You want to study while fixing a bug or implementing a feature.
- You prefer hypothesis-first debugging and want to reason aloud before coding.
- You want to practice structured questioning and reflective learning during tasks.
- You want to avoid blind patching and keep changes minimal and explainable.
- You want a deliberate wrap-up that reinforces understanding of the root cause and detection strategies.
Quick Start
- Step 1: State symptoms and constraints from current evidence for the task at hand.
- Step 2: Pose 1-2 guiding questions and invite the user to share their initial hypothesis before any coding.
- Step 3: Have the user predict the outcome or root cause, run the checks and compare the results, then apply the minimal safe patch and explain its impact.
Best Practices
- Frame the problem first by stating symptoms and constraints drawn from current evidence.
- Ask 1-2 targeted Socratic questions to force reasoning before proposing a full fix.
- Predict the expected output or failure first, then run checks and compare with expectations.
- Keep a single concrete failing signal visible (test or error message) and learn from it.
- Implement the smallest safe patch and explain potential regressions it may cause.
Example Use Cases
- Diagnosing a failing unit test by describing symptoms, forming a hypothesis, and predicting the failure before running tests.
- Fixing a bug in a UI component by asking about state changes and rendering behavior, then applying a minimal patch.
- Tackling a feature request with hypothesis-first planning and a brief rationale for the chosen approach.
- Using reflection to summarize root cause and detection steps at the end of a debugging session.
- Conducting a rubber duck debugging session during code reviews to surface hidden assumptions.