behavioral-pm
Behavioral PM Skill
Install: npx machina-cli add skill aroyburman-codes/pm-skills/behavioral-pm --openclaw
Apply a structured framework to PM behavioral questions targeting AI product roles.
When to Use
- User asks "Tell me about a time when..."
- User asks about conflict, failure, leadership, influence, ambiguity
- User asks "Why this company?" or "Why PM?" or "Why AI?"
- User says /behavioral-pm followed by a question
- Any behavioral, situational, or "tell me about yourself" question
Context
- Tuned for: AI product roles at frontier AI companies
- What matters: Intellectual humility, comfort with ambiguity, collaborative leadership, and genuine passion for AI's impact on the world.
- Key difference from big tech: AI companies care less about "driving results at scale" and more about "navigating uncertainty with good judgment" and "working effectively with researchers."
Values by AI Company Archetype
The Capability-Focused Lab
- Bias toward action and ambition
- Move fast, be bold, push the frontier of what's possible
- Comfort with rapid pivots and high-stakes decisions
- Collaborative with researchers
The Safety-Focused Lab
- Safety-first mindset, intellectual rigor
- Careful, principled, thoughtful approach
- Willingness to slow down when safety demands it
- Strong opinions loosely held
The Research-First Lab
- Scientific rigor, research excellence
- Solve fundamental problems, then apply them broadly
- Bridging research and product
- Long-term thinking over short-term wins
Framework: Enhanced STAR
Structure (Proportions Matter)
- Situation (10%): Set the scene concisely. Company, role, stakes.
- Task (10%): Your specific responsibility. What was YOUR job here?
- Action (60%): The meat. What YOU specifically did. Decisions, trade-offs, influence tactics.
- Result (15%): Quantifiable outcomes. Business impact. What changed.
- + Reflection (5%): What you learned. What you'd do differently. How it shaped your PM philosophy.
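As a quick sanity check on these proportions, here is a small illustrative Python sketch (not part of the original skill — the 450-word target and section names are assumptions for the example) that converts the 10/10/60/15/5 split into per-section word budgets:

```python
# Illustrative word-budget calculator for the Enhanced STAR split.
# The 450-word default mirrors the ~400-500 word target mentioned later
# in this skill; it is an assumption, not a hard rule.
STAR_SPLIT = {
    "Situation": 0.10,
    "Task": 0.10,
    "Action": 0.60,
    "Result": 0.15,
    "Reflection": 0.05,
}

def word_budget(total_words: int = 450) -> dict[str, int]:
    """Allocate a total word count across the Enhanced STAR sections."""
    return {section: round(total_words * share)
            for section, share in STAR_SPLIT.items()}

if __name__ == "__main__":
    for section, words in word_budget(450).items():
        print(f"{section}: ~{words} words")
```

For a 450-word answer this puts roughly 270 words in the Action section, which is the fastest way to notice when a draft is scene-setting too long and acting too little.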
The Reflection Step
After every STAR answer, add one of:
- Growth signal: "If I faced this again, I'd..."
- Pattern recognition: "This taught me a general principle about..."
- Company connection: "This is why I'm drawn to [company] — because..."
Common Behavioral Categories
1. Leadership & Influence (No Authority)
- How you aligned cross-functional teams
- Influencing engineers/researchers who disagreed
- Driving decisions when you weren't the decision-maker
- In AI orgs: Working with PhD researchers who have deep domain expertise
2. Conflict & Difficult Stakeholders
- Navigating disagreements with senior leaders
- Managing competing priorities across teams
- Saying no to important people
- In AI orgs: Balancing safety concerns vs. shipping pressure
3. Failure & Learning
- A time something went wrong and how you recovered
- Making a bad product decision and what you learned
- A project that got killed or pivoted
- In AI orgs: Intellectual humility and learning velocity matter most
4. Ambiguity & Strategy
- Making decisions with incomplete information
- Defining a product direction in a new space
- Navigating rapidly changing technical landscape
- In AI orgs: The field changes weekly — staying calibrated matters
5. Technical Collaboration
- Working closely with ML engineers or researchers
- Translating technical constraints into product decisions
- Building trust with deeply technical teams
- In AI orgs: PMs must earn credibility with researchers
6. Impact & Execution
- Shipping something that moved a key metric significantly
- Scaling a product from 0→1 or 1→100
- Making trade-offs between speed and quality
- In AI orgs: Operating at startup speed with enterprise stakes
Anti-Patterns to Avoid
- Too generic: "I communicated clearly and it worked out" — be SPECIFIC
- Hero narrative: "I single-handedly saved the project" — show collaboration
- No numbers: Always quantify results (users, revenue, latency, accuracy)
- No vulnerability: Especially at safety-focused labs — show intellectual humility
- Recency bias: Have stories from different roles/contexts ready
- No "why AI": Every answer should subtly reinforce why you belong at an AI company
Reusable Story Themes
Strong behavioral answers draw from a bank of 6-8 real experiences that map to multiple categories:
| Story Theme | Maps To |
|---|---|
| Navigating conflict with senior stakeholder | Leadership, Conflict, Influence |
| Shipping under extreme ambiguity | Ambiguity, Execution, Strategy |
| Technical deep-dive that changed direction | Technical Collaboration, Learning |
| Product failure and recovery | Failure, Resilience, Growth |
| Cross-functional alignment on hard trade-off | Leadership, Strategy, Execution |
| Going deep on AI/ML to earn researcher trust | Technical, Why AI, Collaboration |
Output Format
Structure as a polished narrative. The enhanced STAR format should feel natural, not mechanical. Aim for ~400-500 words. Include the reflection/growth signal at the end.
Research-First Workflow
Before generating the answer:
- Research — Search for the specific company's leadership principles, recent blog posts about culture, and interview tips from current/former employees.
- Tailor — Map the story to the specific company's values.
- Display — Present the complete enhanced STAR answer.
What Good Looks Like
- Story is specific with real details (names/roles can be anonymized)
- Action section is 60%+ of the answer
- Results are quantified
- Shows self-awareness and growth
- Connects naturally to why this company/role
- Demonstrates the specific leadership quality being tested
- Shows comfort working with deeply technical people
Source
https://github.com/aroyburman-codes/pm-skills/blob/main/skills/behavioral-pm/SKILL.md
Overview
A structured behavioral PM framework for AI product roles, built around the Enhanced STAR method to showcase leadership, conflict resolution, and stakeholder management. It emphasizes intellectual humility, comfort with ambiguity, and collaboration with researchers, reflecting the distinct needs of frontier AI companies. The framework helps you craft focused, measurable stories that translate to an AI context and demonstrate sound judgment under uncertainty.
How This Skill Works
Answer behavioral prompts using the Enhanced STAR framework with a precise 10/10/60/15/5% breakdown for Situation, Task, Action, Result, and Reflection. After each STAR response, add a Reflection element (Growth signal, Pattern recognition, or Company connection). Tailor examples to AI orgs—highlight collaboration with researchers, handling uncertainty, and balancing safety with shipping pressures.
Quick Start
- Step 1: When faced with a behavioral prompt, identify it as an opportunity to apply Enhanced STAR.
- Step 2: Answer using Situation, Task, Action, Result, then add a concise Reflection (Growth, Pattern, or Company connection).
- Step 3: Adapt stories to AI contexts, emphasizing collaboration with researchers and navigating uncertainty.
Best Practices
- Structure every answer as Situation, Task, Action, Result, Reflection with the 10/10/60/15/5% split.
- Use the Reflection step to include Growth signal, Pattern recognition, or Company connection.
- Anchor stories in AI realities: collaboration with researchers, navigating safety vs. shipping constraints.
- Prepare a balanced set of stories across Leadership, Conflict, Failure, Ambiguity, Technical Collaboration, and Impact & Execution.
- Keep examples concrete with measurable outcomes and avoid generic fluff.
Example Use Cases
- Led a cross-functional AI feature with researchers, aligned stakeholders, and reduced backlog through clearer trade-offs.
- Calibrated product strategy in an ambiguous AI space, resolving competing priorities and delivering an MVP on schedule.
- Resolved a safety-vs-shipping conflict with senior leadership by implementing guardrails, enabling a safer release.
- Handled a model drift incident, updated evaluation processes, and improved reliability for a production deployment.
- Partnered with ML engineers to translate technical constraints into user-focused product decisions, earning credibility with researchers.