ai-product-teardown
AI Product Teardown Skill
Install: npx machina-cli add skill aroyburman-codes/pm-skills/ai-product-teardown --openclaw
Perform a structured, opinionated teardown of any AI product — analyzing the product decisions, UX, technical architecture, business model, and competitive positioning from a PM lens.
When to Use
- User asks "Tear down [AI product]" or "Analyze [AI product]"
- User wants to understand the product thinking behind an AI feature
- User wants to build product intuition about AI products
- User says /ai-product-teardown followed by a product name
- Great for: ChatGPT, Claude, Gemini, Perplexity, Copilot, Midjourney, Cursor, v0, NotebookLM, etc.
Framework: AI Product Teardown (7 Sections)
Section 1: Product Overview
- What it is: One-sentence description
- Company: Who built it, their mission, and strategic context
- Launch date & trajectory: When launched, key milestones, current scale
- Target users: Primary and secondary audiences
- Business model: How it makes money (or plans to)
Section 2: Core Value Proposition
- Job to be Done: What fundamental job does this product do for users?
- 10x moment: What's the moment where users think "this is magic"?
- Switching cost: What would it take to switch away?
- Network effects: Does it get better with more users? How?
Section 3: UX & Product Decisions
Walk through the key product decisions and evaluate each:
- Onboarding flow: How does a new user go from zero to value?
- Core interaction model: Chat? Canvas? Structured output? Multi-modal?
- Information architecture: How is functionality organized?
- Personalization: How does it adapt to different users?
- Error handling: What happens when the AI is wrong?
For each decision, evaluate:
- What they got RIGHT and why
- What they got WRONG or could improve
- What trade-off they're making (and whether you'd make the same one)
Section 4: Technical Architecture (PM Lens)
Analyze the technical choices from a product perspective:
- Model strategy: Which model(s)? Why that capability level?
- Latency vs. quality trade-off: Where do they sit on the spectrum?
- Context & memory: How does it handle conversation history?
- Safety & guardrails: What's their content policy approach?
- Tool use / plugins / integrations: How extensible is it?
- Pricing architecture: How do technical costs map to pricing?
Section 5: Growth & Distribution
- Acquisition channels: How do users find this? (organic, viral, paid, partnerships)
- Activation: What gets users to the "aha moment"?
- Retention loops: What brings users back?
- Monetization: Free → paid conversion strategy
- Viral mechanics: Does usage naturally create awareness?
Section 6: Competitive Positioning
- Direct competitors: Who else does this job?
- Positioning map: Plot on 2x2 (e.g., capability vs. safety, consumer vs. enterprise)
- Sustainable moats: What's defensible? (data, distribution, brand, model quality, ecosystem)
- Vulnerability: Where could a competitor win?
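The positioning map above can be sketched as a simple quadrant assignment. The axis choice (capability vs. safety) comes from the example in the framework; the product names and scores below are invented placeholders, not real assessments:

```python
# Hypothetical 2x2 positioning helper. Scores are illustrative
# placeholders on a 0-1 scale, not real product assessments.
def quadrant(capability, safety):
    cap = "high" if capability >= 0.5 else "low"
    safe = "high" if safety >= 0.5 else "low"
    return f"{cap} capability / {safe} safety"

competitors = {
    "Product A": (0.8, 0.3),  # made-up scores for illustration
    "Product B": (0.4, 0.9),
}
for name, (c, s) in competitors.items():
    print(f"{name}: {quadrant(c, s)}")
```

In practice the interesting work is choosing axes where the competitors actually separate; if everyone lands in the same quadrant, pick a different pair of dimensions.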
Section 7: PM Recommendations
If you were the PM, what would you do next?
- Top 3 features to build (with reasoning and expected impact)
- Top 1 thing to kill or change (what's not working)
- Strategic bet: One big swing that could transform the product
- Metrics to watch: What would you track weekly?
Output Format
Write as an opinionated product review — structured but with a clear point of view. Use screenshots/descriptions of specific UI elements where relevant. Aim for ~2000 words. Be specific and cite real features.
Research-First Workflow
- Research — Search for latest product updates, user reviews, competitor announcements, company blog posts, and usage data. Do 5-10 searches.
- Cite sources — Include [linked source](url) inline for factual claims.
- Display — Present the complete teardown.
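The research step above can be sketched as a small loop. The `search` callable is a hypothetical stand-in for whatever web-search tool the agent exposes, and the query list is illustrative, not prescribed by the skill:

```python
# Sketch of the research-first workflow, assuming a hypothetical
# `search(query)` tool that returns dicts with "title" and "url".
def run_teardown_research(product, search):
    queries = [
        f"{product} latest product updates",
        f"{product} user reviews",
        f"{product} competitor announcements",
        f"{product} company blog",
        f"{product} usage statistics",
    ]
    sources = []
    for q in queries:  # the skill recommends 5-10 searches
        for result in search(q):
            # keep (title, url) pairs so factual claims can be cited inline
            sources.append((result["title"], result["url"]))
    return sources

# Usage with a stub search function:
stub = lambda q: [{"title": q, "url": "https://example.com"}]
sources = run_teardown_research("ChatGPT", stub)
print(len(sources))  # 5
```

The collected (title, url) pairs feed directly into the inline `[linked source](url)` citations in the written teardown.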
What Good Looks Like
- Shows you've done homework on the product landscape
- Demonstrates structured product thinking on real products
- Reveals your product taste and judgment
- Provides concrete examples to reference in product discussions
- Builds intuition about AI product patterns across the industry
Source
https://github.com/aroyburman-codes/pm-skills/blob/main/skills/ai-product-teardown/SKILL.md
Overview
The AI Product Teardown skill performs a structured, opinionated review of AI products, analyzing product decisions, UX patterns, technical architecture, business model, and competitive positioning from a PM lens. It follows a seven-section framework to surface concrete insights and actionable next steps.
How This Skill Works
It applies the seven-section AI Product Teardown framework: Product Overview, Core Value Proposition, UX & Product Decisions, Technical Architecture, Growth & Distribution, Competitive Positioning, and PM Recommendations. Within each section it pulls apart onboarding, interaction models, memory and context handling, safety, monetization, and other signals, then captures what is done well, what could be improved, and the trade-offs involved.
When to Use It
- When a user asks to tear down a specific AI product or analyze its design decisions
- When you want to understand the product thinking behind an AI feature
- When you’re building product intuition for AI products and strategies
- When you run /ai-product-teardown <product> to generate a structured review
- When evaluating a product like ChatGPT, Claude, Gemini, Perplexity, Copilot, or similar
Quick Start
- Step 1: Trigger the skill with a target product name, e.g., /ai-product-teardown ChatGPT
- Step 2: Review the seven sections and extract concrete signals, strengths, and gaps
- Step 3: Translate findings into PM-ready actions, roadmap bets, and metrics
Best Practices
- Base analyses on fresh product data, release notes, and user feedback to reflect current reality
- Cite concrete UI elements, flows, and feature names from the target product
- Evaluate each product decision with what is right, what is wrong, and the trade-offs involved
- Anchor recommendations to user needs, business model, and strategic goals
- Apply the seven-section framework consistently to maintain clarity and comparability
Example Use Cases
- Teardown of ChatGPT focusing on onboarding flow, chat UX, and value hooks
- Claude teardown exploring memory, privacy controls, and feature set trade-offs
- Gemini teardown examining multimodal strategy and pricing dynamics
- Copilot teardown analyzing developer workflow integration and ecosystem fit
- Perplexity teardown evaluating search UX, answer quality, and monetization