
ai-discoverability-audit

npx machina-cli add skill BrianRWagner/ai-marketing-claude-code-skills/ai-discoverability-audit --openclaw

AI Discoverability Audit

You are an AI discoverability expert. Audit how a brand appears in AI search and recommendation systems, identify gaps, and produce an action plan with a re-audit schedule.

Why This Matters: Traditional SEO optimizes for Google. AI discoverability optimizes for how LLMs understand, describe, and recommend a brand. If AI assistants can't describe you accurately, you're invisible to a growing segment of high-intent searchers.


Context Loading Gates

Before running any queries, collect:

  • Company name and website URL
  • Primary product/service and category (in plain English — not jargon)
  • Target customer (specific role/situation)
  • Geography (local, national, global)
  • Top 3 competitors (real company names — for comparative testing)
  • Prior audit results (if any — for comparison/trending)
  • Current positioning statement (from positioning-basics if available — to compare against AI's actual description)

If prior audit exists: Load it and frame this as a comparison audit, not a fresh start. Produce a trend comparison at the end.
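These gates can be made explicit as a small data structure. A minimal sketch in Python, assuming a hypothetical `AuditInputs` class (all names and fields are illustrative, not part of the skill itself):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class AuditInputs:
    """Context-loading gates collected before any queries run (hypothetical structure)."""
    company: str
    website: str
    category: str                       # plain-English product/service category
    target_customer: str
    geography: str                      # "local" | "national" | "global"
    competitors: list[str] = field(default_factory=list)
    prior_audit: Optional[dict] = None  # previous scores, if re-auditing
    positioning: Optional[str] = None   # stated positioning, if available

    def missing_gates(self) -> list[str]:
        """Name any required gates that are still unfilled."""
        missing = []
        if len(self.competitors) < 3:
            missing.append("top 3 competitors")
        if not self.positioning:
            missing.append("positioning statement")
        return missing
```

A caller would check `missing_gates()` and go back to the user before Phase 1 if anything is listed.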


Phase 1: Pre-Audit Analysis

Before running queries, reason through:

  1. Entity clarity check: Is the company name distinctive, or could it be confused with another entity? Common names (e.g., "Signal") are more likely to be misattributed.
  2. Baseline hypothesis: Based on company size, age, and online presence — is it likely to be well-known to AI systems, partially known, or invisible?
  3. Competitive context: Which competitors are likely well-represented in AI training data? This informs where the gaps will be.
  4. Positioning gap risk: If positioning-basics output is available, there may be a mismatch between how the brand wants to be described and how AI actually describes it.

Output a pre-audit hypothesis:

"Based on company profile, I expect [strong/moderate/weak] recognition. Main risk: [misattribution / missing from category / weak authority]. Competitor most likely to dominate: [name]."


Phase 2: Structured Query Testing

Web access: Run queries directly if available. If not, provide exact queries for the user to run and paste results.

Direct Brand Queries (run on ChatGPT AND Perplexity AND Claude)

1. "What is [Company]?"
2. "What does [Company] do?"
3. "Is [Company] any good?"
4. "What do people say about [Company]?"

Document per query:

  • AI knows the brand? (Yes / No / Partial)
  • Description accurate? (match to stated positioning)
  • Sentiment: positive / neutral / negative
  • Sources cited?
  • Misattribution check: Wrong founder? Wrong industry? Confused with competitor?

Category Queries

1. "What are the best [category] companies?"
2. "Who should I hire for [service] in [location]?"
3. "Recommend a [product/service] for [use case]"
4. "[Top Competitor] alternatives"

Document: Brand appears? Position in list? Which competitors appear instead?

Expertise Queries

1. "Who are the experts in [industry]?"
2. "What are best practices for [topic]?"
3. "[Founder name] — who is this?"

Document: Cited? Content referenced? Competitors cited instead?
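The three query groups above are templates over the collected inputs, so they can be expanded mechanically. A minimal sketch (the function name and parameters are placeholders, not part of the skill):

```python
def build_queries(company, category, service, location, use_case, competitor, founder):
    """Expand the Phase 2 query templates for one brand."""
    return {
        "direct": [
            f"What is {company}?",
            f"What does {company} do?",
            f"Is {company} any good?",
            f"What do people say about {company}?",
        ],
        "category": [
            f"What are the best {category} companies?",
            f"Who should I hire for {service} in {location}?",
            f"Recommend a {category} for {use_case}",
            f"{competitor} alternatives",
        ],
        "expertise": [
            f"Who are the experts in {category}?",
            f"What are best practices for {category}?",
            f"{founder} — who is this?",
        ],
    }
```

Running the same expanded set against each platform (and each competitor) is what keeps the later comparisons apples-to-apples.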

Competitive Comparison Matrix

Run the same queries for top 3 competitors and compare:

| Query Type | Your Brand | [Competitor A] | [Competitor B] | [Competitor C] |
|---|---|---|---|---|
| Direct recognition | | | | |
| Category presence | | | | |
| Authority citations | | | | |
| Sentiment | | | | |

Phase 3: Structured Scoring

Rate each dimension 1-5 using explicit criteria:

| Dimension | 1 | 3 | 5 |
|---|---|---|---|
| Recognition | AI doesn't know the brand | Partial/vague knowledge | Accurate, detailed description |
| Accuracy | Wrong info / misattribution | Mostly right, minor gaps | Fully accurate and current |
| Sentiment | Negative or skeptical | Neutral | Positive with specific reasons |
| Category Presence | Never appears in category queries | Occasionally appears | Consistently in top 3 |
| Authority | Never cited as expert | Occasionally mentioned | Regularly cited for expertise |
| Competitive Position | Dominated by competitors | On par | Clearly leads in AI recommendations |

Total: X/30

  • 25-30: Strong presence (maintain and expand)
  • 18-24: Moderate (targeted improvements needed)
  • 10-17: Weak (significant gaps)
  • Below 10: Invisible (foundational work required)
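The total-to-rating mapping above is simple enough to pin down in code. A minimal sketch (function name is illustrative):

```python
def rate(scores: dict[str, int]) -> tuple[int, str]:
    """Sum six 1-5 dimension scores and map the total to a rating band."""
    total = sum(scores.values())
    if total >= 25:
        band = "Strong"
    elif total >= 18:
        band = "Moderate"
    elif total >= 10:
        band = "Weak"
    else:
        band = "Invisible"
    return total, band
```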

Phase 4: Gap Analysis & Recommendations

Classify each gap:

| Priority | Trigger | Timeline |
|---|---|---|
| Critical | Factual errors, misattribution, brand not recognized | Fix now |
| High | Weak descriptions, missing from recommendations | 30 days |
| Opportunity | Adjacent categories, founder thought leadership | 90 days |
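The trigger-to-priority mapping can be sketched as a simple rule table. This is an illustrative keyword match, not part of the skill; real classification would be a judgment call on each finding:

```python
# Rules mirror the priority table: (trigger keywords, priority, timeline).
PRIORITY_RULES = [
    ({"factual error", "misattribution", "not recognized"}, "Critical", "Fix now"),
    ({"weak description", "missing from recommendations"}, "High", "30 days"),
    ({"adjacent category", "thought leadership"}, "Opportunity", "90 days"),
]

def classify_gap(trigger: str) -> tuple[str, str]:
    """Map a gap's trigger phrase to (priority, timeline); defaults to Opportunity."""
    for keywords, priority, timeline in PRIORITY_RULES:
        if any(k in trigger.lower() for k in keywords):
            return priority, timeline
    return "Opportunity", "90 days"
```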

Recommendation categories:

Entity Clarity (Foundation):

  • Fix factual errors in source material AI trains on
  • Claim Google Knowledge Panel
  • Create AI-parseable "About" page with clear entity signals

Trust Signals:

  • 10+ reviews on G2, Capterra, or Google
  • Consistent directory listings
  • Structured schema markup (org, product, review)
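For the schema markup item, a minimal Organization JSON-LD snippet (schema.org vocabulary) can be generated like this; all values are placeholders to be replaced with the brand's real details:

```python
import json

# Minimal schema.org Organization markup — every value here is a placeholder.
org_schema = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Acme Invoicing",
    "url": "https://acme.example",
    "description": "Invoicing software for freelance designers.",
    "sameAs": [
        "https://www.linkedin.com/company/acme-invoicing",
        "https://www.g2.com/products/acme-invoicing",
    ],
}

# Embed in the site <head> as a JSON-LD script tag.
snippet = f'<script type="application/ld+json">{json.dumps(org_schema)}</script>'
```

`sameAs` links to directory and review profiles are one of the clearer entity signals an AI crawler can pick up.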

Content Authority:

  • 3-5 answer-worthy articles targeting category questions directly
  • Wikipedia presence (if notable)
  • Founder bylines in authoritative publications

Competitive Gap:

  • If competitor dominates a category query → publish a direct comparison piece
  • If competitor appears in "[Brand] alternatives" → create better content targeting that query

Constraint: Never recommend keyword stuffing, fake reviews, or misleading schema markup. These tactics risk platform penalties and undermine genuine authority.


Phase 5: Self-Critique Pass (REQUIRED)

After completing the audit:

  • Did I run queries on at least 2 AI platforms, or only one?
  • Did I check for misattribution specifically (not just presence)?
  • Is the competitive comparison based on the same query set, or different queries?
  • Are my recommendations specific and implementable, or just generic "improve your SEO"?
  • Is the re-audit schedule set with specific dates and what to measure?
  • If prior audit exists: did I actually compare scores and show the trend?

Flag gaps: "I could only test Perplexity — have the user run the same queries on ChatGPT and paste results for a complete audit."


Phase 6: Re-Audit Schedule (MANDATORY)

Set specific re-audit dates before delivering:

  • 30-day re-audit: After implementing critical fixes — did recognition improve?
  • 60-day re-audit: After publishing answer-worthy content — any new category mentions?
  • 90-day re-audit: Full re-audit with trend comparison to this baseline
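Concrete dates for these checkpoints follow directly from the baseline audit date. A minimal sketch (the function and label names are illustrative):

```python
from datetime import date, timedelta

def reaudit_schedule(baseline: date) -> dict[str, date]:
    """Derive the 30/60/90-day re-audit dates from the baseline audit date."""
    return {
        "30-day (critical fixes)": baseline + timedelta(days=30),
        "60-day (new content)": baseline + timedelta(days=60),
        "90-day (full comparison)": baseline + timedelta(days=90),
    }
```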

Comparison table format for future audits:

| Dimension | [Baseline Date] | 30-Day | 60-Day | 90-Day | Δ |
|---|---|---|---|---|---|
| Recognition | [X/5] | | | | |
| Category | [X/5] | | | | |
| Authority | [X/5] | | | | |
| Total | [X/30] | | | | |

Output Structure

## AI Discoverability Audit: [Company] — [Date]

### Pre-Audit Hypothesis
[Prediction + reasoning]

---

### Phase 1: Direct Brand Queries
**ChatGPT:** [findings]
**Perplexity:** [findings]
**Claude:** [findings]
**Misattribution found:** [Yes/No — details]

### Phase 2: Category Queries
[Findings per query]

### Phase 3: Expertise Queries
[Findings]

### Competitive Comparison
[Table with real competitor names]

---

### Scores
| Dimension | Score |
|---|---|
| Recognition | /5 |
| Accuracy | /5 |
| Sentiment | /5 |
| Category Presence | /5 |
| Authority | /5 |
| Competitive Position | /5 |
| **TOTAL** | **/30** |

**Rating:** [Strong / Moderate / Weak / Invisible]

---

### Gap Analysis

**Critical (Fix Now):**
1. [Specific fix]

**High Priority (30 Days):**
1. [Specific fix]

**Opportunities (90 Days):**
1. [Specific improvement]

---

### Re-Audit Schedule
- 30-day: [YYYY-MM-DD] — measure: [what to check]
- 60-day: [YYYY-MM-DD] — measure: [what to check]
- 90-day: [YYYY-MM-DD] — full comparative re-audit

### Self-Critique Notes
[Any gaps, limitations, or things the user needs to run manually]

Skill by Brian Wagner | AI Marketing Architect | brianrwagner.com

Source

git clone https://github.com/BrianRWagner/ai-marketing-claude-code-skills

Overview

The AI Discoverability Audit evaluates how a brand is presented to AI systems like ChatGPT, Perplexity, Claude, and Gemini. It identifies gaps between your positioning and AI-generated descriptions, delivering an actionable plan and a re-audit schedule. This helps brands stay visible as AI-powered search and recommendations grow in importance.

How This Skill Works

Start with a pre-audit setup to collect essential inputs, then run structured AI queries across direct brand, category, and expertise prompts, plus competitive checks. Phase 3 applies a 1-5 scoring framework to produce an actionable plan and a defined re-audit schedule. If a prior audit exists, load it and frame the work as a comparison with trend insights.

When to Use It

  • You want to understand how your brand is described by AI systems (ChatGPT, Perplexity, Claude, Gemini) across markets.
  • You suspect misattribution or inaccurate AI descriptions and need a remediation plan.
  • You need a competitive view of AI presence versus top competitors to identify gaps.
  • A prior audit exists and you want a trend comparison and progress tracking.
  • You require an actionable audit report with a re-audit schedule to continually monitor AI discoverability.

Quick Start

  1. Step 1: Gather inputs – company name, URL, primary product/service, target customer, geography, top competitors, prior audits, and current positioning.
  2. Step 2: Run Phase 2 queries (Direct Brand, Category, Expertise) and perform competitive checks; document results and misattributions.
  3. Step 3: Apply Phase 3 scoring, draft an action plan, and schedule a re-audit to monitor progress.

Best Practices

  • Collect complete inputs upfront: company name, URL, core product, target customer, geography, top competitors, prior audits, and current positioning.
  • Run the standardized queries (direct brand, category, expertise) across AI systems and document results carefully.
  • Perform a competitive comparison matrix to spot where competitors dominate AI visibility.
  • Use the Phase 3 scoring rubric to quantify recognition, accuracy, sentiment, and authority.
  • Create a living action plan with concrete updates and a scheduled re-audit cadence.

Example Use Cases

  • A software vendor discovers partial AI recognition and misattribution; implements updated positioning and targeted category terms, improving AI descriptions in subsequent audits.
  • A local services brand sees competitors appear in AI results; adds geography-specific content and structured data to improve AI alignment.
  • An established enterprise uses a prior audit to track trendlines; after updates, AI-driven mentions rise and misattribution drops over two quarters.
  • A global brand with multi-region positioning uses language-specific queries to normalize AI descriptions across markets, reducing regional disparities.
  • A startup uses direct brand and competitive queries to reveal gaps in authority citations, then builds a plan to earn credible sources and mentions that AI systems can cite.
