# SEO Monitor

Install: `npx machina-cli add skill jezweb/claude-skills/seo-monitor --openclaw`
Batch-check SEO health across all active Jezweb client sites. This is the weekly monitoring workflow — it identifies sites with new issues, regressions, or stale crawl data.
## Required MCP Tools

- Whispering Wombat: `whispering_crawls`, `whispering_issues`
- Brain: `brain_sites` (to get the active site list)
- Google Chat (optional): post the summary to the SEO Ideas space
## Workflow

### Step 1: Get Active Sites

Use Brain to find sites that need SEO monitoring:

`brain_sites find { status: "active", limit: 100 }`

If Brain is not available, use `whispering_crawls list` and extract unique domains from recent crawls.
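When the fallback path is needed, the domain extraction can be sketched as follows. This is a sketch only: the list-of-dicts shape with a `url` field is an assumption about the `whispering_crawls list` response, not a documented schema.

```python
from urllib.parse import urlparse

def unique_domains(crawls):
    """Extract unique domains from recent crawl records.

    Assumes each record carries a "url" field; the real
    whispering_crawls list response shape may differ.
    """
    seen = []
    for crawl in crawls:
        domain = urlparse(crawl.get("url", "")).netloc
        if domain and domain not in seen:
            seen.append(domain)  # preserve first-seen order
    return seen
```

First-seen ordering keeps the most recently crawled domains at the top, which is convenient when the monitoring pass is capped.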
### Step 2: Check Each Site

For each site, check crawl freshness and health:

- `whispering_crawls list` — find the most recent crawl for this domain
- Categorise:
  - Fresh (crawled < 7 days ago): check issues only
  - Stale (crawled > 7 days ago): start a new crawl
  - Never crawled: start the first crawl

For fresh sites:

- `whispering_issues summary { crawl_id }` — get issue counts
- If a previous crawl exists: `whispering_crawls compare` — check for regressions

For stale sites:

- `whispering_crawls start { url, max_pages: 200 }` — lighter crawl for monitoring
- Note: don't wait for all crawls to complete — queue them and check later
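The freshness categorisation above is a plain age check. A minimal sketch (function name and parameters are illustrative, not part of any tool API):

```python
from datetime import datetime, timedelta, timezone

STALE_AFTER = timedelta(days=7)  # matches the stale threshold in Configuration

def categorise(last_crawled, now=None):
    """Return "fresh", "stale", or "never" for a site's latest crawl.

    last_crawled is a timezone-aware datetime, or None if the site
    has never been crawled.
    """
    if last_crawled is None:
        return "never"
    now = now or datetime.now(timezone.utc)
    if now - last_crawled < STALE_AFTER:
        return "fresh"
    return "stale"
```

Passing `now` explicitly keeps the check deterministic in tests; in the live workflow it defaults to the current UTC time.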
### Step 3: Identify Concerns
Flag sites that need attention:
Red flags (report immediately):
- New 5xx errors since last crawl
- Significant increase in broken links (>5 new)
- Previously indexed pages now noindexed
- SEO score dropped by >10 points
- Site returning errors/timeouts
Yellow flags (note for review):
- New 4xx errors (>3)
- Missing sitemaps
- New duplicate content
- Score declined by 5-10 points
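The red/yellow thresholds above can be encoded as a small classifier. This is a sketch: every delta field name here (`new_5xx`, `new_broken_links`, and so on) is a hypothetical stand-in, not the actual `whispering_crawls compare` output schema.

```python
def flag_site(delta):
    """Classify a crawl-over-crawl delta as "red", "yellow", or None.

    All field names are illustrative stand-ins for whatever the
    compare step actually returns.
    """
    score_drop = delta.get("score_drop", 0)
    if (delta.get("new_5xx", 0) > 0               # new server errors
            or delta.get("new_broken_links", 0) > 5
            or delta.get("newly_noindexed", 0) > 0
            or score_drop > 10
            or delta.get("site_unreachable", False)):
        return "red"
    if (delta.get("new_4xx", 0) > 3
            or delta.get("sitemap_missing", False)
            or delta.get("new_duplicates", 0) > 0
            or 5 <= score_drop <= 10):
        return "yellow"
    return None
```

Checking the red conditions first means a site with both a large score drop and a missing sitemap is reported once, at the higher severity.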
### Step 4: Generate Summary

Output a monitoring summary:

```markdown
# SEO Monitoring Report — {date}

## Sites Needing Attention

| Site | Issue | Severity | Change |
|------|-------|----------|--------|
| example.com | 5 new broken links | Red | +5 since last week |

## Crawl Status

- Fresh (< 7 days): {N} sites
- Stale (> 7 days): {N} sites — crawls started
- Never crawled: {N} sites — crawls started

## Overall Health

- Sites with critical issues: {N}
- Sites with warnings: {N}
- Healthy sites: {N}

## New Crawls Started

{List of domains where fresh crawls were initiated}
```
### Step 5: Communicate (Optional)

If Google Chat is available, post the summary to the SEO Ideas space.

Format for Google Chat (use its markdown):

```
*SEO Monitor — {date}*

*{N} sites need attention:*
• {domain} — {issue summary}
• {domain} — {issue summary}

_{N} fresh, {N} stale (crawls started), {N} healthy_
```
## Configuration
Default monitoring parameters:
- Max pages per monitoring crawl: 200 (lighter than full audit)
- Stale threshold: 7 days
- Max concurrent crawl starts: 5 (don't overwhelm WW)
- Rate limit per crawl: 2 req/s
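The parameters above, collected into one constants block. A sketch only — the skill has no canonical config file, so the dict name and keys are assumptions:

```python
# Hypothetical constants block mirroring the documented defaults.
MONITOR_CONFIG = {
    "max_pages": 200,      # lighter than a full audit crawl
    "stale_days": 7,       # re-crawl anything older than this
    "max_concurrent": 5,   # cap on simultaneous crawl starts (protect WW)
    "rate_limit_rps": 2,   # requests per second per crawl
}
```

Keeping the thresholds in one place makes it easy to tighten the stale window or raise the page cap without hunting through the workflow.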
## Notes

- Run this weekly (or trigger manually)
- Don't log individual issues to Brain during monitoring — only flag regressions
- If a site has critical regressions, suggest running `/seo-audit {domain}` for a full analysis
- Keep the summary concise — it's a triage tool, not a full report
## Source

[View on GitHub](https://github.com/jezweb/claude-skills/blob/main/plugins/seo/skills/seo-monitor/SKILL.md)

## Overview
SEO Monitor batch-checks the health of all active Jezweb client sites on a weekly cycle. It identifies new issues, regressions, and data staleness by re-crawling stale sites and generating a concise summary for triage.
## How This Skill Works
For each run, the workflow first pulls the active site list with `brain_sites find` (status: active); if Brain is unavailable, it derives domains from recent `whispering_crawls` results. Sites are categorised as Fresh, Stale, or Never crawled; fresh sites get issue summaries and regression checks against the previous crawl, while stale and never-crawled sites start a new crawl (up to the configured limits) and are queued for later processing. The resulting report lists red and yellow flags and can be posted to Google Chat when available.
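The per-run flow just described can be sketched as a single pass, with the MCP calls stubbed out as plain callables. Their names and signatures here are assumptions for illustration, not the actual tool interfaces:

```python
def run_monitor(domains, latest_crawl, issue_summary, start_crawl):
    """One monitoring pass over all active domains.

    Hypothetical stand-ins for the MCP tools:
      latest_crawl(domain)  -> {"id": ..., "age_days": int} or None
      issue_summary(crawl_id) -> dict of issue counts
      start_crawl(url=..., max_pages=...) -> queued-crawl handle
    """
    report = {"fresh": [], "stale": [], "never": []}
    for domain in domains:
        crawl = latest_crawl(domain)
        if crawl is None:
            # Never crawled: queue a first light crawl, don't wait.
            start_crawl(url=f"https://{domain}", max_pages=200)
            report["never"].append(domain)
        elif crawl["age_days"] < 7:
            # Fresh: pull issue counts for the triage summary.
            report["fresh"].append((domain, issue_summary(crawl["id"])))
        else:
            # Stale: queue a refresh crawl and move on.
            start_crawl(url=f"https://{domain}", max_pages=200)
            report["stale"].append(domain)
    return report
```

Because stale and never-crawled sites only get queued, a single pass stays fast regardless of how many crawls it kicks off.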
## When to Use It
- Weekly SEO health check across all active Jezweb client sites
- Refresh data when crawls are stale (>7 days)
- Detect new issues or regressions across the portfolio
- Triage sites before client reviews with a concise report
- Share status updates to Google Chat SEO Ideas space (optional)
## Quick Start

- Step 1: Trigger the workflow with `seo monitor` or `seo batch check`
- Step 2: Brain finds active sites; if Brain is unavailable, fall back to `whispering_crawls` to extract domains
- Step 3: Review the generated SEO Monitoring Report and, if configured, post the summary to Google Chat's SEO Ideas space
## Best Practices

- Run weekly to maintain a current health picture
- Leverage Brain for the active site list; fall back to `whispering_crawls` if Brain is down
- Queue crawls for stale or never-crawled sites without waiting for all to complete
- Annotate red flags clearly and use supplementary audits for deep analysis
- Keep the summary concise and use the Google Chat format when available
## Example Use Cases
- Example: weekly batch of 25 active sites uncovers 2 new 5xx issues
- Example: site A shows a >10 point SEO score drop since last crawl
- Example: several sites reported missing sitemaps; yellow flags trigger review
- Example: a never-crawled site kicks off its first light crawl with `max_pages: 200`
- Example: Google Chat summary posted to SEO Ideas space with top issues