
slop-detector

npx machina-cli add skill athola/claude-night-market/slop-detector --openclaw

AI Slop Detection

AI slop is identified by patterns of usage rather than individual words. While a single "delve" might be acceptable, its proximity to markers like "tapestry" or "embark" signals generated text. We analyze the density of these markers per 100 words, their clustering, and whether the overall tone fits the document type.

Execution Workflow

Start by identifying target files and classifying them as technical docs, narrative prose, or code comments. This allows for context-aware scoring during analysis.

Language Detection

  • Auto-detect language from text content using function word frequency
  • Override with explicit --lang parameter (en, de, fr, es)
  • Load language-specific patterns from data/languages/{lang}.yaml
  • Fall back to English if detection confidence is low
  • See modules/language-support.md for details on cultural calibration
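The language-detection step above can be sketched in Python. This is a minimal illustration, not the skill's actual implementation: the function-word sets below are small stand-ins for the real data/languages/{lang}.yaml patterns, and `detect_language` is a hypothetical helper name.

```python
# Hypothetical sketch of function-word language detection.
# The word sets are illustrative; the real skill loads them
# from data/languages/{lang}.yaml.
FUNCTION_WORDS = {
    "en": {"the", "and", "of", "to", "is"},
    "de": {"der", "die", "und", "ist", "das"},
    "fr": {"le", "la", "et", "est", "les"},
    "es": {"el", "la", "y", "es", "los"},
}

def detect_language(text: str, threshold: float = 0.05) -> str:
    """Pick the language whose function words cover the most tokens."""
    words = text.lower().split()
    if not words:
        return "en"
    scores = {
        lang: sum(w in vocab for w in words) / len(words)
        for lang, vocab in FUNCTION_WORDS.items()
    }
    best = max(scores, key=scores.get)
    # Fall back to English when detection confidence is low
    return best if scores[best] >= threshold else "en"
```

An explicit `--lang` parameter would simply bypass this function.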

Step 2: Vocabulary and Phrase Detection

Load: @modules/vocabulary-patterns.md

We categorize markers into three tiers based on confidence. Tier 1 words appear dramatically more often in AI text and include "delve," "multifaceted," and "leverage." Tier 2 covers context-dependent transitions like "moreover" or "subsequently," while Tier 3 identifies vapid phrases such as "In today's fast-paced world" or "cannot be overstated."

Tier 1: High-Confidence Markers (Score: 3 each)

| Word | Context | Human Alternative |
| --- | --- | --- |
| delve | "delve into" | explore, examine, look at |
| tapestry | "rich tapestry" | mix, combination, variety |
| realm | "in the realm of" | in, within, regarding |
| embark | "embark on a journey" | start, begin |
| beacon | "a beacon of" | example, model |
| spearheaded | formal attribution | led, started |
| multifaceted | describing complexity | complex, varied |
| comprehensive | describing scope | thorough, complete |
| pivotal | importance marker | key, important |
| nuanced | sophistication signal | subtle, detailed |
| meticulous/meticulously | care marker | careful, detailed |
| intricate | complexity marker | detailed, complex |
| showcasing | display verb | showing, displaying |
| leveraging | business jargon | using |
| streamline | optimization verb | simplify, improve |
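Matching these markers is a whole-word search with suggested replacements. The sketch below is illustrative, not the skill's code: `TIER1` holds a small subset of the table above, and `find_tier1` is a hypothetical helper.

```python
import re

# Illustrative subset of the Tier 1 table; the real skill loads the full list.
TIER1 = {"delve": "explore", "tapestry": "variety", "multifaceted": "complex",
         "leveraging": "using", "pivotal": "key"}

def find_tier1(text: str):
    """Return (line_number, marker, suggestion) for each whole-word hit.

    The \\w* suffix also catches inflections like "delves" or "delved".
    """
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for marker, alt in TIER1.items():
            if re.search(rf"\b{marker}\w*\b", line, re.IGNORECASE):
                hits.append((lineno, marker, alt))
    return hits
```

The line numbers feed directly into the report format shown in Step 6.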

Tier 2: Medium-Confidence Markers (Score: 2 each)

Common but context-dependent:

| Category | Words |
| --- | --- |
| Transition overuse | moreover, furthermore, indeed, notably, subsequently |
| Intensity clustering | significantly, substantially, fundamentally, profoundly |
| Hedging stacks | potentially, typically, often, might, perhaps |
| Action inflation | revolutionize, transform, unlock, unleash, elevate |
| Empty emphasis | crucial, vital, essential, paramount |

Tier 3: Phrase Patterns (Score: 2-4 each)

| Phrase | Score | Issue |
| --- | --- | --- |
| "In today's fast-paced world" | 4 | Vapid opener |
| "It's worth noting that" | 3 | Filler |
| "At its core" | 2 | Positional crutch |
| "Cannot be overstated" | 3 | Empty emphasis |
| "A testament to" | 3 | Attribution cliche |
| "Navigate the complexities" | 4 | Business speak |
| "Unlock the potential" | 4 | Marketing speak |
| "Treasure trove of" | 3 | Overused metaphor |
| "Game changer" | 3 | Buzzword |
| "Look no further" | 4 | Sales pitch |
| "Nestled in the heart of" | 4 | Travel writing cliche |
| "Embark on a journey" | 4 | Melodrama |
| "Ever-evolving landscape" | 4 | Tech cliche |
| "Hustle and bustle" | 3 | Filler |

Step 3: Structural Pattern Detection

Load: @modules/structural-patterns.md

Em Dash Overuse

Count em dashes (—) per 1000 words:

  • 0-2: Normal human range
  • 3-5: Elevated, review usage
  • 6+: Strong AI signal
```shell
# Count em dashes in file
grep -o '—' file.md | wc -l
```

Tricolon Detection

AI loves groups of three with alliteration:

  • "fast, efficient, and reliable"
  • "clear, concise, and compelling"
  • "robust, reliable, and resilient"

Pattern: adjective, adjective, and adjective with similar sounds.
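That pattern can be approximated with a regex plus a first-letter check for alliteration. A rough sketch under stated assumptions: `find_tricolons` is a hypothetical helper, the regex only handles the "X, Y(,) and Z" surface form, and it makes no attempt to confirm the words are adjectives.

```python
import re

# "X, Y and Z" or "X, Y, and Z" (optional Oxford comma)
TRICOLON = re.compile(r"\b(\w+), (\w+),? and (\w+)\b")

def find_tricolons(text: str, alliterative_only: bool = False):
    """Find word triads; optionally keep only alliterative ones."""
    triads = TRICOLON.findall(text)
    if alliterative_only:
        # Alliterative: all three words share the same initial letter
        triads = [t for t in triads if len({w[0].lower() for w in t}) == 1]
    return triads
```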

List-to-Prose Ratio

Count bullet points vs paragraph sentences:

  • >60% bullets: AI tendency
  • Emoji-led bullets: Strong AI signal in technical docs
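A simple line-based approximation of the bullet ratio, assuming markdown-style bullets; `bullet_ratio` is a hypothetical helper, and a fuller version would count sentences in prose paragraphs rather than lines.

```python
def bullet_ratio(text: str) -> float:
    """Fraction of non-empty lines that are bullet points (markdown-style)."""
    lines = [l.strip() for l in text.splitlines() if l.strip()]
    if not lines:
        return 0.0
    bullets = sum(l.startswith(("-", "*", "\u2022")) for l in lines)
    return bullets / len(lines)
```

A value above 0.6 would trip the ">60% bullets" flag above.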

Sentence Length Uniformity

Measure standard deviation of sentence lengths:

  • Low variance (SD < 5 words): AI monotony
  • High variance (SD > 10 words): Human variation
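The standard-deviation check can be sketched with the stdlib; `sentence_length_sd` is a hypothetical helper and the sentence splitter is deliberately naive (it splits on terminal punctuation only, so abbreviations will confuse it).

```python
import re
import statistics

def sentence_length_sd(text: str) -> float:
    """Population standard deviation of sentence lengths in words."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    return statistics.pstdev(lengths) if len(lengths) > 1 else 0.0
```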

Paragraph Symmetry

AI produces "blocky" text with uniform paragraph lengths. Check if paragraphs cluster around the same word count.
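One way to quantify that clustering is the coefficient of variation of paragraph word counts: low values mean uniformly sized, "blocky" paragraphs. A sketch with a hypothetical `paragraph_cv` helper, assuming blank-line paragraph breaks:

```python
import statistics

def paragraph_cv(text: str) -> float:
    """Coefficient of variation of paragraph word counts.

    Low values indicate uniform ("blocky") paragraphs; higher values
    indicate human-like variation.
    """
    paras = [p for p in text.split("\n\n") if p.strip()]
    counts = [len(p.split()) for p in paras]
    if len(counts) < 2:
        return 0.0
    mean = statistics.mean(counts)
    return statistics.pstdev(counts) / mean if mean else 0.0
```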

Step 4: Sycophantic Pattern Detection

Especially relevant for conversational or instructional content:

| Phrase | Issue |
| --- | --- |
| "I'd be happy to" | Servile opener |
| "Great question!" | Empty validation |
| "Absolutely!" | Over-agreement |
| "That's a wonderful point" | Flattery |
| "I'm glad you asked" | Filler |
| "You're absolutely right" | Sycophancy |

These phrases add no information and signal generated content.

Step 5: Calculate Slop Density Score

```
slop_score = (tier1_count * 3 + tier2_count * 2 + phrase_count * avg_phrase_score) / word_count * 100
```

| Score | Rating | Action |
| --- | --- | --- |
| 0-1.0 | Clean | No action needed |
| 1.0-2.5 | Light | Spot remediation |
| 2.5-5.0 | Moderate | Section rewrite recommended |
| 5.0+ | Heavy | Full document review |
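The formula and rating bands above translate directly into code. A minimal sketch; `slop_score` and `rating` are hypothetical helper names:

```python
def slop_score(tier1_count, tier2_count, phrase_scores, word_count):
    """Slop density per 100 words, per the formula above.

    tier1_count/tier2_count are marker counts; phrase_scores is the
    list of Tier 3 phrase scores found in the document.
    """
    if word_count == 0:
        return 0.0
    avg_phrase = sum(phrase_scores) / len(phrase_scores) if phrase_scores else 0
    raw = tier1_count * 3 + tier2_count * 2 + len(phrase_scores) * avg_phrase
    return raw / word_count * 100

def rating(score: float) -> str:
    """Map a density score onto the rating bands above."""
    if score < 1.0:
        return "Clean"
    if score < 2.5:
        return "Light"
    if score < 5.0:
        return "Moderate"
    return "Heavy"
```

For example, 2 Tier 1 markers, 1 Tier 2 marker, and two phrases scored 4 and 2 in a 500-word document give (6 + 2 + 6) / 500 * 100 = 2.8, a Moderate rating.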

Step 6: Generate Report

Output format:

```markdown
## Slop Detection Report: [filename]

**Overall Score**: X.X (Rating)
**Word Count**: N words
**Markers Found**: N total

### High-Confidence Markers
- Line 23: "delve into" -> consider: "explore"
- Line 45: "rich tapestry" -> consider: "variety"

### Structural Issues
- Em dash density: 8/1000 words (HIGH)
- Bullet ratio: 72% (ELEVATED)
- Sentence length SD: 3.2 words (LOW VARIANCE)

### Phrase Patterns
- Line 12: "In today's fast-paced world" (vapid opener)
- Line 89: "cannot be overstated" (empty emphasis)

### Recommendations
1. Replace [specific word] with [alternative]
2. Convert bullet list at line 34-56 to prose
3. Vary sentence structure in paragraphs 3-5
```

Module Reference

  • See modules/fiction-patterns.md for narrative-specific slop markers
  • See modules/remediation-strategies.md for fix recommendations

Integration with Remediation

After detection, invoke Skill(scribe:doc-generator) with --remediate flag to apply fixes, or manually edit using the report as a guide.

Exit Criteria

  • All target files scanned
  • Density scores calculated
  • Report generated with actionable recommendations
  • High-severity items flagged for immediate attention

Source

git clone https://github.com/athola/claude-night-market

The skill file lives at plugins/scribe/skills/slop-detector/SKILL.md within the repository.

Overview

Slop detector flags AI-generated markers in documentation and prose by analyzing usage patterns. It measures marker density, clustering, and tone alignment to determine whether a text may be AI-assisted. Use it to review, clean, and audit prose quality; it should not be used to generate new content.

How This Skill Works

The skill first identifies target files and classifies them as technical docs, narrative prose, or code comments for context-aware scoring. It then detects markers from the Tier 1, Tier 2, and Tier 3 lists, measures density per 100 words, and evaluates clustering and tonal fit against the document type. Language-specific patterns are loaded from data/languages/{lang}.yaml, with English as the fallback.

When to Use It

  • Reviewing technical or API documentation for AI-generated markers before publication
  • Cleaning up documentation or prose suspected to be AI-generated content
  • Auditing overall prose quality to ensure tone matches the document type
  • Pre-publishing QA to remove residual AI markers from draft content
  • Multilingual or language-specific doc reviews using language-pattern data

Quick Start

  1. Identify target files and classify them as technical docs, narrative prose, or code comments
  2. Run the detector to score markers and density per 100 words
  3. Review flagged sections and apply remediation strategies using the remediation module

Best Practices

  • Start by identifying target files and classifying them as technical docs, narrative prose, or code comments
  • Use the Tier 1/2/3 marker lists and density per 100 words to score segments
  • Review flagged markers in context, considering clustering and proximity to other markers
  • Pair automated results with human judgment and apply remediation steps
  • Keep language-pattern data updated (data/languages/{lang}.yaml) and refresh periodically

Example Use Cases

  • A technical API guide shows Tier 1 markers like 'delve' near 'tapestry', triggering a flag for proximity concerns
  • A product doc contains phrases such as 'In today's fast-paced world' and is flagged as a vapid marker example
  • A design whitepaper exhibits overuse of transitions like 'moreover' and 'subsequently', prompting review
  • A marketing spec is cleaned of phrases like 'unlock the potential' and 'game changer' to tighten tone
  • Multilingual docs are scanned with language-pattern checks to tailor AI-marker detection to each language
