
detect-ai

npx machina-cli add skill humanizerai/agent-skills/detect-ai --openclaw

Detect AI Content

Analyze text to determine if it was written by AI using the HumanizerAI API.

How It Works

When the user invokes /detect-ai, you should:

  1. Extract the text from $ARGUMENTS
  2. Call the HumanizerAI API to analyze the text
  3. Present the results in a clear, actionable format

API Call

Make a POST request to https://humanizerai.com/api/v1/detect:

Authorization: Bearer $HUMANIZERAI_API_KEY
Content-Type: application/json

{
  "text": "<user's text>"
}
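A minimal Python sketch of this request, using only the standard library. The endpoint, header names, and the HUMANIZERAI_API_KEY environment variable come from this skill description; the helper name `build_detect_request` is illustrative:

```python
import json
import os
import urllib.request

def build_detect_request(text: str) -> urllib.request.Request:
    """Build the POST request for the HumanizerAI detect endpoint."""
    api_key = os.environ.get("HUMANIZERAI_API_KEY", "")
    payload = json.dumps({"text": text}).encode("utf-8")
    return urllib.request.Request(
        "https://humanizerai.com/api/v1/detect",
        data=payload,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_detect_request("Sample text to analyze.")
# Send with urllib.request.urlopen(req) once the API key is exported.
```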

API Response Format

The API returns JSON like this:

{
  "score": {
    "overall": 82,
    "perplexity": 96,
    "burstiness": 15,
    "readability": 23,
    "satPercent": 3,
    "simplicity": 35,
    "ngramScore": 8,
    "averageSentenceLength": 21
  },
  "wordCount": 82,
  "sentenceCount": 4,
  "verdict": "ai"
}
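The response is plain JSON, so extracting the main score is straightforward; a minimal sketch using the sample response above:

```python
import json

# Sample response body from the skill description above.
raw = '''{
  "score": {"overall": 82, "perplexity": 96, "burstiness": 15,
            "readability": 23, "satPercent": 3, "simplicity": 35,
            "ngramScore": 8, "averageSentenceLength": 21},
  "wordCount": 82, "sentenceCount": 4, "verdict": "ai"
}'''

result = json.loads(raw)
overall = result["score"]["overall"]  # the main AI score is score.overall
print(overall, result["verdict"])     # → 82 ai
```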

IMPORTANT: The main AI score is score.overall (not score directly). This is the score to display to the user.

Present Results Like This

## AI Detection Results

**Score:** [score.overall]/100 ([verdict])
**Words Analyzed:** [wordCount]

### Metrics
- Perplexity: [score.perplexity]
- Burstiness: [score.burstiness]
- Readability: [score.readability]
- N-gram Score: [score.ngramScore]

### Recommendation
[Based on score.overall, suggest whether to humanize]
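The template above can be filled in mechanically; a sketch in Python (the `format_results` helper and the >60 recommendation threshold are illustrative, not part of the API):

```python
def format_results(result: dict) -> str:
    """Render a detection result in the markdown layout shown above."""
    s = result["score"]
    # Illustrative threshold: suggest humanizing when score.overall > 60.
    recommendation = (
        "Consider humanizing this text before publishing."
        if s["overall"] > 60
        else "No humanizing needed."
    )
    return "\n".join([
        "## AI Detection Results",
        "",
        f"**Score:** {s['overall']}/100 ({result['verdict']})",
        f"**Words Analyzed:** {result['wordCount']}",
        "",
        "### Metrics",
        f"- Perplexity: {s['perplexity']}",
        f"- Burstiness: {s['burstiness']}",
        f"- Readability: {s['readability']}",
        f"- N-gram Score: {s['ngramScore']}",
        "",
        "### Recommendation",
        recommendation,
    ])
```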

Score Interpretation (use score.overall)

  • 0-20: Human-written content
  • 21-40: Likely human, minor AI patterns
  • 41-60: Mixed signals, could be either
  • 61-80: Likely AI-generated
  • 81-100: Highly likely AI-generated
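These bands map directly to a lookup; a small sketch (`interpret_score` is a hypothetical helper):

```python
def interpret_score(overall: int) -> str:
    """Map score.overall (0-100) to the interpretation bands above."""
    if overall <= 20:
        return "Human-written content"
    if overall <= 40:
        return "Likely human, minor AI patterns"
    if overall <= 60:
        return "Mixed signals, could be either"
    if overall <= 80:
        return "Likely AI-generated"
    return "Highly likely AI-generated"

print(interpret_score(82))  # → Highly likely AI-generated
```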

Error Handling

If the API call fails:

  1. Check if HUMANIZERAI_API_KEY is set
  2. Suggest the user get an API key at https://humanizerai.com
  3. Provide the error message for debugging
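Step 1 can be checked up front before making the request; a minimal sketch (`check_api_key` is an illustrative helper, not part of the skill):

```python
import os
from typing import Optional

def check_api_key() -> Optional[str]:
    """Return a user-facing error message when the key is missing, else None."""
    if not os.environ.get("HUMANIZERAI_API_KEY"):
        return (
            "HUMANIZERAI_API_KEY is not set. "
            "Get an API key at https://humanizerai.com and export it before retrying."
        )
    return None
```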

Source

git clone https://github.com/humanizerai/agent-skills

The skill file lives at skills/detect-ai/SKILL.md in that repository.

Overview

This skill detects AI-written content by analyzing text and returning a detailed 0-100 score. It uses the HumanizerAI API to assess AI likelihood and reports metrics such as perplexity, burstiness, and readability, helping reviewers decide whether content should be humanized before publishing or submitting.

How This Skill Works

On invocation, the skill extracts the input text and sends it to https://humanizerai.com/api/v1/detect via POST with a Bearer API key and the text payload. The API responds with a structured score (score.overall) plus metrics such as perplexity, burstiness, readability, and n-gram score. The skill then presents the results in a clear, actionable format for quick decision making.

When to Use It

  • Check blog posts, marketing copy, or reports for AI patterns before publishing
  • Review user-submitted content to flag potentially AI-generated material
  • Screen editorial drafts that may have been written with AI assistance
  • Assess the authenticity of submissions for competitions or applications
  • Add quality control to automated content pipelines (newsletters, summaries)

Quick Start

  1. Prepare the text you want to analyze
  2. Call the API endpoint https://humanizerai.com/api/v1/detect with your text and API key
  3. Review the overall score and key metrics to decide on edits

Best Practices

  • Always display score.overall as the primary indicator
  • Verify that HUMANIZERAI_API_KEY is set before each run
  • For long texts, analyze in chunks or run multiple passes
  • Cross-check with human judgment rather than relying solely on the score
  • Use the metrics (perplexity, readability, etc.) to guide edits

Example Use Cases

  • A blog draft flagged as AI-written is revised to improve voice and originality
  • Marketing email copy is adjusted after a high AI-detection score
  • Student assignment authenticity checked before submission
  • Newsletter summaries refined to sound more human
  • Product descriptions audited for natural language after automated generation

