detect
npx machina-cli add skill synaptiai/agent-capability-standard/detect --openclawIntent
Scan data sources to determine whether a specified pattern, entity, or condition is present. Detection is binary (present/absent) with associated signal strength.
Success criteria:
- Clear boolean determination of presence/absence
- At least one evidence anchor for positive detections
- False positive risk assessment provided
- Confidence score justified by evidence quality
Compatible schemas:
schemas/output_schema.yaml
Inputs
| Parameter | Required | Type | Description |
|---|---|---|---|
| target | Yes | string\|object | The data source to scan (file path, URL, or structured data) |
| pattern | Yes | string\|regex | The pattern, entity type, or condition to detect |
| threshold | No | object | Detection sensitivity settings (e.g., min_matches, confidence_floor) |
| scope | No | string | Limit search to specific regions (e.g., "functions", "imports", "comments") |
Procedure
1. Define detection criteria: Clarify exactly what constitutes a positive detection
   - Convert vague patterns to concrete search terms or regex
   - Establish minimum evidence threshold for positive detection
2. Scan target systematically: Search the target data for matching signals
   - Use Grep for text patterns with appropriate flags (-i for case-insensitive, etc.)
   - Use Read for structural inspection when pattern requires context
   - Record location (file:line) for each potential match
3. Evaluate signal strength: For each match, assess how strongly it indicates true presence
   - Strong: exact match with clear context
   - Medium: partial match or ambiguous context
   - Weak: possible match requiring human verification
4. Assess false positive risk: Determine likelihood that detections are spurious
   - High risk: generic patterns, noisy data, few matches
   - Low risk: specific patterns, clean data, multiple corroborating signals
5. Ground claims: Attach evidence anchors to all detection signals
   - Format: file:line for file-based targets
   - Include snippet of matched content for verification
6. Format output: Structure results according to the output contract below
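The procedure above can be sketched in Python under illustrative assumptions: the strength heuristic, the false-positive thresholds, and all names here are examples, not part of the skill specification.

```python
import re
from dataclasses import dataclass

@dataclass
class Signal:
    signal: str    # description of what was found
    strength: str  # "low" | "medium" | "high"
    location: str  # file:line evidence anchor

def detect(path: str, text: str, pattern: str, min_matches: int = 1) -> dict:
    regex = re.compile(pattern, re.IGNORECASE)  # step 1: concrete criteria
    signals = []
    for lineno, line in enumerate(text.splitlines(), start=1):  # step 2: scan
        m = regex.search(line)
        if not m:
            continue
        # step 3: crude heuristic -- an exact match of the pattern is strong
        strength = "high" if m.group(0).lower() == pattern.lower() else "medium"
        signals.append(Signal(f"matched {m.group(0)!r}", strength, f"{path}:{lineno}"))
    detected = len(signals) >= min_matches  # step 1: evidence threshold
    # step 4: more corroborating signals -> lower false-positive risk
    fp_risk = "low" if len(signals) >= 3 else ("medium" if detected else "high")
    return {  # step 6: structure per the output contract
        "detected": detected,
        "signals": [vars(s) for s in signals],
        "false_positive_risk": fp_risk,
        "evidence_anchors": [s.location for s in signals],  # step 5: ground claims
    }
```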
Output Contract
Return a structured object:
detected: boolean                         # True if pattern/entity was found
target_type: string                       # What was being detected (e.g., "security vulnerability", "deprecated API")
signals:
  - signal: string                        # Description of what was found
    strength: low | medium | high         # Signal strength
    location: string                      # file:line or path where found
false_positive_risk: low | medium | high  # Likelihood of spurious detection
confidence: number                        # 0.0-1.0 based on evidence quality
evidence_anchors: array[string]           # file:line references
assumptions: array[string]                # Explicit assumptions made
next_actions: array[string]               # 0-3 suggested follow-ups if uncertain
Field Definitions
| Field | Type | Description |
|---|---|---|
| detected | boolean | True if the target pattern/entity was found |
| target_type | string | Category of what was searched for |
| signals | array | List of individual detection signals with strength |
| false_positive_risk | enum | Assessment of detection reliability |
| confidence | number | 0.0-1.0 based on evidence completeness |
| evidence_anchors | array[string] | File:line references or tool outputs |
| assumptions | array[string] | Explicitly stated assumptions |
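Assuming a Python consumer, the contract could be mirrored as TypedDicts with a small validator; the checks restate the success criteria above, and everything else (names, helper) is an illustrative assumption.

```python
from typing import TypedDict, List

class Signal(TypedDict):
    signal: str
    strength: str  # "low" | "medium" | "high"
    location: str

class DetectResult(TypedDict):
    detected: bool
    target_type: str
    signals: List[Signal]
    false_positive_risk: str
    confidence: float
    evidence_anchors: List[str]
    assumptions: List[str]
    next_actions: List[str]

def validate(result: DetectResult) -> List[str]:
    """Return a list of contract violations (empty if valid)."""
    errors = []
    if not 0.0 <= result["confidence"] <= 1.0:
        errors.append("confidence must be in [0.0, 1.0]")
    if result["detected"] and not result["evidence_anchors"]:
        errors.append("positive detections require at least one evidence anchor")
    if result["false_positive_risk"] not in ("low", "medium", "high"):
        errors.append("false_positive_risk must be low|medium|high")
    if len(result["next_actions"]) > 3:
        errors.append("next_actions is limited to 0-3 items")
    return errors
```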
Examples
Example 1: Detect SQL Injection Vulnerability
Input:
target: src/api/user_controller.py
pattern: "SQL injection pattern"
threshold:
  min_matches: 1
Output:
detected: true
target_type: "SQL injection vulnerability"
signals:
  - signal: "String concatenation in SQL query with user input"
    strength: high
    location: "src/api/user_controller.py:47"
  - signal: "Raw SQL execution without parameterization"
    strength: medium
    location: "src/api/user_controller.py:52"
false_positive_risk: low
confidence: 0.85
evidence_anchors:
- "src/api/user_controller.py:47"
- "src/api/user_controller.py:52"
assumptions:
- "User input flows from request.params to query variable"
- "No input sanitization in calling function"
next_actions:
- "Verify data flow from user input to SQL query"
- "Check for parameterized query alternatives in codebase"
Evidence pattern: Grep for SQL keywords combined with string formatting patterns, then Read to verify context.
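The evidence pattern just described (SQL keywords combined with string formatting) could be approximated with a sketch like this; the regex is a rough illustration, and a real detection would still need the data-flow verification that the example's next_actions call for.

```python
import re

# Illustrative regex: a SQL keyword followed on the same line by string
# concatenation or .format() -- a hint, not proof, of injection risk.
SQL_CONCAT = re.compile(
    r"""(SELECT|INSERT|UPDATE|DELETE)\b [^\n]*   # SQL keyword ...
        (\+\s*\w+ | %\s | \.format\()            # ... near concat/formatting
    """,
    re.IGNORECASE | re.VERBOSE,
)

def scan_sql_injection(path: str, source: str) -> list:
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if SQL_CONCAT.search(line):
            hits.append({"location": f"{path}:{lineno}", "snippet": line.strip()})
    return hits
```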
Example 2: Detect Deprecated API Usage
Input:
target: src/
pattern: "componentWillMount|componentWillReceiveProps"
scope: "*.jsx,*.tsx"
Output:
detected: true
target_type: "deprecated React lifecycle method"
signals:
  - signal: "componentWillMount usage"
    strength: high
    location: "src/components/Dashboard.jsx:23"
false_positive_risk: low
confidence: 0.95
evidence_anchors:
- "src/components/Dashboard.jsx:23"
assumptions:
- "Project uses React 16.3+ where these methods are deprecated"
next_actions:
- "Migrate to componentDidMount or useEffect hook"
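Example 2 could be reproduced with a small tree scanner that honors the scope glob list; the file layout and helper names here are hypothetical sketches, not part of the skill.

```python
import re
from fnmatch import fnmatch
from pathlib import Path

# Pattern and scope values taken from Example 2 above.
PATTERN = re.compile(r"componentWillMount|componentWillReceiveProps")

def scan_tree(root: str, scope: str = "*.jsx,*.tsx") -> dict:
    globs = [g.strip() for g in scope.split(",")]
    signals = []
    for p in Path(root).rglob("*"):
        if not (p.is_file() and any(fnmatch(p.name, g) for g in globs)):
            continue  # outside the requested scope
        for lineno, line in enumerate(p.read_text(errors="ignore").splitlines(), 1):
            m = PATTERN.search(line)
            if m:
                signals.append({"signal": f"{m.group(0)} usage",
                                "strength": "high",
                                "location": f"{p}:{lineno}"})
    return {"detected": bool(signals), "signals": signals}
```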
Verification
- Output contains a detected boolean matching signal presence
- At least one evidence anchor exists for positive detections
- Confidence score correlates with number and strength of signals
- All referenced file:line locations are valid and accessible
- False positive risk assessment is justified
Verification tools: Read (to verify file:line references exist)
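The "all referenced file:line locations are valid" check can itself be automated; this verifier is an illustrative sketch, not part of the skill specification.

```python
from pathlib import Path

def verify_anchors(anchors: list) -> list:
    """Return failure messages for anchors that do not resolve (empty if all valid)."""
    failures = []
    for anchor in anchors:
        path, _, line = anchor.rpartition(":")  # split trailing :line
        p = Path(path)
        if not p.is_file():
            failures.append(f"{anchor}: file not found")
        elif not line.isdigit() or int(line) > len(p.read_text(errors="ignore").splitlines()):
            failures.append(f"{anchor}: line out of range")
    return failures
```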
Safety Constraints
mutation: false
requires_checkpoint: false
requires_approval: false
risk: low
Capability-specific rules:
- Do not access paths outside the specified target scope
- Do not modify any files during detection
- Stop and request clarification if detection criteria are ambiguous
- Report uncertainty rather than guessing when signals are weak
Composition Patterns
Commonly follows:
- inspect: Detection often follows initial observation of a system
- search: Detection may refine broad search results
Commonly precedes:
- identify: Positive detection often leads to classification of what was found
- estimate-risk: Detection of anomalies feeds into risk assessment
- plan: Detected issues may trigger remediation planning
Anti-patterns:
- Never use detect for complex entity classification (use identify instead)
- Avoid detect for quantitative assessment (use estimate instead)
Workflow references:
- See reference/composition_patterns.md#risk-assessment for detection in risk workflows
- See reference/composition_patterns.md#observe-model-act for detection in agentic loops
Source
git clone https://github.com/synaptiai/agent-capability-standard
The skill definition lives at skills/detect/SKILL.md in that repository.
Overview
Detect scans data sources to determine whether a specified pattern, entity, or condition is present. It returns a boolean result with evidence anchors and a confidence score, so that positive detections are justified and false positive risk can be assessed.
How This Skill Works
Define detection criteria (target, pattern, and optional threshold), then scan the target with Grep for text patterns or Read for structural inspection. For each match, evaluate signal strength, attach file:line evidence, and format results according to the output contract with fields like detected, target_type, signals, false_positive_risk, and confidence.
When to Use It
- Confirm a specific error pattern in logs or reports.
- Check for required API usage or config presence in code or data.
- Detect security-related signals or indicators in a data source.
- Find deprecated APIs or patterns in a codebase.
- Validate data contains essential fields before processing.
Quick Start
- Step 1: Define target, pattern, and an optional threshold for detection.
- Step 2: Run detection using Grep for text patterns and Read for structural checks on the target.
- Step 3: Review the output: detected, target_type, signals, evidence_anchors, and false_positive_risk.
Best Practices
- Define explicit detection criteria and a sensible threshold before scanning.
- Craft precise patterns or regex with appropriate flags (e.g., -i for case-insensitive).
- Record evidence anchors (file:line) and include a content snippet for verification.
- Assess false positive risk using signal strength categories (high/medium/low).
- Use Read when pattern requires contextual or structural verification in the target.
Example Use Cases
- Detecting a specific error message in server logs to trigger alerts.
- Verifying presence of a deprecated API call in a codebase for cleanup.
- Identifying insecure configurations or secrets in infrastructure/config files.
- Confirming required tokens or keys exist in configuration data before deployment.
- Locating feature flags or experimental switches enabled in source files.