defi-data-fetcher
npx machina-cli add skill auralshin/agent-skills/defi-data-fetcher --openclaw
DeFi Data Fetcher
Purpose
Collect DeFi metrics from prioritized sources, normalize them, reconcile cross-source conflicts, and return a source-attributed dataset with freshness and confidence labels.
Use this skill when
- The user asks for current or historical DeFi metrics (TVL, APY, volume, fees, revenue, token prices).
- The user wants protocol/token comparisons across chains.
- The user needs a clean dataset before risk or strategy analysis.
Do not use this skill when
- The task is transaction signing or broadcasting.
- The task is pure protocol economic risk scoring (use defi-risk-evaluator).
External dependency profile
- Dependency level: High for live/current metrics.
- Primary sources: protocol-native APIs/subgraphs and official analytics.
- Secondary sources: DeFiLlama and market data aggregators.
- Validation/backfill: direct RPC reads.
- Offline fallback: supports normalization/reconciliation/reporting on user-provided snapshots only.
Workflow
- Clarify query scope:
  - Protocols/tokens/chains
  - Time window (latest, 24h, 7d, custom)
  - Required metrics
- Build source plan with references/source-priority.md.
- Fetch using ordered providers and keep retrieval timestamps.
- Normalize fields/units via references/metric-definitions.md.
- Apply freshness policy from references/freshness-sla.md.
- Reconcile conflicts (median + spread analysis) and assign confidence.
- If live fetch is unavailable, switch to references/offline-fallback.md mode and state its limits.
- Return the required schema.
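The median + spread reconciliation step above can be sketched as follows. The spread thresholds used to assign confidence tiers are illustrative assumptions, not values taken from the bundled references:

```python
from statistics import median

def reconcile(values):
    """Reconcile one metric reported by multiple sources.

    Returns (consensus, spread_pct, confidence). The 1% / 5% spread
    thresholds below are illustrative, not from the skill's references.
    """
    consensus = median(values)
    if consensus == 0:
        spread_pct = 0.0
    else:
        # Spread: full range of reported values relative to the median.
        spread_pct = (max(values) - min(values)) / abs(consensus) * 100
    if len(values) >= 3 and spread_pct <= 1:
        confidence = "high"
    elif spread_pct <= 5:
        confidence = "medium"
    else:
        confidence = "low"
    return consensus, spread_pct, confidence
```

Using the median rather than the mean keeps a single outlier source from dragging the consensus value, while the spread still exposes the disagreement.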
Data quality rules
- Always separate apy_base and apy_reward.
- Percentages are decimal internally (0.12 = 12%).
- All timestamps must be UTC ISO-8601.
- Never hide source disagreement; show spread and confidence.
- Explicitly flag stale or partial coverage.
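A minimal sketch of the percentage and timestamp rules, assuming raw rows arrive with whole-number percentages and epoch-second timestamps (the input field conventions are assumptions, not part of the skill):

```python
from datetime import datetime, timezone

def normalize_row(row):
    """Apply the data-quality rules above to one raw metric row."""
    out = dict(row)
    # Percentages are decimal internally (0.12 = 12%);
    # apy_base and apy_reward stay as separate fields.
    for field in ("apy_base", "apy_reward"):
        if field in out:
            out[field] = out[field] / 100.0
    # All timestamps must be UTC ISO-8601.
    out["as_of"] = datetime.fromtimestamp(row["as_of"], tz=timezone.utc).isoformat()
    return out
```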
Required output format
{
"query_scope": {
"protocols": ["string"],
"chains": ["string"],
"time_window": "string",
"requested_metrics": ["string"]
},
"fetch_mode": "live|offline_snapshot",
"source_plan": {
"primary": ["string"],
"secondary": ["string"],
"validation": ["string"]
},
"metrics": [
{
"metric": "tvl_usd|apy_base|apy_reward|volume_24h_usd|fees_24h_usd|revenue_24h_usd|price_usd",
"entity": "protocol_or_token",
"chain": "string",
"value": 0,
"as_of": "ISO-8601",
"freshness_status": "fresh|stale|unknown",
"confidence": "high|medium|low",
"spread_pct": 0,
"sources": ["string"]
}
],
"reconciliation_notes": ["string"],
"quality_flags": ["string"],
"summary": "2-4 sentence summary"
}
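A quick way to sanity-check a response against this schema. The enum values and field names below come directly from the schema; the validator itself is an illustrative helper, not part of the skill:

```python
ALLOWED = {
    "fetch_mode": {"live", "offline_snapshot"},
    "freshness_status": {"fresh", "stale", "unknown"},
    "confidence": {"high", "medium", "low"},
}
METRIC_FIELDS = {"metric", "entity", "chain", "value", "as_of",
                 "freshness_status", "confidence", "spread_pct", "sources"}

def validate_output(doc):
    """Return a list of schema violations (empty list means valid)."""
    errors = []
    if doc.get("fetch_mode") not in ALLOWED["fetch_mode"]:
        errors.append("fetch_mode")
    for i, m in enumerate(doc.get("metrics", [])):
        missing = METRIC_FIELDS - m.keys()
        if missing:
            errors.append(f"metrics[{i}] missing {sorted(missing)}")
        if m.get("freshness_status") not in ALLOWED["freshness_status"]:
            errors.append(f"metrics[{i}].freshness_status")
        if m.get("confidence") not in ALLOWED["confidence"]:
            errors.append(f"metrics[{i}].confidence")
    return errors
```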
Bundled resources
- references/metric-definitions.md: Canonical metric semantics.
- references/source-priority.md: Source ranking and failover policy.
- references/freshness-sla.md: Metric-specific freshness thresholds.
- references/offline-fallback.md: Behavior when live providers are unavailable.
- scripts/normalize_metrics.py: Deterministic normalization + optional reconciliation mode.
Use scripts/normalize_metrics.py --reconcile when you have multiple rows per metric/entity/chain and need consistent confidence/spread outputs.
Source
https://github.com/auralshin/agent-skills/blob/main/skills/defi-data-fetcher/SKILL.md
Overview
DeFi Data Fetcher collects critical DeFi metrics (TVL, APY, volume, fees, prices) from prioritized sources, normalizes them, reconciles cross-source conflicts, and returns a source-attributed dataset with explicit freshness and confidence metadata. It supports protocol/token comparisons across chains and provides a clean dataset for risk or strategy analysis.
How This Skill Works
The skill clarifies the query scope, builds a prioritized source plan, fetches data from live providers, normalizes fields using canonical definitions, applies freshness policies, and reconciles conflicts with median and spread analysis to assign confidence. If live data is unavailable, it switches to an offline-fallback mode and reports limits, returning a structured schema with source attributions.
When to Use It
- You need current or historical DeFi metrics (TVL, APY, volume, fees, revenue, prices) with explicit source attributions.
- You want protocol/token comparisons across multiple chains.
- You require a clean, normalized dataset before risk or strategy analysis.
- Data sources conflict; you need transparent spread and confidence labels.
- Live fetch is unavailable and you need offline fallback on user-provided snapshots, with its limits stated.
Quick Start
- Step 1: Clarify query scope (protocols/tokens, chains, time_window, required metrics).
- Step 2: Build a source plan and fetch data from prioritized providers, recording retrieval timestamps.
- Step 3: Normalize fields, apply freshness SLA, reconcile conflicts, and output the required schema with source attributions.
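Step 2's ordered-provider fetch with failover might look like the sketch below. The tier names mirror the source_plan keys; the provider callables and the return shape are assumptions for illustration:

```python
from datetime import datetime, timezone

def fetch_with_failover(metric, providers):
    """Try providers tier by tier in priority order.

    `providers` maps a tier name to a list of callables taking the
    metric name. Records the winning tier and a UTC retrieval
    timestamp; returns None when every provider fails (which would
    trigger the offline_snapshot fallback).
    """
    for tier in ("primary", "secondary", "validation"):
        for provider in providers.get(tier, []):
            try:
                value = provider(metric)
            except Exception:
                continue  # fail over to the next provider in order
            return {
                "value": value,
                "source_tier": tier,
                "retrieved_at": datetime.now(timezone.utc).isoformat(),
            }
    return None
```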
Best Practices
- Define scope clearly: specify protocols/tokens, chains, time window, and required metrics.
- Use canonical metric definitions and keep apy_base and apy_reward separate.
- Ensure all timestamps are in UTC ISO-8601 and surface freshness/confidence explicitly.
- Don’t hide source disagreements; present spread and confidence for each metric.
- Have a fallback plan (offline mode) and validate results if live data is unavailable.
Example Use Cases
- Compare 7d TVL, apy_base, and 24h volume for top DeFi protocols on Ethereum vs Polygon.
- Aggregate price_usd, volume_24h_usd, and revenue_24h_usd for multiple tokens across chains with source attributions.
- Generate historical snapshots of TVL and revenue for key lending protocols with freshness labels.
- Reconcile data across protocol-native APIs and DeFiLlama to produce a single confidence-scored dataset.
- Use offline fallback to produce a clean dataset from user-provided snapshots when live sources fail.