specstory-link-trail
npx machina-cli add skill specstoryai/agent-skills/specstory-link-trail --openclaw
Reviews your .specstory/history sessions and creates a summary of all URLs that were fetched via WebFetch tool calls. Useful for auditing external resources accessed during development.
How It Works
- Parses SpecStory history files for WebFetch tool calls
- Extracts URLs, status codes, and context
- Groups by session with timestamps
- Separates successful fetches from failures
- Deduplicates repeated URLs with fetch counts
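As a rough illustration of the first two steps, here is a minimal sketch of WebFetch URL extraction. The line format and regex are assumptions for illustration only; parse_webfetch.py is the authoritative parser.

```python
import re
from collections import Counter

# Assumed marker format: a "WebFetch" token followed by the URL.
# The real SpecStory history layout may differ.
WEBFETCH_RE = re.compile(r"WebFetch\s+(https?://\S+)")

def extract_urls(history_text: str) -> Counter:
    """Return each fetched URL with its fetch count (deduplicated)."""
    return Counter(WEBFETCH_RE.findall(history_text))

sample = (
    "Tool call: WebFetch https://docs.github.com/en/rest/authentication\n"
    "Tool call: WebFetch https://docs.github.com/en/rest/authentication\n"
    "Tool call: WebFetch https://jwt.io/introduction\n"
)
counts = extract_urls(sample)
```

Feeding a real history file through a parser like this yields the URL-and-count pairs that the report later groups by session.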
Why Track Links?
During AI-assisted coding, your assistant fetches documentation, APIs, and resources on your behalf. Link Trail helps you:
- Audit what external resources were accessed
- Find that documentation page you saw earlier
- Review failed fetches that might need retry
- Understand your research patterns
Usage
Slash Command
| User says | Script behavior |
|---|---|
| /specstory-link-trail | All sessions in history |
| /specstory-link-trail today | Today's sessions only |
| /specstory-link-trail last session | Most recent session |
| /specstory-link-trail 2026-01-22 | Sessions from a specific date |
| /specstory-link-trail *.md | Custom glob pattern |
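The argument-to-glob mapping above could be sketched roughly as follows. `argument_to_glob` is a hypothetical helper, not part of the skill; the "last session" case, which requires listing the history directory, is left unimplemented here.

```python
from datetime import date

def argument_to_glob(arg: str, today: date) -> str:
    """Map a /specstory-link-trail argument to a history file glob.

    Hypothetical helper for illustration -- the skill's actual
    dispatch logic may differ.
    """
    if not arg:
        return ".specstory/history/*.md"            # all sessions
    if arg == "today":
        return f".specstory/history/{today.isoformat()}*.md"
    if arg == "last session":
        raise NotImplementedError("needs a directory listing; omitted here")
    if arg.count("-") == 2 and arg.replace("-", "").isdigit():
        return f".specstory/history/{arg}*.md"      # specific date
    return f".specstory/history/{arg}"              # custom glob pattern
```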
Direct Script Usage
# All sessions
python skills/specstory-link-trail/parse_webfetch.py .specstory/history/*.md | \
python skills/specstory-link-trail/generate_report.py -
# Specific session
python skills/specstory-link-trail/parse_webfetch.py .specstory/history/2026-01-22*.md | \
python skills/specstory-link-trail/generate_report.py -
# Sessions from a date range
python skills/specstory-link-trail/parse_webfetch.py .specstory/history/2026-01-2*.md | \
python skills/specstory-link-trail/generate_report.py -
Output
Link Trail Report
=================
Sessions analyzed: 5
Total URLs fetched: 23 (18 successful, 5 failed)
Session: fix-authentication-bug (2026-01-22)
--------------------------------------------
Successful fetches:
- https://docs.github.com/en/rest/authentication (×2)
- https://jwt.io/introduction
- https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/401
Failed fetches:
- https://internal.company.com/api/docs (403 Forbidden)
Session: add-caching-layer (2026-01-21)
---------------------------------------
Successful fetches:
- https://redis.io/docs/latest/commands
- https://docs.python.org/3/library/functools.html#functools.lru_cache
- https://stackoverflow.com/questions/... (×3)
Summary by Domain
-----------------
github.com: 5 fetches
stackoverflow.com: 4 fetches
docs.python.org: 3 fetches
redis.io: 2 fetches
(9 other domains): 9 fetches
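The per-domain tally shown above can be reproduced with a few lines of standard-library Python. This is a sketch of the idea, not the actual code in generate_report.py.

```python
from collections import Counter
from urllib.parse import urlparse

def summarize_by_domain(urls):
    """Tally fetches per hostname, as in the 'Summary by Domain' section."""
    return Counter(urlparse(u).netloc for u in urls)

fetches = [
    "https://redis.io/docs/latest/commands",
    "https://docs.python.org/3/library/functools.html",
    "https://docs.python.org/3/library/itertools.html",
]
by_domain = summarize_by_domain(fetches)
```

`Counter.most_common()` then gives the descending order used in the report.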
Present Results to User
The script output IS the report. Present it directly without additional commentary, but you may:
- Highlight key findings - Most frequently accessed domains, any failed fetches
- Offer follow-ups - "Want me to retry the failed fetches?" or "Need details on any of these?"
Example Response
Here's your link trail from recent sessions:
[script output here]
I noticed 5 failed fetches - mostly internal URLs that require authentication.
The most accessed domain was github.com (5 fetches), mostly for their REST API docs.
Would you like me to:
- Retry any of the failed fetches?
- Open any of these links?
- Filter to a specific session?
Notes
- Uses streaming parsing for large history files
- URLs are extracted from WebFetch tool calls in the history
- Fetch counts show when the same URL was accessed multiple times
- Failed fetches include the HTTP status code when available
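For instance, the dedup-with-counts behavior noted above might look like the sketch below; the exact `(×N)` formatting is an assumption based on the sample report.

```python
from collections import Counter

def format_fetch_lines(urls):
    """Render deduplicated URLs with repeat counts, e.g. 'url (×2)'.

    Mirrors the report's dedup behavior; the formatting itself
    is an assumption.
    """
    counts = Counter(urls)
    return [u if n == 1 else f"{u} (×{n})" for u, n in counts.items()]

lines = format_fetch_lines([
    "https://jwt.io/introduction",
    "https://docs.github.com/en/rest/authentication",
    "https://docs.github.com/en/rest/authentication",
])
```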
Source
https://github.com/specstoryai/agent-skills/blob/main/skills/specstory-link-trail/SKILL.md
Overview
SpecStory Link Trail analyzes your .specstory/history for WebFetch calls, extracting URLs, status codes, and the surrounding context. It groups results by session with timestamps, separates successful fetches from failures, and deduplicates repeated URLs, giving you an audit trail of the external resources accessed during AI-assisted coding.
How This Skill Works
It parses SpecStory history files to locate WebFetch tool calls, extracting each URL, its HTTP status, and context. The data is grouped by session, labeled with timestamps, and reported as separate success and failure lists, with duplicates collapsed into counts. The pipeline is powered by parse_webfetch.py and generate_report.py to produce the Link Trail report.
When to Use It
- Audit all external resources accessed during a full SpecStory session
- View today's link activity after a coding session
- Review the most recent session's fetches
- Inspect fetches from a specific date or date range
- Filter to a subset of history with a glob pattern (e.g., *.md)
Quick Start
- Step 1: Select history to scan (e.g., .specstory/history/*.md)
- Step 2: Run the parse/generate pipeline to produce the Link Trail report
- Step 3: Review the report and decide on retries or follow-up actions
Best Practices
- Run after major coding sessions to capture comprehensive fetch activity
- Include both successful and failed fetches to identify gaps or authentication issues
- Use glob patterns to scope output to relevant history files
- Flag and retry failed fetches to ensure critical resources are accessible
- Review domain distribution to understand research patterns and dependencies
Example Use Cases
- The sample report totals 18 successful and 5 failed fetches across five sessions; the fix-authentication-bug session's 403 on internal docs highlights an authentication issue
- Session add-caching-layer shows fetches across Redis, Python, and Stack Overflow documentation, informing caching decisions
- Audit of internal API docs helps surface restricted resources that may require credentials
- Frequent access to github.com indicates reliance on REST API docs for integration work
- Detection of repeated fetches across sessions guides doc-keeping and reduces redundant lookups