
cache-docs

npx machina-cli add skill eidetics/claude-eidetic/cache-docs --openclaw
Files (1)
SKILL.md
1.3 KB

/cache-docs

Cache external docs so future queries use search_documents (~20 tokens/result) instead of re-fetching (~5K+ tokens).

Usage:

  • /eidetic:cache-docs <library>
  • /eidetic:cache-docs <library> <topic>

Step 1: Parse Arguments

Extract library (required) and topic (optional). If no argument, ask: "Which library's docs would you like to cache?"

Step 2: Check Existing Cache

search_documents(query="overview", library="<LIBRARY>")
  • Results found and fresh: inform user, offer to refresh.
  • Results found but stale: proceed to refresh.
  • No results: proceed to Step 3.
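The branching above can be sketched as a small decision function. This is a minimal Python sketch, assuming each cached entry records when it was indexed; the `indexed_at` field and the function itself are illustrative, not part of the skill's API:

```python
from datetime import datetime, timedelta, timezone

def cache_action(indexed_at: datetime, ttl_days: int, has_results: bool) -> str:
    """Decide the Step 2 branch: 'offer-refresh', 'refresh', or 'fetch'."""
    if not has_results:
        return "fetch"            # no cache yet -> proceed to Step 3
    age = datetime.now(timezone.utc) - indexed_at
    if age < timedelta(days=ttl_days):
        return "offer-refresh"    # fresh: inform the user, offer to refresh
    return "refresh"              # stale: refresh the cache

# Example: a chunk indexed 10 days ago with a 7-day TTL is stale.
indexed = datetime.now(timezone.utc) - timedelta(days=10)
print(cache_action(indexed, 7, has_results=True))  # refresh
```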

Step 3: Resolve Library

resolve-library-id(libraryName="<LIBRARY>")

Pick the best matching ID.

Step 4: Fetch Docs

query-docs(libraryId="<LIBRARY_ID>", topic="<TOPIC or 'getting started'>")

Step 5: Cache

index_document(
  content="<FETCHED_CONTENT>",
  source="context7:<LIBRARY_ID>/<TOPIC>",
  library="<LIBRARY>",
  topic="<TOPIC>",
  ttlDays=7
)

Step 6: Verify

search_documents(query="<TOPIC or library>", library="<LIBRARY>", limit=3)

Report: chunks cached, TTL, search command.

Note: repeat Steps 4-5 with different topics to cache multiple topics for the same library.
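The note above can be sketched as a loop over topics. In this sketch, `fetch` and `index` are hypothetical stand-ins for the query-docs and index_document tool calls; only the source-tag format context7:<LIBRARY_ID>/<TOPIC> and ttlDays=7 come from the steps above:

```python
def make_source(library_id: str, topic: str) -> str:
    """Build the cache source tag used in Step 5."""
    return f"context7:{library_id}/{topic}"

def cache_topics(library, library_id, topics, fetch, index):
    """Repeat Steps 4-5 for each topic; returns the source tags written.

    fetch(library_id, topic) and index(**kwargs) are stand-ins for the
    query-docs and index_document tool calls (hypothetical signatures).
    """
    sources = []
    for topic in topics:
        content = fetch(library_id, topic)         # Step 4
        src = make_source(library_id, topic)
        index(content=content, source=src,         # Step 5
              library=library, topic=topic, ttlDays=7)
        sources.append(src)
    return sources
```

For example, caching "routing" and "hooks" for one library in a single pass produces two source tags under the same library prefix.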

Source

git clone https://github.com/eidetics/claude-eidetic

Skill file: plugin/plugins/claude-eidetic/skills/cache-docs/SKILL.md

Overview

cache-docs caches fetched external documentation locally so future queries can use search_documents instead of re-fetching large content. This reduces token usage by serving small, indexed chunks (~20 tokens per result) instead of pulling thousands of tokens at query time.
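As a back-of-envelope check using the approximate figures above (all numbers are illustrative, not measurements):

```python
# Rough per-query savings when answering from cache instead of re-fetching.
fetch_cost = 5000    # ~5K+ tokens to re-fetch the docs each time
cached_cost = 20     # ~20 tokens per indexed search result
results_used = 3     # e.g. limit=3, as in the verification step
savings = fetch_cost - cached_cost * results_used
print(savings)  # 4940
```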

How This Skill Works

It parses the library and optional topic, checks the cache with a search_documents call, resolves the library to a library ID, fetches docs with query-docs, and stores them via index_document with a 7-day TTL. It finishes by verifying the cached content with a final search_documents query to confirm the topic or library is cached.
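The flow described above can be sketched end to end. The InMemoryTools stub and its Python method names are hypothetical stand-ins for the MCP tool calls the text describes; only the step order, the source-tag format, and the 7-day TTL come from the skill itself:

```python
class InMemoryTools:
    """Stand-in for the real MCP tools; stores chunks in a list."""
    def __init__(self):
        self.store = []

    def search_documents(self, query, library):
        return [d for d in self.store if d["library"] == library]

    def resolve_library_id(self, name):
        return f"/org/{name}"  # a real resolver returns the best-matching ID

    def query_docs(self, lib_id, topic):
        return f"docs for {lib_id} on {topic}"

    def index_document(self, **chunk):
        self.store.append(chunk)


def cache_docs(library, topic, tools):
    """End-to-end sketch of the skill's Steps 2-6."""
    # Step 2: check the existing cache
    if tools.search_documents(query="overview", library=library):
        return {"status": "cached"}
    # Step 3: resolve the library name to an ID
    lib_id = tools.resolve_library_id(library)
    # Steps 4-5: fetch, then index with a 7-day TTL
    content = tools.query_docs(lib_id, topic or "getting started")
    tools.index_document(content=content,
                         source=f"context7:{lib_id}/{topic}",
                         library=library, topic=topic, ttlDays=7)
    # Step 6: verify the content is now searchable
    found = tools.search_documents(query=topic or library, library=library)
    return {"status": "indexed", "verified": bool(found)}
```

Running it twice against the same stub shows the Step 2 short-circuit: the first call indexes and verifies, the second reports an existing cache.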

When to Use It

  • You frequently answer questions about a single library and want faster responses than live fetches.
  • Network access is slow or expensive, making repeated fetches unattractive.
  • You want to reduce token costs by serving cached content instead of re-fetching.
  • You need consistent results for common topics without hitting external sources each time.
  • You plan to cache multiple topics for the same library over time.

Quick Start

  1. Issue /eidetic:cache-docs <library> <topic> (topic is optional).
  2. The skill resolves the library, fetches docs via query-docs, and caches them with a 7-day TTL.
  3. Verify the cached content by running search_documents(query="<TOPIC or library>", library="<LIBRARY>", limit=3).

Best Practices

  • Always specify a library and, if possible, a focused topic to maximize cache utility.
  • Prioritize core topics like 'getting started' or 'overview' to bootstrap useful caches.
  • Refresh stale caches proactively and verify freshness with a search_documents check.
  • Align TTL with your update cadence; TTL defaults to 7 days but can be adjusted.
  • Periodically spot-check cached content against source docs to catch drift or changes.
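For the TTL bullet, one simple heuristic (an assumption, not part of the skill) is to cap the TTL at the library's typical release cadence:

```python
def ttl_for_cadence(release_interval_days: int, default: int = 7) -> int:
    """Pick a TTL no longer than the library's typical release interval.

    Heuristic sketch only; the 7-day default matches the skill's ttlDays.
    """
    return max(1, min(default, release_interval_days))

print(ttl_for_cadence(30))  # 7: slow-moving library, keep the default
print(ttl_for_cadence(2))   # 2: fast-moving library, refresh sooner
```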

Example Use Cases

  • Cache Python standard library docs to speed up a Python helper assistant.
  • Cache React or Vue docs for a frontend development assistant.
  • Cache NumPy docs for data science help and numerical computing tasks.
  • Cache PostgreSQL docs for a database administration assistant.
  • Cache TensorFlow or PyTorch docs for ML model guidance.
