cog-url-extraction

npx machina-cli add skill a5c-ai/babysitter/url-extraction --openclaw
Files (1): SKILL.md (1.3 KB)

COG URL Extraction Skill

Save URLs and automatically extract insights including title, summary, key takeaways, and credibility assessment.

Capabilities

  • Fetch and analyze URLs for key insights
  • Extract: title, summary, key takeaways, relevant quotes
  • Assess source credibility and assign confidence levels
  • Classify extracted insights by domain
  • Route insights to appropriate vault sections
  • Create cross-references to related vault entries

Tool Use Instructions

  1. Use web-fetch to retrieve URL content
  2. Extract key insights: title, summary, takeaways, quotes
  3. Assess source credibility using domain reputation and content quality
  4. Use file-read to find related existing vault entries
  5. Use file-write to store extracted insights in markdown format
  6. Add cross-references to related entries
  7. Use git-commit to commit extracted insights
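
The extraction step in the instructions above can be sketched in Python. This is a minimal illustration of the shape of the output, not the skill's actual implementation: in the real skill, fetching is done by the agent's web-fetch tool, and the `extract_insights` helper and its naive regex-based parsing are assumptions for demonstration only.

```python
import re

def extract_insights(url: str, html: str) -> dict:
    """Pull a title and a naive summary stub from fetched HTML.

    Hypothetical helper: the real skill delegates fetching to the
    agent's web-fetch tool and performs richer extraction."""
    title_match = re.search(r"<title>(.*?)</title>", html, re.IGNORECASE | re.DOTALL)
    title = title_match.group(1).strip() if title_match else url

    # Strip tags, collapse whitespace, and take the first ~200
    # characters as a crude summary placeholder.
    text = re.sub(r"<[^>]+>", " ", html)
    text = re.sub(r"\s+", " ", text).strip()

    return {
        "url": url,
        "title": title,
        "summary": text[:200],
    }

html = "<html><title>Example Article</title><body><p>Key point one. Key point two.</p></body></html>"
insight = extract_insights("https://example.com/interesting-article", html)
print(insight["title"])  # Example Article
```

A real pipeline would continue by writing the dict out as Markdown (file-write) and committing the result (git-commit), per steps 5–7 above.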

Examples

An example invocation payload:

{
  "vaultPath": "./cog-vault",
  "captureType": "url-dump",
  "urls": [
    "https://example.com/interesting-article",
    "https://blog.example.com/deep-dive-post"
  ],
  "targetQuality": 80
}

Source

git clone https://github.com/a5c-ai/babysitter
The skill file lives at plugins/babysitter/skills/babysit/process/methodologies/cog-second-brain/skills/url-extraction/SKILL.md in the cloned repository.

Overview

Cog URL Extraction saves URLs and automatically extracts insights such as title, summary, key takeaways, and relevant quotes. It also assesses source credibility, classifies insights by domain, and routes them into the appropriate vault sections with cross-references to related entries.

How This Skill Works

The skill uses web-fetch to retrieve page content, then extracts key insights including title, summary, takeaways, and quotes. It assesses credibility based on domain reputation and content quality, routes the result to the correct vault section, and stores the output in Markdown via file-write, with cross-references and a final git-commit.
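
The credibility assessment described above can be illustrated with a toy scoring function. The reputation table, the length-based quality signal, and the score thresholds below are all assumptions for the sketch; SKILL.md does not specify the actual signals or weights.

```python
from urllib.parse import urlparse

# Hypothetical domain-reputation table; real reputation signals
# are not specified in SKILL.md.
DOMAIN_REPUTATION = {
    "journal.org": 0.9,
    "academic.edu": 0.9,
    "news.site": 0.6,
    "blog.example.com": 0.5,
}

def credibility(url: str, word_count: int) -> str:
    """Combine domain reputation with a crude content-quality signal
    (article length) and map the score to a confidence level."""
    domain = urlparse(url).netloc
    score = DOMAIN_REPUTATION.get(domain, 0.4)  # default for unknown domains
    if word_count > 800:
        score += 0.1  # longer pieces get a small quality bonus
    if score >= 0.8:
        return "high"
    if score >= 0.5:
        return "medium"
    return "low"

print(credibility("https://journal.org/research-2024", 1200))  # high
```

In practice the confidence level would be stored alongside the extracted insights so the vault can be filtered by source quality.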

When to Use It

  • You’re bookmarking articles or reports for a research project and want structured takeaways.
  • You’re curating credible sources for a report or briefing and need domain-based classification.
  • You’re building a personal knowledge vault with cross-referenced entries from diverse domains.
  • You’re saving blog posts or opinion pieces of varying credibility and want them tagged with confidence levels.
  • You’re organizing quotes and insights to support a narrative or argument, with routing to vault sections.

Quick Start

  1. Use web-fetch to retrieve the URL content.
  2. Extract title, summary, takeaways, and quotes, and assess credibility.
  3. Use file-write to save the insights as Markdown in the vault, then git-commit to record the change.
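
The final save step above implies a Markdown note layout. The field names and template below are illustrative assumptions, not the skill's canonical format:

```python
def render_note(insight: dict) -> str:
    """Render extracted insights as a Markdown note.

    Hypothetical template: the real skill's note layout is not
    documented in SKILL.md."""
    lines = [
        f"# {insight['title']}",
        "",
        f"Source: {insight['url']}",
        f"Credibility: {insight['credibility']}",
        "",
        "## Summary",
        insight["summary"],
        "",
        "## Key Takeaways",
    ]
    lines += [f"- {t}" for t in insight["takeaways"]]
    return "\n".join(lines)

note = render_note({
    "title": "Example Article",
    "url": "https://example.com/interesting-article",
    "credibility": "medium",
    "summary": "A short summary.",
    "takeaways": ["Point one", "Point two"],
})
print(note.splitlines()[0])  # # Example Article
```

Keeping every note in one consistent template like this is what makes the vault easy to search and cross-reference later.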

Best Practices

  • Verify that the URL actually returns content before processing, to avoid wasting work on dead links.
  • Prefer automated metadata extraction first, then manual review for edge cases.
  • Use domain reputation and content quality signals for credibility scoring.
  • Cross-link new extractions with existing vault entries to build a connected graph.
  • Store outputs as Markdown in a consistent template to ease search and reuse.

Example Use Cases

  • https://example.com/ai-safety-briefing – extracted title, concise summary, takeaways, quotes, credibility score, routed to AI vault.
  • https://blog.example.com/deep-dive-prompting – domain-classified insights with cross-references to existing vault entries.
  • https://journal.org/research-2024-cta – Markdown notes with structured metadata and cross-links.
  • https://news.site/tech-opinion – quotes captured with credibility tag and vault routing.
  • https://academic.edu/paper123 – structured metadata and key takeaways in vault section 'Research'.
