learn
npx machina-cli add skill agenticnotetaking/arscontexta/learn --openclaw
EXECUTE NOW
Topic: $ARGUMENTS
Parse immediately:
- If topic provided: research that topic
- If topic empty: read self/goals.md for the highest-priority unexplored direction and propose it
- If topic includes --deep/--light/--moderate: force that depth and strip the flag from the topic
- If no topic and no goals.md: ask "What would you like to research?"
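The parse rules above can be sketched in Python. This is illustrative only: the flag names come from the spec, but the token-based scan is an assumption about how "strip flag from topic" is applied.

```python
# Hypothetical sketch of the $ARGUMENTS parse step.
DEPTH_FLAGS = {"--deep": "deep", "--light": "light", "--moderate": "moderate"}

def parse_args(arguments: str):
    """Return (topic or None, forced depth or None)."""
    depth = None
    topic_tokens = []
    for tok in arguments.split():
        if tok in DEPTH_FLAGS:
            depth = DEPTH_FLAGS[tok]   # explicit flag wins; strip it from topic
        else:
            topic_tokens.append(tok)
    return (" ".join(topic_tokens) or None, depth)
```

An empty topic with no forced depth signals the goals.md fallback path.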
Steps:
- Read config — tool preferences, depth, domain vocabulary
- Determine depth — from flags, config default, or fallback to moderate
- Research — tool cascade: primary → fallback → last resort
- File to inbox — with full provenance metadata
- Chain to processing — next step based on pipeline chaining mode
- Update goals.md — append new research directions discovered
START NOW. Reference below explains methodology.
Step 1: Read Configuration
ops/config.yaml — research tools, depth, pipeline chaining
ops/derivation-manifest.md — domain vocabulary (inbox folder, reduce skill name)
From config.yaml (defaults if missing):
```yaml
research:
  primary: exa-deep-research    # exa-deep-research | exa-web-search | web-search
  fallback: exa-web-search
  last_resort: web-search
  default_depth: moderate       # light | moderate | deep
pipeline:
  chaining: suggested           # manual | suggested | automatic
```
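A minimal sketch of overlaying a partial config.yaml on these defaults. The section and key names come from the spec above; the section-by-section merge is an assumption.

```python
# Defaults as specified; user config overrides key by key within each section.
DEFAULTS = {
    "research": {
        "primary": "exa-deep-research",
        "fallback": "exa-web-search",
        "last_resort": "web-search",
        "default_depth": "moderate",
    },
    "pipeline": {"chaining": "suggested"},
}

def merge_defaults(user_cfg: dict, defaults: dict = DEFAULTS) -> dict:
    """Overlay a (possibly partial) user config on the defaults."""
    merged = {}
    for section, values in defaults.items():
        merged[section] = {**values, **user_cfg.get(section, {})}
    return merged
```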
From derivation-manifest.md (universal defaults if missing):
- Inbox folder: inbox/ (could be journal/, encounters/, etc.)
- Reduce skill name: /reduce (could be /surface, /break-down, etc.)
- Domain name and hub MOC name
Step 2: Determine Depth
Priority: explicit flag > config default > moderate
| Depth | Tool | Sources | Duration | Use When |
|---|---|---|---|---|
| light | WebSearch | 2-3 | ~5s | Checking a specific fact |
| moderate | mcp__exa__web_search_exa | 5-8 | ~10-30s | Exploring a subtopic |
| deep | mcp__exa__deep_researcher_start | Comprehensive | 15s-3min | Major research direction |
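The priority rule and the depth-to-tool mapping from the table can be sketched as follows (tool names are from the table; the function shape is illustrative):

```python
# Depth -> tool mapping, per the table above.
DEPTH_TOOLS = {
    "light": "WebSearch",
    "moderate": "mcp__exa__web_search_exa",
    "deep": "mcp__exa__deep_researcher_start",
}

def resolve_depth(flag_depth=None, config_depth=None):
    """Priority: explicit flag > config default > moderate."""
    depth = flag_depth or config_depth or "moderate"
    return depth, DEPTH_TOOLS[depth]
```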
Step 3: Research — Tool Cascade
Output header:
Researching: [topic]
Depth: [depth]
Using: [tool name]
Try tools in config priority order. If a tool fails (MCP unavailable, error, empty results), fall through to the next tier. If ALL tiers fail:
FAIL: Research failed — no research tools available
Tried:
1. [primary] — [error]
2. [fallback] — [error]
3. WebSearch — [error]
Try again later or manually add research to [inbox-folder]/
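The cascade can be sketched as a loop over (name, callable) pairs in priority order, where both an exception and an empty result drop to the next tier. This helper is hypothetical, not part of the skill itself:

```python
def run_cascade(topic, tools):
    """tools: list of (name, search_fn) in priority order.
    Returns (winning tool name, results, failures so far)."""
    failures = []
    for name, search in tools:
        try:
            results = search(topic)
            if results:
                return name, results, failures
            failures.append((name, "empty results"))
        except Exception as exc:        # MCP unavailable, tool error, etc.
            failures.append((name, str(exc)))
    return None, None, failures         # all tiers failed: emit the FAIL report
```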
Tool Invocation Patterns
exa-deep-research:
mcp__exa__deep_researcher_start
instructions: "Research comprehensively: [topic]. Focus on practical findings, key patterns, recent developments, and actionable insights."
model: "exa-research-fast" (moderate) | "exa-research" (deep)
Poll with mcp__exa__deep_researcher_check until completed. Output during wait:
Research ID: [id]
Waiting for results...
exa-web-search:
mcp__exa__web_search_exa query: "[topic]" numResults: 8
web-search (last resort, also used for light depth):
WebSearch query: "[topic]"
On completion: Research complete — [source count] sources analyzed
Step 4: File Results to Inbox
Filename: YYYY-MM-DD-[slugified-topic].md — lowercase, spaces to hyphens, no special chars.
Write to the domain inbox folder (from derivation-manifest, default inbox/). Create folder if missing.
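The filename rule can be sketched in a few lines; the regex-based slug is an assumption consistent with "lowercase, spaces to hyphens, no special chars":

```python
import re
from datetime import date

def inbox_filename(topic, on=None):
    """YYYY-MM-DD-[slugified-topic].md: lowercase, spaces to
    hyphens, special characters dropped."""
    slug = re.sub(r"[^a-z0-9]+", "-", topic.lower()).strip("-")
    return f"{(on or date.today()).isoformat()}-{slug}.md"
```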
Provenance Frontmatter
Every field serves the provenance chain. The exa_prompt field is most critical — it captures the intellectual context that shaped the research.
---
description: [1-2 sentence summary of key findings]
source_type: exa-deep-research | exa-web-search | web-search
exa_prompt: "[full query/instruction string sent to the research tool]"
exa_research_id: "[deep researcher ID, omit for web search]"
exa_model: "[exa-research-fast | exa-research, omit for web search]"
exa_tool: "[mcp tool name, omit for deep researcher]"
generated: [ISO 8601 timestamp — run: date -u +"%Y-%m-%dT%H:%M:%SZ"]
domain: "[domain name from derivation-manifest]"
topics: ["[[domain-hub-moc]]"]
---
Include only the fields relevant to the tool used:
- Deep researcher: source_type, exa_prompt, exa_research_id, exa_model, generated, domain, topics
- Exa web search: source_type, exa_prompt, exa_tool, generated, domain, topics
- WebSearch: source_type, exa_prompt, exa_tool, generated, domain, topics
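The per-tool field selection can be sketched as a lookup plus a small builder. The field names come from the spec; the builder itself is hypothetical:

```python
# Fields common to every tool, plus tool-specific extras, per the lists above.
COMMON = ["exa_prompt", "generated", "domain", "topics"]
FIELDS_BY_TOOL = {
    "exa-deep-research": ["exa_research_id", "exa_model"],
    "exa-web-search": ["exa_tool"],
    "web-search": ["exa_tool"],
}

def frontmatter(meta):
    """Emit YAML frontmatter containing only the fields relevant to the tool."""
    keep = ["description", "source_type"] + FIELDS_BY_TOOL[meta["source_type"]] + COMMON
    body = "\n".join(f"{k}: {meta[k]}" for k in keep if k in meta)
    return f"---\n{body}\n---"
```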
Body Structure
Format for downstream reduce extraction — findings as clear propositions, not raw dumps:
# [Topic Title]
## Key Findings
[Synthesized findings organized by theme, not by source. Each finding
should be a clear proposition the reduce phase can extract as an atomic insight.]
## Sources
[List of sources with titles and URLs]
## Research Directions
[New questions, unexplored angles, follow-up topics. These feed goals.md.]
Step 5: Chain to Processing
Read chaining mode from config (default: suggested).
Research complete
Filed to: [inbox-folder]/[filename]
Next: /[reduce-skill-name] [inbox-folder]/[filename]
Append based on mode:
- manual: (nothing extra)
- suggested: append "Ready for processing when you are."
- automatic: replace the "Next" line with "Queued for /[reduce-skill-name] -- processing will begin automatically."
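The three chaining modes can be sketched as a small helper (output strings are from the spec; the function is hypothetical):

```python
def chain_footer(mode, inbox, filename, reduce_skill):
    """Completion block, varied by the pipeline.chaining mode."""
    lines = [
        "Research complete",
        f"Filed to: {inbox}/{filename}",
        f"Next: /{reduce_skill} {inbox}/{filename}",
    ]
    if mode == "automatic":
        # Replace the "Next" line entirely.
        lines[-1] = (f"Queued for /{reduce_skill} -- "
                     "processing will begin automatically.")
    elif mode == "suggested":
        lines.append("Ready for processing when you are.")
    return "\n".join(lines)      # manual: nothing extra
```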
Step 6: Update goals.md
If self/goals.md exists AND the research uncovered meaningful new directions:
- Read goals.md, match existing format
- Append under the appropriate section:
- [New direction] (discovered via /learn: [original topic])
Skip silently if goals.md missing or no meaningful directions found. Do not add filler.
Output Summary
Clean output wrapping the full flow:
ars contexta
Researching: [topic]
Depth: [depth]
Using: [tool name]
[Research ID: abc-123]
Research complete -- [N] sources analyzed
Filed to: [inbox-folder]/[filename]
Next: /[reduce-skill-name] [inbox-folder]/[filename]
[chaining context]
[goals.md updated with N new research directions]
Error Handling
| Error | Behavior |
|---|---|
| No topic, no goals.md | Ask: "What would you like to research?" |
| Exa MCP unavailable | Fall through cascade to WebSearch |
| All tools fail | Report failures with FAIL status, suggest manual inbox filing |
| Deep researcher timeout (>5 min) | Report timeout, suggest --moderate |
| Empty results | Report "No results found", suggest refining topic |
| Config files missing | Use defaults silently |
| Inbox folder missing | Create it before writing |
Skill Selection Routing
After /learn, the self-building loop continues:
| Phase | Skill | Purpose |
|---|---|---|
| Extract insights | /[reduce-name] | Mine research for atomic propositions |
| Find connections | /[reflect-name] | Link new insights to existing graph |
| Update old notes | /[reweave-name] | Backward pass on touched notes |
| Quality check | /[verify-name] | Description quality, schema, links |
/learn is the entry point. Each run feeds the graph, and the graph feeds the next direction through goals.md.
Source
https://github.com/agenticnotetaking/arscontexta/blob/main/skill-sources/learn/SKILL.md
Overview
learn researches a topic to grow your knowledge graph, leveraging the Exa deep researcher, Exa web search, or basic WebSearch. It files results with full provenance and chains outcomes into the processing pipeline, triggering on /learn, /learn [topic], "research this", or "find out about".
How This Skill Works
When invoked, learn reads configuration to determine tool depth and preferences, then performs a cascade search (deep researcher, then web search, then last-resort WebSearch). Results are saved to the inbox with provenance metadata and routed to the next processing step based on the pipeline mode, followed by updating goals.md with new directions.
When to Use It
- Research a specific topic provided as the argument.
- No topic given: derive the next high-priority direction from goals.md.
- Force depth with --deep, --moderate, or --light flags.
- Require results to be saved with full provenance for auditing the knowledge graph.
- Chain the research output to the next processing step per the configured pipeline.
Quick Start
- Step 1: Trigger with /learn [topic] or /learn to pick the top unexplored direction.
- Step 2: Learn reads config, sets depth, and runs the tool cascade (deep researcher → web search → WebSearch).
- Step 3: Results are saved to inbox with provenance and chained to the next processing step; goals.md is updated.
Best Practices
- Define a precise topic and scope before triggering learn.
- Prefer explicit depth (deep/moderate/light) to control effort and time.
- Ensure the inbox file includes complete provenance frontmatter.
- Review and align domain vocabulary in derivation-manifest to improve consistency.
- Update goals.md with new research directions discovered during the run.
Example Use Cases
- Research 'neural architecture search' using deep depth and file to inbox with provenance; then chain to synthesis pipeline.
- Explore 'blockchain scalability' with moderate depth and attach findings to the knowledge graph with provenance.
- No topic provided: propose the top unexplored direction from goals.md and start research.
- Force light depth to quickly fact-check a specific claim and log results.
- After results are saved, route them into the next processing step (e.g., summarize or reduce) according to the pipeline.