Install:

npx machina-cli add skill swathidbhat/Claude-Skills/non-fiction-book-review --openclaw

Book Review Skill

Introduction

Purpose

This skill helps readers document their experience with a book — not produce a generic summary.

A book review should capture what you noticed, what your mind caught onto, what resonated with your context. The interactive interview exists because recall is the point: articulating what you remember cements learning. AI fills gaps; it does not replace thinking.

The goal is threefold:

  1. Prompt active recall — force articulation before showing what was highlighted
  2. Surface blind spots — compare stated understanding against actual annotations
  3. Document the reader's experience — produce a summary that reflects their perspective, not a Wikipedia entry

This skill should reduce over-reliance on AI by making the user do the work first.

When to Apply

Trigger this skill when:

  • User mentions wanting a book review, book summary, or book notes
  • User provides reading data: highlights, annotations, bookmarks, or notes
  • Format can be anything: JSON, HTML, CSV, export, plain copy-paste

Do not trigger for:

  • Fiction books (this workflow assumes non-fiction with arguments/themes)
  • Requests for a summary without the user's own reading data
  • Academic citation or bibliography tasks

What This Skill Governs

Workflow

1. Intake

Parse the user's reading data. Extract:

  • Highlights/quotes
  • Bookmarks (chapter locations)
  • User annotations/notes

Confirm book title and author. If unclear, ask.
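As a sketch of how the intake step might parse a JSON export, the snippet below pulls out highlights, bookmarks, and notes. The field names (`title`, `author`, `highlights`, `bookmarks`, `notes`) are assumptions about the export shape, not a fixed schema; real exports vary, which is why missing keys fall back to empty lists.

```python
import json

def parse_reading_data(raw: str) -> dict:
    """Parse a JSON highlights export into highlights, bookmarks, and notes.

    The key names below are assumptions about a typical export shape;
    missing keys default to empty lists so later steps never crash.
    """
    data = json.loads(raw)
    return {
        "title": data.get("title"),
        "author": data.get("author"),
        "highlights": data.get("highlights", []),
        "bookmarks": data.get("bookmarks", []),
        "notes": data.get("notes", []),
    }

raw = ('{"title": "Chokepoints", "author": "Edward Fishman", '
       '"highlights": [{"chapter": "2", "text": "Sanctions are leverage."}]}')
parsed = parse_reading_data(raw)
```

If `title` or `author` comes back as `None`, that is the cue to ask the user rather than guess.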

2. Interview

Ask all four questions at once:

Before I evaluate your notes, I'd like to hear your recall. Answer as briefly or thoroughly as you like:

1. What was the core argument of the book? What were the key supporting points?
2. What did you find insightful?
3. What were you skeptical about?
4. How can you apply what you learned?

Wait for the user's full response before proceeding.

3. Evaluate

After receiving answers, assess against three sources:

A. Their highlights

  • Identify themes they highlighted but didn't mention
  • Note bookmarked chapters they didn't discuss
  • Check if stated "core argument" aligns with what they actually marked

B. Factual accuracy

  • Check for inaccuracies in their recall
  • Correct gently with evidence from their own highlights or reliable sources

C. External themes

  • Search reliable sources (publisher descriptions, reputable reviews, author interviews) for major themes
  • If they missed a commonly-discussed theme, flag it: "Critics often highlight X as a central theme — your highlights touch on this in Chapter Y, but you didn't mention it. Worth revisiting?"

Tone: Gentle nudges. The user decides what matters to them.

Examples:

  • "You highlighted several passages about X but didn't mention it — anything there?"
  • "Your heaviest annotations were in the section on Y. Deliberately omitted, or slipped your mind?"
  • "One factual note: the book states Z happened in 1974, not 1971."
  • "Reviewers frequently cite the author's argument about W as central. You bookmarked that chapter but didn't mention it — intentional?"

Power stays with the user. After flagging gaps, ask: "Any of these worth adding to your summary, or are you happy with your current framing?"
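One way to mechanize step A (themes highlighted but not mentioned) is crude keyword-frequency overlap: count frequent words in the highlights and flag those absent from the recall. Everything here is an illustrative assumption, not part of the skill: the stopword list, the length cutoff, and the function names.

```python
from collections import Counter
import re

# Minimal stopword list; a real implementation would use a fuller one.
STOPWORDS = {"the", "a", "an", "and", "of", "to", "in", "that", "is", "it", "on", "for"}

def keywords(text: str) -> Counter:
    """Crude keyword frequency: lowercase words, minus stopwords and short words."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(w for w in words if w not in STOPWORDS and len(w) > 3)

def unmentioned_themes(highlights: list[str], recall: str, top_n: int = 5) -> list[str]:
    """Return frequent highlight keywords that never appear in the user's recall."""
    highlighted = keywords(" ".join(highlights))
    recalled = set(keywords(recall))
    return [w for w, _ in highlighted.most_common() if w not in recalled][:top_n]

gaps = unmentioned_themes(
    ["The petrodollar system underpins sanctions leverage.",
     "Petrodollar recycling shaped U.S. financial power."],
    "The book argues sanctions are America's main economic weapon.",
)
```

The output is only a list of candidate gaps to phrase as gentle nudges; the user still decides what matters.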

4. Output

Create a one-page summary combining:

  • Core thesis — from user's response, supplemented by highlights
  • Key insights — what they found valuable + overlooked highlights
  • Critiques — their stated skepticism
  • Applications — how they'll use it
  • Gaps noted — brief mention of themes they chose to omit (optional, user decides)

5. Export

Offer:

  1. Markdown file
  2. Word document (.docx) for Google Drive — follow /mnt/skills/public/docx/SKILL.md
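A minimal sketch of the Markdown export option might render the five summary sections in order, skipping any the user chose to omit. The section names mirror the Output list above; using them as dict keys is an assumption for illustration.

```python
def to_markdown(summary: dict) -> str:
    """Render the one-page summary as Markdown, skipping empty sections."""
    lines = [f"# {summary['title']} ({summary['author']})", ""]
    # Section order mirrors the Output step; absent keys are simply skipped.
    for heading in ("Core thesis", "Key insights", "Critiques", "Applications", "Gaps noted"):
        body = summary.get(heading)
        if body:
            lines += [f"## {heading}", "", body, ""]
    return "\n".join(lines)

md = to_markdown({
    "title": "Chokepoints",
    "author": "Edward Fishman",
    "Core thesis": "Economic chokepoints as instruments of statecraft.",
})
```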

Examples

Correct Usage

User: "I just finished Chokepoints by Edward Fishman. Here are my highlights: [JSON link]. Can you help me create a book review?"

Claude: Parses JSON. Confirms title/author. Asks the four questions. Waits.

User: Responds to all four.

Claude: Evaluates recall against highlights. Notes: "You highlighted extensively on the petrodollar mechanics but didn't mention it — worth including?" Checks reliable sources, flags one theme the user missed. Asks if they want to incorporate any gaps. Produces one-page summary in user's writing style. Offers export.


Incorrect Usage

Scenario | Why it's wrong
Producing a summary immediately without asking the four questions | Skips the recall step; defeats the purpose
Writing in generic "book report" style | Should match user's voice and style
Overwhelming user with every gap found | User decides what matters; flag, don't lecture
Using promotional language ("must-read," "brilliant") | Analytical, not promotional
Searching extensively to pad the summary | Search only to verify facts or flag missed themes
Triggering for fiction or books without user's reading data | Skill assumes non-fiction + user annotations

Principles

  1. User's voice first. Match their writing style. The summary reflects their experience, not a generic overview.

  2. Recall before assistance. Always interview before evaluating. The act of articulation is the learning.

  3. Highlights reveal attention. What someone marks shows what resonated. Use this as ground truth for evaluating recall.

  4. Gaps are suggestions, not corrections. Flag what they missed. They decide what to keep.

  5. No promotional language. Write clean, analytical prose. No "definitive," "essential," "groundbreaking."

  6. Brevity over completeness. One page means tradeoffs. Prioritize what the user cared about.


Format Handling

Format | Parsing approach
JSON | Parse highlights, bookmarks, notes arrays
HTML | Extract highlighted text, annotations
CSV | Map columns to quotes/notes/chapters
Export | Parse clippings format
Plain text | Treat as raw highlights; ask for clarification if ambiguous

If format is unclear, ask the user to clarify structure.
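For the CSV row of the table above, a sketch of the column mapping might look like this. The column names (`chapter`, `quote`, `note`) are assumptions; real exports use varying headers, so an unclear structure should prompt a question rather than a guess.

```python
import csv
import io

def parse_csv_export(raw: str) -> list[dict]:
    """Map CSV columns to quotes/notes/chapters.

    The column names below are assumptions about one possible export;
    missing columns fall back to empty strings.
    """
    reader = csv.DictReader(io.StringIO(raw))
    return [
        {"quote": row.get("quote", ""),
         "note": row.get("note", ""),
         "chapter": row.get("chapter", "")}
        for row in reader
    ]

rows = parse_csv_export("chapter,quote,note\n3,Sanctions are leverage.,Key claim\n")
```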

Source

View on GitHub: https://github.com/swathidbhat/Claude-Skills/blob/main/non-fiction-book-review/SKILL.md

Overview

This skill helps you capture a book's impact through your own highlights and reflections. You provide highlights, annotations, or other reading data, then answer a four-question recall interview; your recall is evaluated against your highlights for accuracy and missed themes. The result is a concise, one-page summary written in your voice.

How This Skill Works

1) Intake: parse your reading data (highlights, bookmarks, notes) and confirm the book title and author. 2) Interview: present all four reflection questions at once and wait for your response. 3) Evaluate: compare your recall to your highlights, check factual accuracy, and surface external themes; flag gaps. 4) Output: generate a one-page summary in your voice.

When to Use It

  • You want a book review, summary, or notes and you have your highlights or notes in a data format (JSON, HTML, CSV, export, copy-paste).
  • You want to test your recall against your own notes and surface themes you might have missed.
  • You need a concise, one-page summary written in your own voice rather than a generic synopsis.
  • You’re reviewing a non-fiction work (not fiction) and provide structured reading data with highlights/annotations.
  • You prefer a guided, gentle evaluation that centers your perspective before refining the summary.

Quick Start

  1. Provide your reading data (highlights, annotations, bookmarks) and confirm the book title and author.
  2. Respond to the four recall questions; the system will evaluate your recall against your notes.
  3. Review the generated one-page summary in your voice and adjust as needed.

Best Practices

  • Provide clear reading data: include highlights/quotes, bookmarks by chapter, and your annotations.
  • Answer all four recall questions clearly before proceeding to evaluation.
  • Verify the book title and author to ensure accurate context for your summary.
  • Be explicit about your context and goals so the summary aligns with your needs.
  • Review the generated one-page summary and flag any remaining gaps or misalignments.

Example Use Cases

  • A reader uploads JSON highlights for a non-fiction book on economics, answers the four recall questions, and receives a tailored, one-page summary in their own voice.
  • A student provides notes and bookmarks from a climate policy book, then reviews the AI-assisted evaluation for missed themes before finalizing their summary.
  • A professional shares annotations from a leadership book in CSV form, triggers the recall interview, and obtains a succinct, voice-matched summary.
  • A reader pastes excerpts from a neuroscience text, uses the four-question interview to reflect, and gets a summary that highlights their practical takeaways.
  • A user seeks to check factual accuracy against their notes and uses the tool to surface external themes with gentle nudges and optional corrections.
