rlm-chunking

npx machina-cli add skill zircote/rlm-rs-plugin/rlm-chunking --openclaw

RLM Chunking Strategy Guide

Select and configure the optimal chunking strategy for documents processed through the RLM workflow. Different content types require different approaches to maintain semantic coherence while respecting context limits.

Available Strategies

Fixed Chunking

Split content at exact byte boundaries with optional overlap.

Best for:

  • Unstructured text (logs, raw output)
  • Content without clear section markers
  • Maximum control over chunk sizes

Configuration:

rlm-rs load <file> --chunker fixed --chunk-size 6000 --overlap 1000

Parameters:

  • --chunk-size: Characters per chunk (default: 6000, max: 50000)
  • --overlap: Characters shared between adjacent chunks (default: 0)

Trade-offs:

  • Pro: Predictable chunk sizes, simple to reason about
  • Con: May split mid-sentence or mid-concept
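The fixed-with-overlap scheme above is simple enough to sketch. This is an illustrative Python sketch of the concept, not the rlm-rs implementation: each chunk starts `chunk_size - overlap` characters after the previous one, so adjacent chunks share `overlap` characters.

```python
def fixed_chunks(text: str, chunk_size: int = 6000, overlap: int = 0) -> list[str]:
    """Split text into fixed-size chunks; adjacent chunks share `overlap` chars."""
    if chunk_size <= overlap:
        raise ValueError("chunk_size must be greater than overlap")
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

# 15000 chars at size 6000 with overlap 1000 -> 3 chunks,
# each starting 5000 chars after the previous one.
chunks = fixed_chunks("a" * 15000, chunk_size=6000, overlap=1000)
```

Note the failure mode the "Con" describes: the boundaries fall at fixed offsets regardless of what the text contains, so a sentence or concept can be cut mid-stream unless overlap covers it.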

Semantic Chunking

Split at natural boundaries (headings, paragraph breaks, code blocks).

Best for:

  • Markdown documents
  • Source code files
  • Structured data (JSON, XML, YAML)
  • Documentation with clear sections

Configuration:

rlm-rs load <file> --chunker semantic --chunk-size 6000

Behavior:

  • Respects heading hierarchy (h1 > h2 > h3)
  • Keeps code blocks together when possible
  • Preserves paragraph boundaries
  • Falls back to sentence boundaries for dense text

Trade-offs:

  • Pro: Maintains semantic coherence
  • Con: Chunk sizes may vary significantly
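The core idea of semantic chunking — and the reason chunk sizes vary — can be sketched as greedy packing of whole units (here, paragraphs; rlm-rs also considers headings and code blocks) up to the size limit. A minimal sketch, illustrative only:

```python
def semantic_chunks(text: str, chunk_size: int = 6000) -> list[str]:
    """Pack whole paragraphs into chunks without exceeding chunk_size.

    A single paragraph larger than chunk_size becomes its own chunk,
    so chunk sizes vary instead of being exact.
    """
    paragraphs = [p for p in text.split("\n\n") if p.strip()]
    chunks: list[str] = []
    current = ""
    for para in paragraphs:
        candidate = current + "\n\n" + para if current else para
        if len(candidate) <= chunk_size:
            current = candidate  # paragraph still fits; keep packing
        else:
            if current:
                chunks.append(current)  # flush at a paragraph boundary
            current = para
    if current:
        chunks.append(current)
    return chunks
```

Because boundaries always fall between units, no paragraph is ever split — the trade-off is that one chunk may be 300 characters and the next 5900.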

Parallel Chunking

Multi-threaded chunking for very large files.

Best for:

  • Files > 10MB
  • When processing time is critical
  • Bulk loading multiple files

Configuration:

rlm-rs load <file> --chunker parallel --chunk-size 100000

Trade-offs:

  • Pro: 2-4x faster for large files
  • Con: Same limitations as fixed chunking
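Why parallel chunking shares fixed chunking's limitations: the boundaries can be computed up front from offsets alone, which is exactly what makes the work embarrassingly parallel. A hypothetical sketch (rlm-rs presumably uses native threads over byte ranges; this just illustrates the shape of the approach):

```python
from concurrent.futures import ThreadPoolExecutor


def parallel_fixed_chunks(text: str, chunk_size: int, workers: int = 4) -> list[str]:
    """Compute chunk start offsets up front, then build each chunk on a worker.

    Illustrative only -- slicing a Python string is not where real speedups
    come from; the point is that fixed boundaries need no sequential scan.
    """
    starts = range(0, len(text), chunk_size)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # pool.map preserves input order, so chunks come back in file order.
        return list(pool.map(lambda i: text[i:i + chunk_size], starts))
```

Semantic chunking, by contrast, needs to see the content to place a boundary, which is why it does not parallelize the same way.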

Selection Guide

Content Type       Strategy    Chunk Size   Overlap
Markdown docs      semantic    6000         0
Source code        semantic    6000         0
JSON/XML           semantic    6000         0
Plain text         fixed       6000         500
Log files          fixed       6000         1000
Mixed content      semantic    6000         0
Very large files   parallel    6000         0

Chunk Size Guidelines

Context Window Considerations

The default chunk size of 6000 characters (~1500 tokens, assuming the usual ~4 characters per token) is optimized for semantic search quality:

  • Default: 6000 chars (~1500 tokens) - best for semantic search
  • Maximum: 50000 chars (~12,500 tokens) - fewer, larger chunks
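The chunk-count arithmetic is worth doing before loading a large file. A quick back-of-envelope helper (the 4 chars/token ratio is a heuristic, not a guarantee):

```python
import math


def estimate(file_chars: int, chunk_size: int = 6000,
             chars_per_token: int = 4) -> tuple[int, int]:
    """Return (approximate chunk count, approximate tokens per chunk)."""
    n_chunks = math.ceil(file_chars / chunk_size)
    tokens_per_chunk = chunk_size // chars_per_token
    return n_chunks, tokens_per_chunk

# A 1 MB file at the default size: ~167 chunks of ~1500 tokens each.
n_chunks, tokens = estimate(1_000_000)
```

If 167 chunks is too many for your workflow, raising `chunk_size` toward the 50000 maximum cuts the count roughly proportionally, at ~12,500 tokens per chunk.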

Adjusting Size

Increase chunk size (up to 50000) when:

  • Content has many small sections
  • Need fewer total chunks
  • Sections are tightly interrelated

Decrease chunk size when:

  • Sub-LLM responses are getting truncated
  • Need more granular analysis
  • Content is dense with important details

Overlap Configuration

Overlap ensures context continuity between chunks.

When to use overlap:

  • Content has flowing narrative
  • Important context may span boundaries
  • Searching for patterns that cross chunk boundaries

Typical values:

  • 0: Structured content with clear boundaries
  • 500-1000: Narrative text, logs
  • 2000+: Dense technical content where context matters
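The boundary-spanning case is easy to demonstrate concretely. In this sketch (illustrative fixed-split, not rlm-rs itself), a pattern that straddles the 6000-character boundary is invisible without overlap but lands whole inside a chunk once overlap is added:

```python
def split_fixed(text: str, chunk_size: int, overlap: int) -> list[str]:
    """Fixed-size split where adjacent chunks share `overlap` characters."""
    step = chunk_size - overlap if overlap else chunk_size
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

# "ERROR_2941" is a hypothetical marker placed across the first boundary.
text = "x" * 5995 + "ERROR_2941" + "x" * 5995

without = split_fixed(text, 6000, 0)        # split lands mid-pattern
with_overlap = split_fixed(text, 6000, 500) # overlap covers the boundary

assert not any("ERROR_2941" in c for c in without)
assert any("ERROR_2941" in c for c in with_overlap)
```

This is why log-style content, where interesting lines can fall anywhere, gets 500-1000 characters of overlap in the recommendations above.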

Verification Commands

After loading, verify chunking results:

# Show buffer details including chunk count
rlm-rs show <buffer_name> --chunks

# View chunk boundaries
rlm-rs chunk-indices <buffer_name>

# Preview first chunk content
rlm-rs peek <buffer_name> --start 0 --end 3000

Common Patterns

Log Analysis

rlm-rs load server.log --name logs --chunker fixed --chunk-size 6000 --overlap 500

Documentation Processing

rlm-rs load docs.md --name docs --chunker semantic

Codebase Analysis

# Concatenate multiple files first, then load
cat src/*.rs > combined.rs
rlm-rs load combined.rs --name code --chunker semantic --chunk-size 6000

Large Dataset

rlm-rs load dataset.jsonl --name data --chunker parallel --chunk-size 6000

Semantic Search Considerations

Chunking strategy affects search quality:

  • Semantic chunking works best with semantic search because chunks align with conceptual boundaries
  • Fixed chunking with overlap helps ensure search queries match content that might span chunk boundaries
  • Embeddings are generated automatically on first search - no manual step required

Troubleshooting

Chunks too small: Increase --chunk-size or switch to semantic chunking.

Important content split: Add overlap or switch to semantic chunking.

Processing too slow: Use parallel chunking for files > 5MB.

Sub-LLM truncating responses: Decrease chunk size to allow more output space.

Search missing relevant content: Try increasing overlap or switching to semantic chunking.

Source

https://github.com/zircote/rlm-rs-plugin/blob/main/skills/rlm-chunking/SKILL.md

Overview

This guide helps you select and configure the optimal chunking strategy for documents in the RLM workflow. It covers fixed, semantic, and parallel chunkers, plus chunk size, overlap, trade-offs, and verification steps to balance semantic coherence and performance.

How This Skill Works

RLM uses a chosen chunker (fixed, semantic, or parallel) via rlm-rs to split input into chunks based on chunk-size and overlap. Fixed chunking breaks at exact byte boundaries; semantic chunking preserves headings, paragraphs, and code blocks; parallel chunking speeds up processing for very large files with multi-threading. After loading, you can verify results with commands to inspect chunk counts, boundaries, and sample content.

When to Use It

  • Working with unstructured logs or plain text where predictable chunk sizes matter (Fixed chunking).
  • Processing Markdown docs, source code, or JSON/XML where semantic boundaries matter (Semantic chunking).
  • Handling very large files and needing faster processing (Parallel chunking).
  • You need to balance fewer chunks with coherent cross-boundary context (adjust chunk-size and overlap).
  • You want to verify chunk boundaries and contents after loading (use verification commands).

Quick Start

  1. Assess the content type and choose a chunker (e.g., semantic for structured docs).
  2. Run the load command with an appropriate chunk size: rlm-rs load <file> --chunker semantic --chunk-size 6000
  3. Verify the results: rlm-rs show <buffer_name> --chunks, rlm-rs chunk-indices <buffer_name>, rlm-rs peek <buffer_name> --start 0 --end 3000

Best Practices

  • Match the strategy to content type: semantic for structured content, fixed for unstructured text.
  • Start with the default 6000-character chunk size; adjust up to 50000 for fewer chunks.
  • Use overlap thoughtfully: 0 for clear boundaries; 500–1000 for narrative or logs; 2000+ for dense technical content.
  • Verify results with rlm-rs show, rlm-rs chunk-indices, and rlm-rs peek to confirm boundaries and samples.
  • Test multiple samples from your data to optimize the trade-off between chunk count and semantic coherence.

Example Use Cases

  • Chunking a Markdown API doc semantically to keep headings and code blocks intact.
  • Chunking a JSON/XML data dump semantically to preserve structural boundaries.
  • Chunking server logs with fixed chunking (6000 chars) and 1000-char overlap for context.
  • Chunking a 50MB data file in parallel (--chunk-size 100000) to speed up processing.
  • Chunking mixed content (markdown + JSON) semantically to maintain coherence across types.
