RAG chunking is the process of splitting source documents into smaller, retrievable units (“chunks”) before indexing them for retrieval-augmented generation (RAG). The chunking strategy—size, overlap, and boundary rules—directly affects retrieval quality, context relevance, latency, and the likelihood that the model will produce grounded answers.
What is RAG Chunking?
In a RAG pipeline, documents are ingested, transformed into chunks, embedded, and stored in a retrieval index (often a vector database). At query time, the system retrieves the top-k chunks and supplies them to the LLM as context. Chunking is therefore a core design choice: if chunks are too large, retrieval may return irrelevant text and waste context window tokens; if chunks are too small, important information may be fragmented across multiple chunks and not retrieved together.
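The query-time step described above can be sketched end to end. The bag-of-words "embedding" and in-memory index below are toy stand-ins for illustration only; a real pipeline uses a learned embedding model and a vector database:

```python
# Toy sketch of query-time retrieval over pre-chunked text.
# The "embedding" is a bag-of-words count vector (a stand-in for a
# learned embedding model); similarity is plain cosine.
from collections import Counter
import math

def embed(text: str) -> Counter:
    # Stand-in for an embedding model: lowercase bag-of-words counts.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    # Rank all chunks by similarity to the query and return the top-k.
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]

chunks = [
    "Chunking splits documents into retrievable units.",
    "Overlap preserves continuity across chunk boundaries.",
    "Vector databases store embeddings for similarity search.",
]
top = retrieve("how does chunk overlap work", chunks, k=1)
```

The retrieved `top` chunks are what would be passed to the LLM as context.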
Chunking can be done by fixed token/character windows, by document structure (headings, paragraphs, tables), or by semantic segmentation (splitting where topics change). Many systems use overlap (e.g., 10–20%) to preserve continuity across boundaries, and attach metadata (section titles, page numbers, source URLs) so retrieved chunks can be traced and filtered.
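Fixed-window chunking with overlap and positional metadata can be sketched as follows. The character-based window and the default `size`/`overlap` values are illustrative; production systems usually count tokens with the embedding model's tokenizer:

```python
# Minimal fixed-size chunker with overlap, counting characters for
# simplicity (real systems typically count tokens). Each chunk carries
# metadata (id, start offset) so results can be traced back to the source.
def chunk_text(text: str, size: int = 200, overlap: int = 30) -> list[dict]:
    """Split `text` into windows of `size` chars, each sharing
    `overlap` chars with the previous window."""
    assert 0 <= overlap < size
    chunks, start, idx = [], 0, 0
    while start < len(text):
        chunks.append({"id": idx, "start": start, "text": text[start:start + size]})
        if start + size >= len(text):
            break  # last window reaches the end of the text
        start += size - overlap  # advance by the stride, keeping the overlap
        idx += 1
    return chunks
```

With a 10–20% overlap, the tail of each chunk is repeated at the head of the next, so a sentence that straddles a boundary still appears whole in at least one chunk.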
Where RAG Chunking is used (and why it matters)
RAG chunking is used in enterprise search assistants, knowledge-base chatbots, coding/documentation copilots, and research agents. It matters because it influences key metrics: answer faithfulness, citation accuracy, and retrieval precision/recall. Poor chunking can increase hallucinations (the model lacks the right evidence), raise costs (more tokens per chunk), and degrade user trust (answers cite the wrong passages).
Chunking also interacts with embedding models and rerankers: different models may perform better with different chunk lengths and boundary rules, so chunking is often tuned with offline evaluation.
Types of chunking
- Fixed-size chunking: split by tokens/characters with optional overlap.
- Structure-aware chunking: split by headings, paragraphs, or HTML blocks.
- Semantic chunking: split based on topic shifts or embedding similarity.
- Hybrid chunking: structure-first, then token-based normalization.
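The hybrid approach can be sketched minimally, assuming blank lines mark structural boundaries and using a hypothetical `max_words` cap for the normalization pass:

```python
# Hybrid chunking sketch: split on blank-line paragraph boundaries first
# (structure-aware pass), then re-split any paragraph that exceeds
# `max_words` into fixed-size word windows (normalization pass).
# The threshold is illustrative, not a recommended value.
def hybrid_chunks(doc: str, max_words: int = 50) -> list[str]:
    out = []
    for para in doc.split("\n\n"):
        words = para.split()
        if not words:
            continue  # skip empty segments
        if len(words) <= max_words:
            out.append(" ".join(words))  # structural unit fits as-is
        else:
            # Oversized unit: fall back to fixed word windows.
            for i in range(0, len(words), max_words):
                out.append(" ".join(words[i:i + max_words]))
    return out
```

Short paragraphs survive intact (preserving their structural boundaries), while oversized sections are normalized to a predictable length for the embedding model.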
FAQs
What is a good chunk size for RAG?
There is no universal size; common ranges are a few hundred to ~1,000 tokens. The best choice depends on document style, embedding model, and context window.
Why use chunk overlap?
Overlap reduces boundary loss—important sentences that straddle a split remain retrievable, improving coherence and grounding.
How do you evaluate chunking strategies?
Use retrieval and generation benchmarks: hit-rate for relevant passages, reranker performance, groundedness checks, and human review of citations and answer correctness.
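A hit-rate (recall@k) check can be sketched as below; `retrieve_fn` and the labeled query set are assumptions, standing in for your actual retriever and a gold set of relevant chunk ids:

```python
# Sketch of an offline hit-rate metric for comparing chunking strategies:
# the fraction of queries for which at least one known-relevant chunk
# appears in the top-k retrieved results.
def hit_rate_at_k(queries, retrieve_fn, k: int = 5) -> float:
    """queries: list of (query_text, set_of_relevant_chunk_ids).
    retrieve_fn(query, k) -> list of retrieved chunk ids."""
    hits = 0
    for query, relevant in queries:
        if set(retrieve_fn(query, k)) & relevant:
            hits += 1  # at least one relevant chunk was retrieved
    return hits / len(queries) if queries else 0.0
```

Running this metric across candidate chunk sizes and overlaps (with the same query set) gives a simple, reproducible basis for tuning before any human review.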