Mobrief

Infra & Chips · Vercel Blog

Build knowledge agents without embeddings

Most knowledge agents start the same way.

Apr 11, 2026 03:01 UTC · ~2 min read · Research


Context

You pick a vector database, then build a chunking pipeline. You choose an embedding model, then tune retrieval parameters. Weeks later, your agent answers a question incorrectly, and you have no idea which chunk it retrieved or why that chunk scored highest.

Vercel kept seeing this pattern internally and among teams building agents on Vercel. The embedding stack works for semantic similarity, but it falls short when you need a specific value from structured data. The failure mode is silent: the agent confidently returns the wrong chunk, and you can't trace the path from question to answer.

That's why Vercel tried something different: it replaced its vector pipeline with a filesystem and gave the agent bash. Its sales call summarization agent went from ~$1.00 to ~$0.25 per call, and output quality improved. The agent was doing what it already knew how to do: read files, run grep, and navigate directories.

So Vercel open-sourced the Knowledge Agent Template, a production-ready version of this architecture built on Vercel.

What the template does

The Knowledge Agent Template is an open source, file-system-based agent you can fork, customize,…
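The core idea above, giving the agent plain filesystem tools instead of a vector index, can be sketched in a few lines. This is a minimal illustration, not the actual Knowledge Agent Template: the directory layout, function names, and tool set here are assumptions for the sake of the example. The agent would call tools like these (list, read, grep) instead of querying an embedding store, so every answer traces back to an exact file and line.

```python
import re
from pathlib import Path

# Hypothetical filesystem-backed retrieval tools for an agent.
# The knowledge base is just a directory tree of text/markdown files;
# there is no chunking, no embedding model, and no vector database.

def list_tree(root: str) -> list[str]:
    """Return every file path under root -- the agent's map of the data."""
    return sorted(str(p) for p in Path(root).rglob("*") if p.is_file())

def read_file(path: str) -> str:
    """Read one document verbatim, the way `cat` would."""
    return Path(path).read_text(encoding="utf-8")

def grep(root: str, pattern: str) -> list[tuple[str, int, str]]:
    """Search like `grep -rni`: yields (path, line number, matching line),
    so a retrieved fact is always traceable to its source location."""
    rx = re.compile(pattern, re.IGNORECASE)
    hits = []
    for path in list_tree(root):
        for n, line in enumerate(read_file(path).splitlines(), start=1):
            if rx.search(line):
                hits.append((path, n, line.strip()))
    return hits
```

Because the tools mirror `ls`, `cat`, and `grep`, a bash-capable model already knows how to compose them, which is the "doing what it already knew how to do" point above, and a wrong answer can be debugged by replaying the exact tool calls.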

