Gemma 4 on Apple Silicon: All Four Models Compared, Benchmarked, and Running Locally

Google just dropped a 31-billion-parameter model that sits on the same leaderboard row as Claude Sonnet 4.5 and outranks models 20 times its size. That is not a typo. Gemma 4 31B, released under Apache 2.0 on 2 April 2026, packs more quality per parameter than any open model the industry has seen, and you can run it on the MacBook you already own. If you have been paying $20-200 a month for API access to frontier models, this article is about to ruin your budget justification.

Gemma 4 31B – Google DeepMind’s flagship open model, now runnable on consumer hardware.

What Exactly Is Gemma 4 31B?

Gemma 4 is a family of open-weight models from Google DeepMind, built from the same research and technology that powers Gemini 3. The family ships in four sizes: E2B and E4B for phones and edge devices, a 26B Mixture-of-Experts (MoE) variant, and the 31B dense flagship. This article focuses on the 31B dense model, which is the largest, highest-quality member of the family.

The 31B is a dense transformer, meaning all 30.7 billion parameters fire on every single token. There is no routing, no gating, no “some experts sleep whilst others work.” Every weight participates in every inference step. That architectural simplicity buys you two things: predictable behaviour and maximum quality per parameter.
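That "every weight participates" property is easy to quantify. A standard back-of-envelope rule (not a figure from the model card) is roughly two FLOPs per parameter per generated token, one multiply and one add:

```python
def dense_flops_per_token(n_params: float) -> float:
    """Back-of-envelope: ~2 FLOPs (one multiply, one add) per weight per token."""
    return 2.0 * n_params

# All 30.7B weights participate in every token, so:
flops = dense_flops_per_token(30.7e9)
print(f"~{flops:.2e} FLOPs per generated token")
```

The same formula applied to the 26B MoE's 3.8B active parameters is what produces its "4B-class" inference cost, despite its much larger total size.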

Here are the core specifications, straight from the official model card:

| Property | Gemma 4 31B Dense |
| --- | --- |
| Total Parameters | 30.7B |
| Active Parameters | 30.7B (all of them, every token) |
| Layers | 60 |
| Context Window | 256K tokens |
| Sliding Window | 1,024 tokens |
| Vocabulary Size | 262K |
| Vision Encoder | ~550M parameters (27-layer ViT with 2D RoPE) |
| Audio | Not supported (E2B/E4B only) |
| Licence | Apache 2.0 |
| Input Modalities | Text + Images (variable resolution) |

The architecture uses a hybrid attention mechanism that interleaves local sliding-window attention with full global attention, ensuring the final layer is always global. Global layers use unified Keys and Values with Proportional RoPE (p-RoPE) to keep memory manageable at long context lengths. In plain English: the model can see its full 256K-token window without the memory cost exploding the way it would with naive full attention on every layer.
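A rough KV-cache calculation shows why the interleaving matters. The local/global layer split, KV head count, and head dimension below are illustrative assumptions, not figures from the model card; only the 60-layer total, 256K context, and 1,024-token sliding window come from the spec table above:

```python
def kv_cache_gb(n_local, n_global, context, window=1024,
                kv_heads=8, head_dim=128, dtype_bytes=2):
    """KV-cache size: local layers cache at most `window` tokens, global layers cache all."""
    per_token_per_layer = 2 * kv_heads * head_dim * dtype_bytes  # K and V
    cached_tokens = n_local * min(context, window) + n_global * context
    return cached_tokens * per_token_per_layer / 1e9

# Hypothetical 5:1 local/global split across 60 layers, at the full 256K window:
hybrid = kv_cache_gb(n_local=50, n_global=10, context=256_000)
naive  = kv_cache_gb(n_local=0,  n_global=60, context=256_000)
print(f"hybrid ~{hybrid:.1f} GB vs all-global ~{naive:.1f} GB")
```

With these assumed dimensions the hybrid layout caches a small fraction of what naive full attention on every layer would require, which is the whole point of the design.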

“Built from the same world-class research and technology as Gemini 3, Gemma 4 is the most capable model family you can run on your hardware.” – Google, Gemma 4 Launch Blog

Dense vs Sparse vs MoE: The Architecture That Matters

To see why Gemma 4 ships two different 20-30B models, you need to understand the three architectural paradigms that define how modern LLMs spend compute. This is the single most important concept for choosing which model to run locally, so let us get it right.

Dense Models: Every Neuron, Every Token

A dense transformer activates 100% of its parameters on every forward pass. If a model has 31 billion parameters, it performs 31 billion parameters’ worth of computation for every single token it generates. This is the classical architecture from “Attention Is All You Need” (Vaswani et al., 2017), and it remains the gold standard for raw quality. Dense models are simpler to train, more predictable in behaviour, and generally produce the highest-quality outputs at a given parameter count.

The downside is obvious: compute cost scales linearly with parameter count. Double the parameters, double the FLOPs per token. Gemma 4 31B is a dense model, and that is precisely why it tops the quality charts.

Mixture-of-Experts (MoE): Conditional Computation

MoE models replace certain feed-forward layers with multiple parallel “expert” sub-networks. A learned routing network examines each token and decides which experts handle it. Only a small subset of experts activate per token, so the total parameter count far exceeds the active parameter count.

Take Gemma 4’s 26B A4B variant as a concrete example:

| Property | 26B A4B MoE | 31B Dense |
| --- | --- | --- |
| Total Parameters | 25.2B | 30.7B |
| Active Parameters per Token | 3.8B | 30.7B |
| Expert Count | 128 total, 8 active + 1 shared | N/A (dense) |
| Layers | 30 | 60 |
| Arena AI Score | 1,441 | 1,452 |
| Inference Speed | ~4B model speed | ~31B model speed |

The 26B MoE only activates 3.8 billion parameters per token. That means it computes at roughly the speed of a 4B dense model, despite having the “knowledge capacity” of a 25B model. The trade-off? Slightly lower peak quality and less predictable behaviour for fine-tuning, because the routing decisions add a stochastic element the dense model does not have.

Gemma 4’s MoE is architecturally unusual: each layer runs both a dense GeGLU FFN and a 128-expert MoE system in parallel, then sums the outputs. Most MoE architectures replace the FFN entirely. Gemma 4 keeps both, which partly explains why its MoE variant scores so close to the dense model despite activating far fewer parameters.
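The described layout can be sketched in a few lines. This is a toy illustration of the routing pattern only, not Gemma 4's implementation: real experts are full feed-forward networks and the router is learned, whereas here experts are scalar functions and the logits are hand-written:

```python
import math

def top_k_gates(logits, k=8):
    """Select the top-k experts and softmax-normalise their gate weights."""
    top = sorted(range(len(logits)), key=logits.__getitem__, reverse=True)[:k]
    m = max(logits[i] for i in top)
    exps = [math.exp(logits[i] - m) for i in top]
    z = sum(exps)
    return [(i, e / z) for i, e in zip(top, exps)]

def moe_layer(x, dense_ffn, shared_expert, experts, logits, k=8):
    """Gemma-4-style layer: dense FFN and routed expert mixture run in parallel, summed."""
    routed = sum(w * experts[i](x) for i, w in top_k_gates(logits, k))
    return dense_ffn(x) + shared_expert(x) + routed

out = moe_layer(
    x=2.0,
    dense_ffn=lambda x: x,            # stand-in for the dense GeGLU FFN
    shared_expert=lambda x: 0.1 * x,  # the always-on shared expert
    experts=[lambda x, s=s: s * x for s in range(128)],
    logits=[float(s % 5) for s in range(128)],
)
print(f"layer output: {out:.2f}")
```

Note that the dense path contributes on every token regardless of routing, which is exactly why a bad routing decision hurts less here than in a replace-the-FFN MoE.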

Sparse Models: The General Category

MoE is a specific type of sparse architecture, but “sparse” is the broader umbrella. Any model that selectively activates a subset of its parameters per token is sparse. The key insight, as described in Christopher Bishop’s Pattern Recognition and Machine Learning, is that not every feature in a learned representation is relevant to every input. Sparsity exploits this by routing computation only where it is needed.

Here is the practical cheat-sheet:

| Architecture | Compute per Token | Memory Footprint | Best For |
| --- | --- | --- | --- |
| Dense | All parameters | All parameters must fit | Maximum quality, fine-tuning, predictable outputs |
| MoE (Sparse) | Active subset only | All parameters must still fit | Fast inference, responsive chat, latency-critical agents |
| Quantised Dense | All parameters (reduced precision) | Reduced (e.g. 4-bit = ~4x smaller) | Running dense models on constrained hardware |

A critical nuance: MoE does not reduce memory requirements. All 25.2B parameters of the 26B MoE must be loaded into memory even though only 3.8B are active per token. The inactive experts are idle but still resident. MoE saves compute, not memory. This is why quantisation and MoE are complementary techniques, and why running the Q4-quantised 31B dense on a Mac with 24GB is actually a better deal than running the full-precision 26B MoE.
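The arithmetic behind that claim is simple: resident weight memory is total parameters times bits per weight, regardless of how many are active. (The ~4.5 effective bits for Q4_K_M is an approximation, not an official figure.)

```python
def weight_gb(n_params, bits_per_weight):
    """Resident weight memory: every parameter must be loaded, active or not."""
    return n_params * bits_per_weight / 8 / 1e9

moe_bf16 = weight_gb(25.2e9, 16)   # all 128 experts resident at BF16
dense_q4 = weight_gb(30.7e9, 4.5)  # more parameters, far fewer bits each
print(f"26B MoE @ BF16: ~{moe_bf16:.0f} GB | 31B dense @ ~Q4: ~{dense_q4:.0f} GB")
```

The quantised dense flagship ends up needing roughly a third of the memory of the full-precision MoE, which is the "better deal" the paragraph above describes.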

Dense models fire every neuron; MoE routes each token through a small subset of specialised experts.

The Benchmarks: Arena Rankings and Hard Numbers

Benchmarks are a minefield of cherry-picked numbers and suspiciously round percentages. So let us look at two sources: the Arena AI human-preference leaderboard and the automated benchmark suite from Google’s own model card.

Arena AI: Human Preference Rankings

As of 31 March 2026, the Arena AI text leaderboard has 337 models ranked from 5.7 million human votes. Here is where Gemma 4 lands in the overall table:

| Model | Organisation | Licence | Arena Score |
| --- | --- | --- | --- |
| Claude Opus 4.6 Thinking | Anthropic | Proprietary | 1,504 +/- 6 |
| Claude Opus 4.6 | Anthropic | Proprietary | 1,499 +/- 5 |
| Gemini 3.1 Pro | Google | Proprietary | 1,494 +/- 5 |
| Claude Sonnet 4.5 Thinking | Anthropic | Proprietary | 1,452 +/- 3 |
| Gemma 4 31B | Google | Apache 2.0 | 1,452 +/- 9 |
| Qwen 3.5 397B A17B | Alibaba | Apache 2.0 | 1,449 +/- 6 |
| Gemini 2.5 Pro | Google | Proprietary | 1,448 +/- 3 |
| Gemma 4 26B A4B | Google | Apache 2.0 | 1,441 +/- 9 |

Read that again. Gemma 4 31B scores 1,452, matching Claude Sonnet 4.5 Thinking and outranking Gemini 2.5 Pro and Qwen 3.5 397B. Among open-source models, it is ranked #3 in the world. This 31-billion-parameter model is competing with, and beating, models that are far larger. Google claims it “outperforms models up to 20 times larger,” and the Arena data backs that up.

Automated Benchmarks: The Full Picture

Here is a compact benchmark comparison from Google’s official model card:

| Benchmark | Gemma 4 31B | Gemma 4 26B MoE | Gemma 3 27B |
| --- | --- | --- | --- |
| MMLU Pro | 85.2% | 82.6% | 67.6% |
| AIME 2026 | 89.2% | 88.3% | 20.8% |
| LiveCodeBench v6 | 80.0% | 77.1% | 29.1% |
| GPQA Diamond | 84.3% | 82.3% | 42.4% |
| Codeforces ELO | 2,150 | 1,718 | 110 |
| MMMU Pro | 76.9% | 73.8% | 49.7% |
| MMMLU | 88.4% | 86.3% | 70.7% |

The AIME 2026 jump is staggering: from 20.8% to 89.2%. The Codeforces ELO went from 110 to 2,150. This is not a small step over Gemma 3; it is a generational leap.

Running Gemma 4 31B on a Mac: The Practical Guide

This is where it gets exciting for anyone with an Apple Silicon Mac. The unified memory architecture on M-series chips is a genuine superpower for local LLM inference, because the GPU and CPU share the same RAM pool. No separate VRAM cliff. If you have 24GB, 36GB, or more of unified memory, you are in business.

Memory Requirements

| Precision | Approx. Size | Minimum Memory | Mac Recommendation |
| --- | --- | --- | --- |
| BF16 | ~58 GB | 64 GB+ | M2/M3/M4 Max 64GB+ |
| FP8 | ~30 GB | 36 GB+ | M3/M4 Pro 36GB |
| Q4_K_M | ~20 GB | 24 GB+ | M2/M3/M4 Pro 24GB |
| Q3 | ~15 GB | 18 GB+ | Smaller Macs |

The sweet spot for most Mac users is Q4_K_M quantisation at about 20GB. This is the default distribution on Ollama, and it fits comfortably on a 24GB Mac with some headroom left for the operating system.

Step 1: Install Ollama

curl -fsSL https://ollama.com/install.sh | sh

Or download the macOS app directly from ollama.com.

Step 2: Pull and Run the Model

ollama run gemma4:31b

That is it. Two commands total. The download is around 20GB, and then you are chatting with a model that matches Claude Sonnet 4.5 on the Arena leaderboard.

Expected Performance on Apple Silicon

| Mac Configuration | Quantisation | Approx. Speed | Notes |
| --- | --- | --- | --- |
| M4 Max 128GB | Q4_K_M | 40-50 tok/s | Very fast local inference |
| M3/M4 Pro 36GB | Q4_K_M | 20-35 tok/s | Comfortable for extended use |
| M2/M3 Pro 24GB | Q4_K_M | 15-25 tok/s | Usable, context size matters |
| M1/M2 16GB | Q3 | 8-15 tok/s | Tight, consider 26B MoE or E4B |

For reference, human reading speed is roughly 4-5 tokens per second. Even the slower configurations are still readable in real time.

MLX: The Apple Silicon Optimiser

If you want to squeeze more performance out of your Mac, look into MLX, Apple’s machine learning framework optimised specifically for Apple Silicon. Community support for Gemma 4 landed almost immediately, and MLX-optimised models can outperform GGUF-based inference on the same hardware.

pip install mlx-lm
mlx_lm.generate --model Phipper/gemma-4-31b-it-mlx-4bit --prompt "Hello, world"

The trade-off: MLX requires more manual setup than Ollama. For most users, Ollama is the right starting point. For performance enthusiasts, MLX is where things get fun.

Apple Silicon’s unified memory architecture makes Macs surprisingly capable local LLM machines.

The Complete Gemma 4 Family: Four Models, Four Use Cases

The 31B dense flagship is the headline act, but Google shipped three other models in the same family, and understanding the full lineup matters because the right model for you depends on what you have in your pocket, on your desk, or in your rack. Here is the entire family at a glance:

| Model | Architecture | Effective Params | Context | Modalities | Q4 Memory |
| --- | --- | --- | --- | --- | --- |
| E2B | Dense (edge) | 2.3B | 128K | Text, Image, Audio | ~3.2 GB |
| E4B | Dense (edge) | 4.5B | 128K | Text, Image, Audio | ~5 GB |
| 26B A4B | MoE (128 experts) | 3.8B active | 256K | Text, Image | ~15.6 GB |
| 31B | Dense | 30.7B | 256K | Text, Image | ~17.4 GB |

Two things jump out immediately. First, the smaller models are the ones with audio support, not the flagship. The E2B and E4B each carry a dedicated ~300M-parameter audio encoder that the larger models lack. Second, the edge models use a technique called Per-Layer Embeddings (PLE), which gives each decoder layer its own per-token embedding table. These tables add considerable size on disk but are only consulted via cheap lookups, which is why the “effective” parameter count is much smaller than the total on disk.

Gemma 4 E2B: The Phone Model

E2B has 5.1 billion total parameters but only 2.3 billion effective, and it fits in roughly 3.2 GB at Q4 quantisation. This is small enough to run on a three-year-old smartphone. Through the Google AI Edge Gallery app (available on both iOS and Android), you can download E2B at about 2.5 GB on disk and start chatting with it entirely offline.

The performance claim that shocked the community: E2B beats Gemma 3 27B on most benchmarks despite being roughly 12x smaller in effective parameters. One early tester running it on a basic i7 laptop with 32 GB RAM reported it was “not only faster, it gives significantly better answers” than Qwen 3.5 4B for finance analysis. On a phone, users are seeing roughly 30 tokens per second, which is genuinely conversational speed.

For a Mac with only 8 GB of unified memory, E2B at Q4 is the safe bet. It leaves plenty of headroom for macOS and whatever else you are running. Install it with:

ollama run gemma4:e2b

Gemma 4 E4B: The Best Small Model You Can Run Anywhere

E4B is the sweet spot for anyone who wants something meaningfully smarter than E2B without jumping to the heavyweight models. At 8 billion total parameters (4.5B effective) and ~5 GB at Q4, it fits comfortably on any Mac with 16 GB of memory and leaves room for a browser, an IDE, and Slack running simultaneously.

E4B is the model David Ondrej demonstrated running on his iPhone 16 Pro Max in the video, and it was clearly usable at conversational speeds. The Edge Gallery app lists it at 3.6 GB on disk. On a phone with a modern chip, expect 20-30 tokens per second. On a Mac with 16 GB, expect 40-60+ tokens per second since the model is small enough to stay entirely in the GPU memory partition.

Crucially, E4B supports native audio input alongside text, image, and video. That means on-device speech recognition, spoken language understanding, and audio analysis, all without sending a byte off your machine. The 31B flagship cannot do any of this.

ollama run gemma4:e4b

Gemma 4 26B A4B: The Speed Demon

The 26B MoE is the model for people who want high-end quality at dramatically lower latency. Despite having 25.2 billion total parameters, only 3.8 billion are active per token, which means it runs at roughly the speed of a 4B dense model whilst retaining the knowledge capacity of a 25B model.

Real-world benchmarks from Kartikey Chauhan’s testing on a 12 GB VRAM Nvidia card show 44.2 tokens per second for text at 128K context and 42.1 tok/s for vision at 64K context. Those are server-grade numbers from consumer hardware.

On a Mac with 16 GB of unified memory, the 26B A4B at Q4 (~15.6 GB) is technically possible but tight. You will be at the limit of available memory, and macOS itself needs headroom. A 24 GB Mac runs it comfortably. For 16 GB Macs, be conservative with context length and expect some performance degradation from memory pressure.

ollama run gemma4:26b

Quality vs Size: What You Actually Lose at Each Step Down

The perennial question with model families is: how much quality do you sacrifice for each size reduction? With Gemma 4, Google published enough benchmark data to answer this precisely. Here is the full family compared side by side:

| Benchmark | 31B Dense | 26B MoE | E4B | E2B | Gemma 3 27B |
| --- | --- | --- | --- | --- | --- |
| MMLU Pro | 85.2% | 82.6% | 69.4% | 60.0% | 67.6% |
| AIME 2026 (Maths) | 89.2% | 88.3% | 42.5% | 37.5% | 20.8% |
| LiveCodeBench v6 | 80.0% | 77.1% | 52.0% | 44.0% | 29.1% |
| GPQA Diamond | 84.3% | 82.3% | 58.6% | 43.4% | 42.4% |
| MMMU Pro (Vision) | 76.9% | 73.8% | 52.6% | 44.2% | 49.7% |
| MMMLU (Multilingual) | 88.4% | 86.3% | 76.6% | 67.4% | 70.7% |
| Tau2 Agentic (avg over 3) | 76.9% | 68.2% | 42.2% | 24.5% | 16.2% |
| Codeforces ELO | 2,150 | 1,718 | 940 | 633 | 110 |

The pattern is clear: the 31B-to-26B step is almost free. You lose roughly 2-3 percentage points on most benchmarks but gain dramatically faster inference. This is the best trade-off in the entire lineup. The 26B MoE at 88.3% on AIME is essentially indistinguishable from the 31B’s 89.2% for any practical purpose.

The 26B-to-E4B step is where the cliff hits. You go from 88.3% to 42.5% on AIME, from 77.1% to 52.0% on LiveCodeBench, and from 68.2% to 42.2% on agentic tasks. This is where “frontier local model” becomes “capable assistant.” E4B is excellent for its size, but it is not in the same league as the two larger models for maths, competitive coding, or complex tool use.

The E4B-to-E2B step is gentler than expected. E2B typically loses 5-15 percentage points versus E4B, which is surprisingly modest given the 2x parameter difference. For basic Q&A, translation, summarisation, and conversational use, E2B is genuinely useful. It comes close to Gemma 3 27B on multilingual tasks (67.4% vs 70.7%) and beats it decisively on maths, with an AIME score of 37.5% against Gemma 3’s 20.8%.

Perhaps the most striking trend in the table: E2B scores 24.5% on Tau2 agentic tasks versus Gemma 3 27B’s 16.2%. A model you can run on a phone outperforms last year’s full-size model at tool use by a clear margin. Meanwhile, the 31B’s 76.9% average across all three Tau2 domains is nearly 5x what Gemma 3 managed. That is not an incremental improvement; it is proof that architectural progress matters more than raw scale.

Running Every Gemma 4 Model: A Hardware Decision Tree

Here is the practical guide to matching your hardware to the right model. Start from whatever you own and work your way to the best model it can handle:

| Your Hardware | Best Gemma 4 Model | Quantisation | Expected Speed | Quality Tier |
| --- | --- | --- | --- | --- |
| iPhone / Android (3+ years old) | E2B | INT4 | ~30 tok/s | Good assistant, basic coding |
| iPhone / Android (recent) | E4B | INT4 | 20-30 tok/s | Strong assistant, decent coding |
| Mac M1/M2 8GB | E2B or E4B | Q4 | 50-80 tok/s | Good assistant with audio |
| Mac M1/M2/M3 16GB | E4B (safe) or 26B A4B (tight) | Q4 | 40-60 / 15-25 tok/s | Strong / Near-frontier |
| Mac M2/M3/M4 Pro 24GB | 26B A4B or 31B | Q4 | 25-40 / 15-25 tok/s | Near-frontier / Frontier |
| Mac M3/M4 Pro 36GB | 31B | Q4 or Q8 | 20-35 tok/s | Frontier |
| Mac M3/M4 Max 64GB+ | 31B | BF16 | 40-50 tok/s | Frontier, full precision |
| Nvidia GPU 12GB VRAM | 26B A4B | Q5 | ~44 tok/s | Near-frontier |
The 12 GB Nvidia GPU result deserves special mention. Kartikey Chauhan’s detailed benchmarking of the 26B A4B on a 12 GB card using llama.cpp showed 44.2 tokens per second for text and 42.1 tok/s for vision, both at 128K context. He reported that the model is “an excellent default” for daily interactive use, with stable generation and no constant OOM babysitting once the right memory profile is set. The key was using fit-based GPU placement rather than forcing everything into VRAM.

For the edge models on phones, the Google AI Edge Gallery app is genuinely the easiest path. Download it, pick E2B or E4B, wait for the 2.5-3.6 GB download, and start chatting. Everything runs offline, nothing leaves your device, and the models support function calling for agentic tasks directly on the phone.

The 16 GB Mac Dilemma

The most common question in the community: “I have a MacBook with 16 GB, can I run the good stuff?” The honest answer is nuanced:

  • E4B at Q4 (~5 GB): Runs beautifully. Fast, responsive, with plenty of headroom. This is the comfortable choice.
  • 26B A4B at Q4 (~15.6 GB): Technically fits but leaves almost no room for macOS and apps. Expect memory pressure, swap usage, and slower generation as context grows. Usable for short conversations; painful for long ones.
  • 31B at Q4 (~17.4 GB): Does not fit. You will hit swap immediately, and inference will crawl.

If you have a 16 GB Mac and want the best possible quality, the 26B A4B is your ceiling, but keep context short and close other apps. If you want a smooth, reliable experience, E4B is the pragmatic winner. It scores 52% on LiveCodeBench (enough for practical coding help), 58.6% on GPQA Diamond (solid science reasoning), and it can process audio natively, which neither of the larger models can.

How Good Are These Models for Coding?

If you are a developer considering local models as a coding assistant, the benchmark numbers matter less than a straight answer: can this thing actually help me write code? Here is the honest breakdown for each model, using LiveCodeBench v6 (real coding tasks, not just function completion) and Codeforces ELO (competitive problem solving) as the primary yardsticks:

| Model | LiveCodeBench v6 | Codeforces ELO | Comparable To | Practical Coding Level |
| --- | --- | --- | --- | --- |
| E2B | 44.0% | 633 | GPT-3.5-class | Handles boilerplate, simple functions, basic refactors. Struggles with multi-file logic or complex algorithms. |
| E4B | 52.0% | 940 | GPT-4o-mini / Claude 3.5 Haiku | Writes working functions, understands context, handles standard patterns. The level that powers most “free tier” coding assistants. |
| 26B A4B | 77.1% | 1,718 | GPT-4o / Claude 3.5 Sonnet | Strong coder. Handles multi-step problems, debugging, architectural reasoning, and non-trivial algorithms reliably. |
| 31B | 80.0% | 2,150 | Claude Sonnet 4.5 | Frontier-class. Solves most competitive programming problems and writes production-quality code with real architectural awareness. |

The Codeforces 1,718 ELO for the 26B MoE puts it at roughly “Expert” level on the Codeforces rating scale, meaning it can solve the majority of interview-style programming problems and a solid chunk of competitive challenges. The 31B at 2,150 ELO is in “Master” territory. For context, Gemma 3 27B scored 110 ELO on the same benchmark. That is not a typo.

The practical takeaway: if you have the memory for the 26B A4B or 31B, you have a genuinely capable local coding assistant that rivals the paid API models most developers use today. If you are limited to E4B, you still get a useful companion for everyday development, roughly on par with the models that power free-tier tools like GitHub Copilot’s lighter backend. E2B is better suited for quick scripting help, code explanation, and boilerplate generation than for serious algorithmic work.

A Suggested Workflow for Constrained Hardware

If your Mac cannot comfortably run the 26B or 31B, a practical approach is to run E4B as your always-on local model for inline help, autocomplete, and quick questions, then fall back to a cloud API (Claude, GPT-4o, or Gemma 4 31B via Google AI Studio, which offers a free tier) for the 20% of problems where E4B is not enough. You get speed and privacy for the easy stuff, and quality for the hard stuff.
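That fallback loop is trivial to wire up. A minimal sketch, where `local_llm`, `cloud_llm`, and `is_good_enough` are placeholder callables for whatever clients and quality check you actually use (e.g. the Ollama HTTP API locally and a hosted API remotely):

```python
def answer(prompt, local_llm, cloud_llm, is_good_enough):
    """Try the fast, private local model first; escalate to the cloud only when needed."""
    draft = local_llm(prompt)
    if is_good_enough(draft):
        return draft, "local"
    return cloud_llm(prompt), "cloud"

# Toy demonstration with stub models: the empty local draft triggers the fallback.
result, source = answer(
    "Refactor this module",
    local_llm=lambda p: "",                 # stand-in for E4B
    cloud_llm=lambda p: "cloud answer",     # stand-in for a hosted frontier model
    is_good_enough=lambda d: len(d) > 0,
)
print(source)
```

The quality check is the hard part in practice; self-reported confidence, answer length, or a verifier pass are all common heuristics.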

CPU-Only Servers: Running Gemma 4 Without a GPU

Not everyone runs inference on a laptop or a gaming PC. If you have access to a rack server, a cloud VM, or any x86 machine with a lot of RAM but no GPU, Gemma 4 still works. The entire family runs on CPU-only hardware via llama.cpp, Ollama, or vLLM.

The key constraint on CPU-only inference is memory bandwidth, not compute. LLM token generation is fundamentally a memory-bound operation: the model reads weights from RAM for every token. A typical DDR4 server delivers 40-80 GB/s of memory bandwidth, versus 200-400 GB/s on Apple Silicon or 900+ GB/s on an Nvidia A100. Those extra CPU cores help with prompt ingestion (prefill) but barely move the needle on generation speed.
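Because each generated token streams the active weights through memory roughly once, the data moved per token is approximately active parameters times bytes per weight. This back-of-envelope sketch (an approximation, not a measured figure) shows why the MoE fares so much better on a bandwidth-starved machine:

```python
def bytes_per_token_gb(active_params, bytes_per_weight=2):
    """Approximate data moved per generated token: one pass over the active weights (BF16 = 2 bytes)."""
    return active_params * bytes_per_weight / 1e9

dense = bytes_per_token_gb(30.7e9)  # 31B dense: every weight, every token
moe   = bytes_per_token_gb(3.8e9)   # 26B MoE: only the active experts
print(f"MoE moves ~{dense / moe:.0f}x less data per token at the same bandwidth")
```

At the same memory bandwidth, that ~8x reduction in data moved per token translates almost directly into generation speed on CPU-only hardware.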

Here is what to expect on a typical high-core-count x86 server with DDR4 (e.g., a dual-socket Xeon or EPYC with 256-384 GB RAM):

| Model | Precision | RAM Used | Est. Generation Speed | Best Use Case |
| --- | --- | --- | --- | --- |
| E2B | BF16 | ~10 GB | 15-30 tok/s | High-throughput batch processing, multi-worker serving |
| E4B | BF16 | ~16 GB | 10-20 tok/s | Quality-per-watt sweet spot for CPU serving |
| 26B A4B | BF16 | ~50 GB | 8-15 tok/s | Near-frontier quality, MoE helps since less data moves per token |
| 31B | BF16 | ~58 GB | 3-8 tok/s | Maximum quality when latency is not critical |

With 384 GB of RAM, you can run the 31B at full BF16 precision with no quantisation loss at all. Most consumer setups cannot do this. The trade-off is generation speed: expect 3-8 tokens per second for the 31B on DDR4, which hovers around human reading speed (~4-5 tok/s) and is best suited to batch jobs, API backends, or any workflow where you do not need instant responses.

The 26B MoE is the star on CPU-only servers. Because only 3.8B parameters are active per token, it moves far less data through the memory bus than the 31B dense model, which means the memory-bandwidth bottleneck hurts less. Expect 8-15 tok/s at full precision, which is genuinely conversational speed, with quality only 2-3% behind the flagship.

For serving multiple concurrent users, consider running several E4B instances across the server’s cores rather than one large model. Each instance uses ~16 GB at BF16, so you could run 10+ parallel workers within 384 GB of RAM, giving you high aggregate throughput for an internal team.

Multimodal Capabilities: What It Can and Cannot See

Gemma 4 31B is multimodal for vision, accepting both text and images as input with text output. It includes a ~550M-parameter vision encoder and supports variable aspect ratios and resolutions.

  • Object detection and description – identify and describe objects in images
  • Document and PDF parsing – extract structure and text
  • OCR – including multilingual OCR
  • Chart comprehension – read graphs and visual data
  • Screen and UI understanding – parse app screenshots and interfaces
  • Video understanding – analyse sequences of frames

On MMMU Pro, Gemma 4 31B scores 76.9%, up from Gemma 3’s 49.7%. That is a serious jump in multimodal quality.

What it cannot do: the 31B model does not support audio input. Audio is only available on E2B and E4B. So if you need speech recognition or spoken language understanding, the small models are actually more capable in that modality than the flagship.

140+ Language Support

Gemma 4 is trained on over 140 languages, with out-of-the-box support for 35+ languages. Community testing suggests it is especially strong on multilingual tasks, and the official MMMLU score of 88.4% backs that up.

“Natively trained on over 140 languages, Gemma 4 helps developers build inclusive, high-performance applications for a global audience.” – Google AI for Developers, Gemma 4 Model Overview

This multilingual strength is one of Gemma 4’s real differentiators. If you build products for non-English audiences, this is not a side feature, it is the feature.

Choosing the Right Model: A Practical Decision Guide

With four models in the family, the question is no longer “should I run Gemma 4?” but “which Gemma 4?” Here is the decision matrix:

  • You have 24 GB+ and want the absolute best quality: Run the 31B dense. It is the quality ceiling of the family.
  • You have 24 GB+ but care about speed: Run the 26B A4B MoE. You lose 2-3% on benchmarks but gain roughly 2-4x faster inference. For most real tasks, you will not notice the quality difference.
  • You have a 16 GB Mac: The E4B is your best realistic option. The 26B A4B technically fits at Q4 but will struggle with memory pressure. E4B leaves comfortable headroom and still scores above Gemma 3 27B on key benchmarks.
  • You have an 8 GB Mac or a phone: Run E2B. At ~3.2 GB it fits anywhere, and it still beats Gemma 3 27B on maths and coding benchmarks despite being 12x smaller.
  • You need audio processing: Only E2B and E4B support native audio input. The 31B and 26B cannot hear anything.
  • You want to run AI entirely offline on your phone: Install the Google AI Edge Gallery app and pick E2B (2.5 GB) or E4B (3.6 GB). Everything runs locally, no data leaves your device.
  • You need the longest possible context: Only the 31B and 26B support 256K tokens. The edge models cap at 128K.
  • You want the absolute fastest time-to-first-token: E2B is the speed king, though E4B is close behind.
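The decision matrix above collapses into a few lines of code. This sketch hard-codes this guide’s thresholds and Ollama tags; it is a convenience for readers, not an official tool:

```python
def pick_gemma4(unified_gb: int, need_audio: bool = False,
                prefer_speed: bool = False) -> str:
    """Map unified memory (GB) to the recommended gemma4 Ollama tag, per this guide."""
    if need_audio:                 # only the edge models can hear
        return "gemma4:e4b" if unified_gb >= 8 else "gemma4:e2b"
    if unified_gb >= 24:           # frontier tier: dense for quality, MoE for speed
        return "gemma4:26b" if prefer_speed else "gemma4:31b"
    if unified_gb >= 16:           # 26b fits at Q4 but leaves little headroom
        return "gemma4:e4b"
    return "gemma4:e2b"

print(pick_gemma4(24))                   # gemma4:31b
print(pick_gemma4(16))                   # gemma4:e4b
print(pick_gemma4(8, need_audio=True))   # gemma4:e4b
```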

What to Check Right Now

  • Check your Mac’s unified memory (Apple menu, About This Mac). Match it to the hardware decision tree above to find your optimal model.
  • Install Ollama and try the model that fits your hardware:
    • ollama run gemma4:e2b – any Mac, any phone (3.2 GB)
    • ollama run gemma4:e4b – 8 GB+ Macs (5 GB)
    • ollama run gemma4:26b – 16 GB+ Macs, tight fit (15.6 GB)
    • ollama run gemma4:31b – 24 GB+ Macs (17.4 GB)
  • Try the Edge Gallery on your phone. Download the Google AI Edge Gallery (iOS and Android), grab E2B or E4B, and chat completely offline.
  • Compare against your paid model. Try your real prompts, not toy benchmarks. The 31B matches Claude Sonnet 4.5 on Arena; the E2B beats Gemma 3 27B on maths. Test them yourself.
  • Test the 26B MoE if you have the RAM. It is the best speed-to-quality ratio in the family: 44 tok/s on a 12 GB Nvidia card, and only 2-3% behind the 31B on benchmarks.
  • Watch for better quantisations and QAT releases. Unsloth, MLX Community, and other groups are actively improving the quantised variants. Quality improvements are still landing.
  • Take the Apache 2.0 licence seriously. Commercial use, modification, redistribution, and fine-tuning are all on the table for every model in the family.
Gemma 4 31B sits among much larger frontier models on Arena AI, at a fraction of the size and cost.

Video Attribution


This article was inspired by David Ondrej’s video covering the Gemma 4 release. The analysis, benchmarks, architecture deep-dive, and Mac deployment guide are original research drawing from Google DeepMind’s official documentation, the Arena AI leaderboard, community testing, and the Hugging Face model card.

nJoy 😉

Context Graphs: The Knowledge Layer Your RAG Pipeline Is Missing (Or Does Not Need)

Your RAG pipeline is lying to you. Not maliciously, of course, but with the quiet confidence of a student who memorised the textbook’s index but never read a chapter. You feed it documents, it chunks them, embeds them, and when you ask a question it retrieves whichever chunks look “sort of similar” and hopes the LLM can stitch together a coherent answer. Sometimes it works. Sometimes it tells you Tokyo has 36 million people because it averaged two contradictory chunks. And you have no way to know which answer is real, because Vector RAG has no concept of “real”. It only knows “similar”. Context graphs are what happens when you decide similarity is not enough, and you want your AI to actually understand the relationships between things. TrustGraph just shipped a demo that shows exactly what that looks like in practice, and it is worth paying attention to.

Context graphs: where every node knows its neighbours and can prove where it got its information.

What Context Graphs Actually Are (and Why They Are Not Just Knowledge Graphs With a Rebrand)

A context graph is a knowledge graph that has been specifically engineered for consumption by AI models. That sentence sounds like marketing, so let us unpack it. A traditional knowledge graph stores millions of entities and relationships, optimised for human querying and data warehousing. Brilliant for analysts running SPARQL queries. Terrible for an LLM with a context window that starts forgetting things after a few thousand tokens.

Context graphs solve this by dynamically extracting focused subgraphs based on query relevance. Instead of dumping the entire graph into the prompt, you extract only the entities and relationships that matter for this specific question, scored by relevance, annotated with provenance, and formatted to minimise token waste. TrustGraph’s own documentation claims a 70% token reduction in their structured-versus-prose comparison. That number is plausible for the specific example they show (a simple entity lookup), but it is a vendor benchmark, not an independent evaluation, and the savings will vary dramatically depending on query complexity, graph density, and how much context the LLM actually needs.

“Context graphs are knowledge graphs specifically engineered and optimized for consumption by AI models. They extend traditional knowledge graphs by incorporating AI-specific optimizations like token efficiency, relevance ranking, provenance tracking, and hallucination reduction.” — TrustGraph, Context Graphs Guide

Think of the distinction this way. A knowledge graph is your entire library. A context graph is the specific stack of books your librarian pulls when you ask a particular question, each one bookmarked at the relevant page, with a note explaining why it was selected. The librarian remembers which shelf each book came from, when it was last updated, and how confident she is that the information is still correct. That is what provenance tracking and relevance scoring give you.

Here is the structural difference in compact form:

// Traditional knowledge graph: everything, all at once
{
  entities: [/* millions */],
  relationships: [/* tens of millions */]
}

// Context graph: query-specific, AI-optimised
{
  query: "Who leads TechCorp?",
  entities: [
    { name: "Alice Johnson", role: "CEO", relevance: 0.95 },
    { name: "TechCorp", industry: "Enterprise Software", relevance: 0.92 }
  ],
  relationships: [
    { from: "Alice Johnson", to: "TechCorp", type: "leads", relevance: 0.90 }
  ],
  metadata: { tokensUsed: 350, confidenceScore: 0.94, sources: ["hr_database"] }
}

The verbose natural-language equivalent of that context graph would cost 150 tokens. The structured version costs 45. Same information, a third of the price. As Martin Kleppmann writes in Designing Data-Intensive Applications, the way you structure your data determines what questions you can efficiently answer. Context graphs are structured specifically to answer LLM questions efficiently.

The TrustGraph Demo: London Pubs, Craft Beer, and Why Semantics Matter

The video “Context Graphs in Action” by TrustGraph co-founders Daniel Davis and Mark Adams is a 27-minute live demo. No slides. No marketing deck. They built a context graph from data about London pubs, restaurants, and event spaces, then demonstrated something deceptively simple that reveals the entire value proposition of this technology.

They asked two questions that any human would consider identical:

  1. “Where can I drink craft beer?”
  2. “Can you recommend a pub which serves craft beer?”

Both questions returned the same answer. But when they expanded the explainability trace, the paths through the graph were completely different. The first question, being open-ended, pulled in concepts from beer gardens, festivals, events, bars, cafes, and dozens of other venue types. The second question, with the word “pub” constraining the search, produced a far narrower traversal. The grounding concepts were different. The subgraph was different. The reasoning path was different. Only the final answer happened to converge.

This is the central insight the demo drives home: two questions that feel identical to a human are semantically distinct to a machine, and context graphs let you see exactly how and why. As Daniel puts it with characteristic bluntness: “If you ask a stupid question, you might get a stupid response.” The explainability trace lets you work backwards from a bad answer and determine whether the fault lay with the query, the data, or the retrieval path.
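The trace itself is just structured data. Here is a TypeScript sketch of what such a trace could look like, and how two "identical" questions can converge on one answer through mostly disjoint grounding concepts. The field names and values are illustrative, not TrustGraph's actual API.

```typescript
// Illustrative shape of an explainability trace -- the interfaces and
// field names here are hypothetical, not TrustGraph's real schema.
interface TraceStep {
  concept: string; // grounding concept extracted from the query
  edge: string;    // graph edge traversed
  reason: string;  // why this evidence was selected
}

interface QueryTrace {
  query: string;
  groundingConcepts: string[];
  steps: TraceStep[];
  answer: string;
}

// Two semantically "identical" questions can share an answer while
// traversing largely disjoint grounding concepts.
function sharedConcepts(a: QueryTrace, b: QueryTrace): string[] {
  return a.groundingConcepts.filter((c) => b.groundingConcepts.includes(c));
}

const broad: QueryTrace = {
  query: "Where can I drink craft beer?",
  groundingConcepts: ["craft beer", "beer garden", "festival", "bar", "cafe"],
  steps: [],
  answer: "The Craft Tavern",
};

const narrow: QueryTrace = {
  query: "Can you recommend a pub which serves craft beer?",
  groundingConcepts: ["craft beer", "pub"],
  steps: [],
  answer: "The Craft Tavern",
};
```

Diffing the grounding concepts of two traces is exactly the kind of debugging the demo shows: same answer, one shared concept, two very different journeys.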

Dark diagram showing two query paths diverging through a knowledge graph, one broad and one narrow, cyan lines on black
Same answer, wildly different reasoning paths. The explainability trace reveals what Vector RAG hides.

What the Workbench Actually Shows

The demo walks through TrustGraph’s Workbench interface (accessible at localhost:8888 after deployment). Here is what they demonstrated:

  • Document ingestion: Plain text and PDF documents about London venues are uploaded through the Library page and processed through a GraphRAG flow. TrustGraph chunks the documents, extracts entities and relationships, generates vector embeddings, and builds the knowledge graph automatically.
  • Vector search entry points: Searching for “Bermondsey” returns semantically similar terms. Clicking a result reveals the fabric of the graph: Bermondsey tube station connects to the Jubilee line, which has a type “transport line”. You can navigate relationships in 3D space.
  • 3D graph visualisation: Interactive three-dimensional exploration of graph nodes and edges. Not intended for end users (Daniel jokes it would “send everybody over the edge insane”), but invaluable for understanding graph structure during development.
  • Explainability traces: Every query records a full reasoning trace. You can see: the original query, which concepts were extracted, which graph nodes matched, which edges were traversed, why each piece of evidence was selected (with the LLM’s reasoning), and the final synthesis. All traceable back to source documents.
  • Source provenance: Every fact in the graph links back to the specific document chunk it was extracted from. You can verify: where did this information come from? When was it ingested? Is it out of date? Do we trust this source?

The Ontology Question

Mark Adams demonstrates both approaches: schema-free extraction (GraphRAG) where the LLM discovers relationships freeform, and ontology-driven extraction (OntologyRAG) where a predefined schema forces precision. For the London venues demo, the ontology defines classes like “atmosphere” (cozy, creative, community spirit), “city”, “neighbourhood”, “event”, and constrains the relationships the graph will accept.

The result with ontologies is significantly more precise. Without an ontology, the LLM sometimes creates duplicate relationships with different names for the same concept. With an ontology, you control the vocabulary, and precision goes up. As Mark explains: “We force it into a much more precise structure.”

TrustGraph sits firmly in the RDF ecosystem rather than the property graph world (Neo4j and similar). The rationale: RDF supports reification (attaching metadata to edges themselves), multi-language representations, and the OWL/SKOS ontology standards natively. These features are essential for explainability and provenance tracking.

But let us be honest about the trade-offs. RDF comes with real costs. SPARQL is notoriously harder to learn than Cypher (Neo4j’s query language). OWL ontologies require domain experts to design and maintain, and they become a governance burden as your data evolves. Property graphs with Neo4j or Memgraph are simpler to reason about, faster for most traversal patterns, and have much larger developer ecosystems. TrustGraph’s choice of RDF is defensible for provenance-heavy enterprise use cases, but it is not the only valid architecture, and for many teams a property graph with LangGraph or LlamaIndex’s knowledge graph module will be simpler to operate and good enough.
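To make the reification point concrete, here is a small TypeScript sketch of a reified statement: the edge itself carries provenance metadata, which is exactly what auditable retrieval needs. The shape is illustrative, not a TrustGraph schema or an RDF serialisation.

```typescript
// Sketch of why edge-level metadata matters for provenance.
// In RDF, reification turns a statement into a resource you can
// annotate; in a property graph, the edge carries properties directly.
// All names here are illustrative.
interface ReifiedStatement {
  subject: string;
  predicate: string;
  object: string;
  // provenance attached to the statement itself, not just its nodes
  source: string;
  extractedAt: string;
  confidence: number;
}

const fact: ReifiedStatement = {
  subject: "Alice Johnson",
  predicate: "leads",
  object: "TechCorp",
  source: "hr_database",
  extractedAt: "2026-01-15",
  confidence: 0.9,
};

// An auditor can now ask: where did this particular edge come from?
function provenanceOf(s: ReifiedStatement): string {
  return `${s.subject} ${s.predicate} ${s.object} (source: ${s.source})`;
}
```

Whichever store you pick, the requirement is the same: every edge must be able to answer for itself.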

The Broader Landscape: TrustGraph Did Not Invent This

Before we go further, some necessary context. The idea of using knowledge graphs to ground LLM responses is not new, and “context graph” is not a category that TrustGraph created from scratch. It is a refined evolution of work that has been shipping in production since late 2024.

Microsoft GraphRAG published the foundational “From Local to Global” paper in April 2024, introducing community-based summarisation of knowledge graphs for query-focused retrieval. Their approach extracts entities and relationships, clusters them into hierarchical communities using the Leiden algorithm, then pre-generates summaries at each level. It is open source, integrates with Neo4j, and has an Azure solution accelerator. Microsoft also shipped LazyGraphRAG (November 2024) to address the cost problem, and BenchmarkQED (June 2025) for automated RAG evaluation.

Neo4j + LangChain/LangGraph is arguably the most widely deployed graph RAG stack in production today. Neo4j’s property graph model with Cypher queries is simpler to learn than SPARQL, has a massive developer community, and integrates directly with LangChain’s retrieval chains. For teams already running Neo4j, adding graph-enhanced RAG requires no new infrastructure.

LlamaIndex Knowledge Graphs provides a Python-native graph RAG pipeline that works with Neo4j, Nebula Graph, and others. It handles entity extraction, graph construction, and hybrid vector+graph retrieval with significantly less operational complexity than a full RDF stack.

What TrustGraph adds to this landscape is specifically the combination of RDF-native ontology support, built-in explainability traces, portable context cores, and multi-model storage (Cassandra, Qdrant, etc.) in a single open-source platform. These are genuine differentiators for provenance-heavy enterprise use cases. But if you do not need ontology enforcement or full reasoning traces, the simpler alternatives above will get you 80% of the benefit at 20% of the operational complexity.

Where Vector RAG Falls Apart (and Context Graphs Save You)

Vector RAG seemed like the answer to everything when embeddings first became cheap. Embed your documents, find similar chunks, feed them to the LLM. Fast, simple, works for demos. Then you deploy it in production and discover the failure modes.

Case 1: The Averaging Problem

You embed two documents. One says “Tokyo’s population is 37.4 million.” The other says “Tokyo has about 35 million people.” Both are semantically similar to the query “What is Tokyo’s population?” The LLM sees both chunks and generates something in between. Maybe 36 million. Confidently wrong.

// Vector RAG retrieval for "What is Tokyo's population?"
chunk_1: "Tokyo's population is 37.4 million" (similarity: 0.94)
chunk_2: "Tokyo has about 35 million people" (similarity: 0.92)
// LLM output: "Tokyo has approximately 36 million people" -- wrong

// Context graph retrieval
node: Tokyo { population: 37400000, source: "UN World Population Prospects 2024",
              confidence: 1.0, lastVerified: "2024-07-01" }
// LLM output: "Tokyo's population is 37.4 million" -- correct, sourced, verifiable

A graph stores one value. The correct value. With a source and a timestamp. No ambiguity, no averaging, no hallucination.

Case 2: The Multi-Hop Blindness

Ask Vector RAG: “How does climate change affect AI research funding?” It needs to traverse: climate change affects government priorities, which influence research funding allocation, which supports AI research. Each of those facts lives in a different document. Vector RAG retrieves chunks that are individually similar to the question but cannot connect them into a reasoning chain.

// Vector RAG: retrieves 3 chunks that mention some of these concepts
// but cannot chain: climate -> govt priorities -> funding -> AI research
// Result: vague, hedge-filled answer

// GraphRAG: traverses the reasoning path
climate_change --[affects]--> government_priorities
government_priorities --[influences]--> research_funding
research_funding --[supports]--> ai_research
// Result: specific, grounded answer with full provenance chain

Independent benchmarks from Iterathon’s 2026 enterprise guide report GraphRAG achieving 83-87% accuracy on complex multi-hop queries versus Vector RAG’s 68-72%. Microsoft’s own evaluation found GraphRAG improved comprehensiveness by 26% and diversity by 57% over standard vector retrieval. These numbers are promising, but a caveat: most published benchmarks come from vendors or researchers with a stake in the outcome. Independent, apples-to-apples comparisons across Microsoft GraphRAG, Neo4j + LangChain, LlamaIndex, and TrustGraph on the same dataset remain conspicuously absent from the literature.

Case 3: The Lost-in-the-Middle Catastrophe

Here is the one that should worry every engineer relying on long context windows as a substitute for proper retrieval. Research by Liu et al. at Stanford demonstrated that LLMs consistently fail to use information placed in the middle of long contexts, even when the context window is enormous.

“Language models exhibit significantly degraded performance when relevant information is positioned in the middle of long contexts, even for models explicitly designed for long-context processing.” — Liu et al., “Lost in the Middle: How Language Models Use Long Contexts”, TACL 2024

TrustGraph’s own testing confirms this pattern holds across models. Chunks of 1,000 tokens extracted 2,153 graph edges. Chunks of 8,000 tokens extracted only 1,352. That is a 59% increase in extracted knowledge just from chunking smaller, using only 4% of the available context window. At 500 tokens, the system extracted 2,975 edges, a 120% improvement over 8,000-token chunks. This pattern held across eight models from six providers: Claude, Gemini, Mistral, Cohere, Llama, and others.

Long context windows, on their own, do not solve retrieval. Not because the models are bad, but because the transformer attention mechanism dilutes focus as token count rises, and this appears to be inherent to the architecture itself. Context graphs sidestep the problem entirely: instead of cramming everything into a massive context, you extract a small, focused, structured subgraph. The LLM gets exactly what it needs and nothing else.
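The chunking remedy is mechanical. This TypeScript sketch splits on whitespace as a crude proxy for tokens; a real pipeline would use a model tokenizer and respect sentence boundaries.

```typescript
// Minimal chunker sketch: split text into chunks of roughly
// `maxTokens` tokens, using whitespace-separated words as a crude
// token proxy. Real pipelines use a proper tokenizer and try not
// to cut mid-sentence.
function chunkText(text: string, maxTokens: number): string[] {
  const words = text.split(/\s+/).filter((w) => w.length > 0);
  const chunks: string[] = [];
  for (let i = 0; i < words.length; i += maxTokens) {
    chunks.push(words.slice(i, i + maxTokens).join(" "));
  }
  return chunks;
}
```

Chunking at 500-1,000 tokens keeps each extraction call inside the region of the context window the model actually attends to well.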

Dark chart showing declining knowledge extraction as chunk size increases, with cyan data points on black background
Bigger context windows, worse extraction. The lost-in-the-middle effect is real and it is not getting better.

How to Actually Deploy This: From Zero to Context Graph

TrustGraph is open source (Apache 2.0) and deploys via Docker Compose in minutes. Here is the real pipeline, not the marketing version:

Step 1: Configure and Deploy

# Install and configure TrustGraph
npx @trustgraph/config

# Interactive prompts:
# ? Select your LLM provider: Anthropic / OpenAI / Google / Mistral / Ollama
# ? Select deployment target: Docker / Kubernetes / Minikube
# Generates docker-compose.yaml and INSTALLATION.md

# Deploy
docker compose up -d

# Workbench available at http://localhost:8888
# Grafana monitoring at http://localhost:3000

Step 2: Ingest Documents and Build the Graph

# Create a collection
tg-set-collection \
  -n "Company Docs" \
  -d "Internal documentation" \
  company-docs

# Add a document
tg-add-library-document \
  --name "Security Policy 2025" \
  --id doc-security-2025 \
  --kind application/pdf \
  documents/security-policy.pdf

# Create a GraphRAG flow (no ontology needed)
tg-start-flow \
  -n graph-rag \
  -i security-graphrag \
  -d "Security document knowledge extraction"

# Process the document
tg-start-library-processing \
  --flow-id security-graphrag \
  --document-id doc-security-2025 \
  --collection company-docs

Step 3: Query With Explainability

# GraphRAG query with full provenance
tg-invoke-graph-rag \
  -f security-graphrag \
  -C company-docs \
  -q "What are our top cybersecurity vulnerabilities?"

# Or via the REST API
curl -X POST http://localhost:8001/api/invoke/graph-rag \
  -H "Content-Type: application/json" \
  -d '{
    "flow-id": "security-graphrag",
    "collection": "company-docs",
    "query": "What are our top cybersecurity vulnerabilities?",
    "max-entities": 50,
    "relevance-threshold": 0.7,
    "include-provenance": true
  }'

The TypeScript client library (@trustgraph/client) provides WebSocket-based real-time communication for building production UIs. Python and CLI interfaces are also available.

Step 4: Add Ontologies for Precision (Optional but Recommended)

# Upload an OWL ontology
cat domain-ontology.owl | tg-put-config-item \
  --type ontology \
  --key security-ontology \
  --stdin

# Create an OntologyRAG flow
tg-start-flow \
  -n onto-rag \
  -i security-onto-rag \
  -d "Ontology-driven security knowledge extraction"

# Process with ontology enforcement
tg-start-library-processing \
  --flow-id security-onto-rag \
  --document-id doc-security-2025 \
  --collection company-docs

The Unglamorous Reality: What Graph RAG Actually Costs You

Every GraphRAG vendor demo shows the happy path. Here is what they leave out.

Ingestion Is Expensive and Slow

Building a knowledge graph requires running every document chunk through an LLM for entity and relationship extraction. This is not free. Microsoft’s original GraphRAG architecture dedicates roughly 75% of total indexing cost to graph extraction alone. One production deployment reported $33,000 in indexing costs for a large dataset before a single query was run. A 10,000-document corpus that costs under $5 to embed in a vector database costs $50-200 to process through a GraphRAG pipeline. For context: that is a 10-40x cost multiplier at ingestion time.

Entity Resolution Is the Silent Killer

When your LLM extracts entities from thousands of documents, it will create duplicates. “IBM”, “International Business Machines”, “IBM Corp”, and “Big Blue” are all the same entity. If your entity resolution accuracy drops below roughly 85%, the errors compound exponentially through multi-hop queries. At 85% accuracy with 5 hops, fewer than half your answers remain trustworthy (0.85^5 = 44%). This is not a theoretical problem; it is the most common failure mode in production GraphRAG systems, and neither TrustGraph nor anyone else has fully solved it.
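The arithmetic behind that claim is worth seeing. Per-hop resolution accuracy compounds multiplicatively, so reliability decays exponentially with hop count:

```typescript
// Per-hop entity-resolution accuracy compounds multiplicatively: a
// query that chains h hops through the graph is only trustworthy if
// every single hop resolved its entities correctly.
function multiHopReliability(perHopAccuracy: number, hops: number): number {
  return Math.pow(perHopAccuracy, hops);
}

// 85% per-hop accuracy over 5 hops leaves ~44% of answers trustworthy
const fiveHop = multiHopReliability(0.85, 5); // ~0.4437
```

This is why entity resolution quality, not graph size, is usually the first number to obsess over in a production deployment.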

Ontology Maintenance Is a Governance Burden

TrustGraph’s OntologyRAG produces more precise graphs, no question. But someone has to design that ontology, maintain it as your domain evolves, and ensure new documents conform to the schema. In practice, this means a dedicated knowledge engineer or a committee that reviews and updates the ontology quarterly. For organisations that already struggle to maintain a data dictionary, adding OWL ontology governance is a non-trivial ask.

Three Indexes, Three Consistency Problems

Production graph RAG requires keeping three synchronised stores: a graph index for structural traversal, a vector index for semantic similarity, and often a text index for full-text search. Every document addition, update, or deletion must propagate across all three and trigger entity-resolution re-evaluation. This is, bluntly, a data engineering nightmare that most demos conveniently skip.
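A sketch of the triple-write problem, with illustrative interfaces: a naive loop that upserts into each store in turn strands the earlier writes if a later one throws, which is why production systems reach for outbox patterns or transactional messaging.

```typescript
// Sketch of the triple-write problem. The Index interface and
// MemoryIndex class are illustrative, not TrustGraph's API.
interface Index {
  upsert(docId: string): void;
  remove(docId: string): void;
}

class MemoryIndex implements Index {
  ids = new Set<string>();
  upsert(docId: string): void { this.ids.add(docId); }
  remove(docId: string): void { this.ids.delete(docId); }
}

// Naive propagation: if the vector upsert throws after the graph
// upsert succeeded, the stores now disagree.
function propagateUpsert(docId: string, stores: Index[]): void {
  for (const store of stores) {
    store.upsert(docId); // any throw here strands the earlier writes
  }
}

// graph, vector, and text indexes all need the same mutation
const graph = new MemoryIndex();
const vector = new MemoryIndex();
const text = new MemoryIndex();
propagateUpsert("doc-security-2025", [graph, vector, text]);
```

The happy path is trivial; the engineering cost lives entirely in the failure paths this sketch deliberately does not handle.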

Extraction Hallucinations Are Real

The LLM that extracts entities and relationships from your documents will hallucinate some of them. It will invent relationships that do not exist in the source text, misattribute properties, and occasionally create phantom entities. These extraction hallucinations then become “facts” in your knowledge graph, where they are retrieved with the same confidence score as legitimate data. Garbage in, graph out. Every production deployment needs a quality assurance pipeline to catch extraction errors, and most teams underestimate this effort.

Query Latency Is Not Milliseconds

Vector search returns results in single-digit milliseconds. Graph RAG queries involve: vector lookup to find entry points, graph traversal across multiple hops, LLM-based relevance scoring of candidate edges, subgraph assembly, and finally LLM generation. End-to-end latency is typically 2-15 seconds depending on graph size and traversal depth. For interactive applications where users expect sub-second responses, this is a hard constraint that no amount of clever engineering fully eliminates.

When Context Graphs Are Essential (Real Use Cases)

Context graphs are not a universal hammer. They are a precision instrument for specific categories of problem. Here is where they earn their keep:

  • Financial compliance and audit: A financial analyst querying regulatory exposure across multiple counterparties needs multi-hop reasoning across hundreds of documents. Every answer must be traceable to source documents for regulatory compliance. SowFin, a corporate finance company, uses TrustGraph to bring accurate, explainable insights to exactly this kind of work.
  • Security operations: Huntbase uses TrustGraph to build Context Cores for SecOps, where AI hallucinations in threat detection are not just inconvenient but dangerous. Cybersecurity requires connecting events, metadata, and threat indicators across thousands of log entries with full provenance.
  • Medical and clinical research: Clinical informaticists analysing treatment interactions across patient comorbidities need graph traversal to connect drugs, conditions, contraindications, and outcomes across multiple clinical databases. Approximate similarity search is not acceptable when lives are involved.
  • Supply chain management: Tracing component dependencies multiple tiers deep requires genuine relationship traversal. “Which suppliers are affected if factory X in Shenzhen shuts down?” demands multi-hop graph queries that Vector RAG simply cannot do.
  • Legal document analysis: Connecting clauses across contracts, precedents across cases, and regulations across jurisdictions. Every connection must be verifiable and traceable.
  • Enterprise knowledge management: The “monograph” approach (a single unified graph across all your organisation’s knowledge) enables discovery of relationships across departments and domains that siloed systems miss. This is not unique to TrustGraph; any sufficiently connected knowledge graph achieves this, whether built with Neo4j, Microsoft GraphRAG, or TrustGraph.

When Context Graphs Are Overkill (Be Honest With Yourself)

Now for the part that most GraphRAG vendors would rather you did not read. Context graphs are genuinely overkill for a significant number of common AI use cases. Using one when you do not need one is like hiring a structural engineer to hang a picture frame.

  • Small datasets that fit in context: If your entire corpus is under 50 pages (roughly 40,000 tokens), skip RAG entirely. Stuff it all into the prompt. It costs $0.01 per query versus $0.05 for a RAG pipeline, deploys in a day versus four weeks, and the LLM can attend to all of it directly. No chunking, no embeddings, no graph. Simple prompt engineering wins.
  • General knowledge queries: Questions the LLM already knows the answer to (world history, common programming patterns, basic science) gain nothing from RAG. You are adding latency without improving accuracy.
  • Simple semantic lookup: “Find me documents similar to this one.” A vector store alone is faster, cheaper, and simpler. You do not need graph traversal for similarity search.
  • Ephemeral data with unstable entities: If your corpus changes hourly and the entities and relationships are not stable enough to maintain, the cost of continuous knowledge extraction will exceed the value. A vector store with frequent re-indexing may be more practical.
  • Speed-critical applications: Vector RAG delivers millisecond responses. GraphRAG takes seconds, sometimes minutes for complex traversals. If sub-100ms latency is a hard requirement, graphs add unacceptable overhead.
  • Prototyping and MVPs: Vector RAG takes hours to set up. A full knowledge graph pipeline takes weeks. For a proof of concept, start with Vector RAG and upgrade to GraphRAG only when you have evidence that relationship-aware retrieval would improve your results.
  • Single-fact lookup: “What is the capital of France?” Published benchmarks put both approaches at roughly 94-95% accuracy on simple factual queries. The graph adds no value here.

The honest decision matrix: if your questions require understanding relationships between entities, connecting information across multiple documents, or producing explainable, auditable answers, you need a graph. But “need a graph” does not mean “need TrustGraph specifically”. A Neo4j instance with LangChain retrieval chains, Microsoft GraphRAG with community summaries, or LlamaIndex’s knowledge graph module may be simpler to deploy, cheaper to run, and sufficient for your use case. Evaluate the alternatives before committing to the heaviest solution. And if your data fits in a context window, you might not need RAG at all.

The Neuro-Symbolic Promise (and Why This Actually Matters)

Daniel Davis makes a point in the demo that deserves its own section. The deep learning camp believed that enough data and compute would magically produce ground truth. Throw enough parameters at the problem and the model would learn to reason. The neuro-symbolic camp argued you would always need richer semantic structures because language is fundamentally ambiguous, and statistical pattern matching cannot resolve that ambiguity alone.

Context graphs are the practical vindication of the neuro-symbolic position. The LLM handles what it is good at: understanding natural language queries, interpreting intent, generating fluent responses. The graph handles what it is good at: storing precise facts, maintaining relationships, providing provenance, enabling deterministic traversal. Neither can solve the full problem alone. Together they produce something that neither approach could achieve independently.

This division of labour, as described in the TrustGraph demo, is not just a technical architecture decision. It is a philosophical one about what AI systems should and should not be trusted to do. LLMs should generate language. They should not be trusted as databases. Graphs should store and retrieve facts. They should not be expected to understand natural language. Each doing what it does best: that is the future of reliable AI systems.


What to Check Right Now

  • Audit your current RAG pipeline’s failure modes. Ask it multi-hop questions that require connecting information across documents. If it fails or hallucinates, you have a graph-shaped problem.
  • Test the “same question, different words” scenario. Ask semantically equivalent questions and compare outputs. If the answers diverge wildly, your retrieval layer lacks semantic understanding.
  • Measure your chunk sizes. If you are chunking above 1,000 tokens, you are likely losing information to the lost-in-the-middle effect. Consider chunking at 500-1,000 tokens regardless of your context window size.
  • Evaluate whether you actually need a graph. Run the honest assessment: does your use case require multi-hop reasoning, explainability, or relationship traversal? If not, a well-tuned Vector RAG pipeline might be all you need.
  • Try TrustGraph locally. Run npx @trustgraph/config, choose Docker, and docker compose up -d. Load a few documents and explore the Workbench. You can have a working context graph in under an hour. It is free and open source (Apache 2.0).
  • Check your explainability requirements. If you are building for regulated industries (finance, healthcare, legal), ask whether you can trace every AI-generated answer back to its source documents. If the answer is no, context graphs are not optional, they are mandatory.
Dark minimalist checklist visualization with glowing cyan checkmarks on black, tech aesthetic
The real question is not whether context graphs are useful. It is whether your use case demands them.

Video Attribution

This article is based on the TrustGraph demo “Context Graphs in Action” by Daniel Davis and Mark Adams. The video demonstrates TrustGraph 2’s context graph capabilities, explainability features, and source provenance using a London venues dataset. No marketing, no hype, just a real demo of real context graphs.


TrustGraph is open source and available at github.com/trustgraph-ai/trustgraph. Documentation at docs.trustgraph.ai. Community on Discord.

nJoy 😉

Google’s TurboQuant Just Halved the Cost of Running Every AI Model on Earth

Google just published a compression algorithm so efficient that it sent memory chip stocks tumbling across three continents in a single trading session. SK Hynix down 6%. Samsung down 5%. Micron bleeding for six days straight. Billions of dollars in market capitalisation evaporated because a team of researchers figured out a cleverer way to point at things. That is not a metaphor. That is literally what they did. Welcome to TurboQuant, the algorithm that halves the cost of running every large language model on the planet, and the wildest part is that Google just gave it away for free.

Dark abstract visualization of AI memory compression with polar coordinates, cyan and deep blue vectors converging on black background
TurboQuant: pointing instead of giving directions

What the KV Cache Actually Is (And Why Everyone Should Care)

Before we get into what Google built, you need to understand the bottleneck they solved. Every large language model, whether it is ChatGPT, Claude, Gemini, or Llama, runs on the transformer architecture. And transformers have this mechanism called attention, which is how the model figures out what words mean in context.

Here is a quick thought experiment. If I say “it was tired,” you have no idea what “it” refers to. A dog? A server? A metaphor for the state of modern JavaScript? But if I say “the animal didn’t cross the street because it was too tired,” suddenly “it” is loaded with meaning. It is an animal. It didn’t cross. It was tired. Your brain just did what transformers do: it looked at the surrounding words to figure out what one word actually means.

The problem is that transformers need to remember these relationships. Every time the model processes a token, it calculates how that token relates to every other token it has seen so far. These relationships get stored in what is called the key-value cache (KV cache). Think of it as a filing cabinet. Each “folder” has a label on the front (the key, which is a rough tag so the model can find it quickly) and detailed notes inside (the value, which is the actual rich meaning and relationships).

The catch? This filing cabinet grows linearly with context length. A 128K context window means 128,000 tokens worth of folders, each containing high-dimensional vectors stored at 16-bit precision. For a model like Llama 3.1 with 8 billion parameters, the KV cache alone can eat several gigabytes of GPU memory. For larger models with longer contexts, it becomes the single biggest memory bottleneck in the entire inference pipeline. Not the model weights. Not the activations. The KV cache.
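You can estimate the damage yourself. The sketch below uses commonly cited Llama 3.1 8B dimensions (32 layers, 8 KV heads under grouped-query attention, head dimension 128) at 16-bit precision; treat the figures as a back-of-envelope estimate, not a vendor specification.

```typescript
// Back-of-envelope KV cache size. Per token, each layer stores one
// key and one value vector per KV head at the given precision.
// The example dimensions below are commonly cited figures for
// Llama 3.1 8B (grouped-query attention), used here as assumptions.
function kvCacheBytes(
  tokens: number,
  layers: number,
  kvHeads: number,
  headDim: number,
  bytesPerValue: number,
): number {
  return tokens * layers * 2 /* K and V */ * kvHeads * headDim * bytesPerValue;
}

const bytes = kvCacheBytes(128_000, 32, 8, 128, 2);
const gib = bytes / 2 ** 30; // ~15.6 GiB at fp16 for a full 128K context
```

At a full 128K context that works out to roughly 15.6 GiB for the cache alone, which is why halving its precision halves the dominant memory cost of long-context inference.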

“Vector quantization is a powerful, classical data compression technique that reduces the size of high-dimensional vectors. This optimization addresses two critical facets of AI: it enhances vector search […] and it helps unclog key-value cache bottlenecks by reducing the size of key-value pairs.” — Google Research, TurboQuant Blog Post (March 2026)

Traditional approaches to compressing the KV cache use something called quantisation, which reduces the precision of the stored numbers. Instead of 16 bits per value, you use 8 bits, or 4 bits. The problem is that most quantisation methods need to store calibration constants (a zero point and a scale factor) for every small block of data. These constants have to be stored at full precision, which adds 1-2 extra bits per number. You are trying to compress, but your compression metadata is eating into your savings. It is like buying a wallet so expensive it defeats the purpose of saving money.
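The overhead is easy to quantify. Assuming a 16-bit zero point and a 16-bit scale per block (a common scheme, used here as an assumption), the effective bits per value never drop to the nominal rate:

```typescript
// Effective storage cost of block-wise quantisation: each block of
// `blockSize` values pays for a full-precision zero point and scale
// on top of the quantised values themselves. Constants here assume
// fp16 calibration metadata, a typical but not universal choice.
function bitsPerValue(quantBits: number, blockSize: number): number {
  const calibrationBits = 2 * 16; // zero point + scale at fp16
  return quantBits + calibrationBits / blockSize;
}

// Nominal 4-bit quantisation over 32-value blocks: 4 + 32/32 = 5 bits
const effective = bitsPerValue(4, 32);
```

Nominal 4-bit quantisation over 32-value blocks actually spends 5 bits per value, and shrinking the blocks to preserve accuracy makes the overhead worse. That is the wallet eating the savings.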

PolarQuant: The Art of Pointing Instead of Giving Directions

This is where Google’s insight gets genuinely elegant. Imagine you are standing in a city and someone asks you how to get to an office on the third floor of a building two blocks east and three blocks north. The standard approach is step-by-step Cartesian directions: go two blocks east, then three blocks north, then up three floors. Each dimension gets its own coordinate.

But there is another way. You could just point at the building and say “it is 500 feet away in that direction.” One angle, one distance. Same destination, less information to store.

That is PolarQuant. Instead of storing each dimension of a vector independently (the Cartesian way), it converts the vector into polar coordinates: a radius (how strong or important the data is) and an angle (what direction it points in, which encodes its meaning).

“Instead of looking at a memory vector using standard coordinates that indicate the distance along each axis, PolarQuant converts the vector into polar coordinates […] This is comparable to replacing ‘Go 3 blocks East, 4 blocks North’ with ‘Go 5 blocks total at a 37-degree angle’.” — Google Research, TurboQuant Blog Post

Why is this so much more compressible? Here is the key mathematical insight. When you randomly rotate high-dimensional vectors (which is PolarQuant’s first step), something beautiful happens: the coordinates follow a concentrated Beta distribution. In plain English, the angles cluster tightly into a predictable, narrow range. They are not scattered randomly across all possible values. They bunch up.

This means the model no longer needs to perform expensive data normalisation. Traditional methods map data onto a “square” grid where the boundaries change constantly and need to be recalculated and stored for every block. PolarQuant maps data onto a fixed, predictable “circular” grid where the boundaries are already known. No calibration constants needed. No overhead.

Here is a concrete way to think about it. Imagine you are mapping people on a 2D chart where the X-axis is age and the Y-axis represents some semantic concept. In Cartesian coordinates, you store (x, y) for each person. In polar coordinates, you store (distance from origin, angle). The angle between “grandmother” and “grandfather” is predictable. The angle between “boy” and “girl” is predictable. These patterns are exploitable for compression precisely because they are so regular in high dimensions.

// Cartesian: store each dimension independently
// For a d-dimensional vector, you need d values at full precision
const cartesian = { x: 3.14159, y: 2.71828, z: 1.41421 };
// Plus quantisation overhead: zero_point + scale per block
// Adds 1-2 extra bits per value

// Polar (PolarQuant): store radius + angles
// Same vector in spherical form: radius = |v|,
// azimuth = atan2(y, x), inclination = acos(z / radius)
const polar = { radius: 4.3885, azimuth: 0.7133, inclination: 1.2427 };
// After random rotation, the angles are tightly concentrated
// in a predictable, narrow range -- no calibration constants needed
// Quantise directly onto a fixed grid -- zero overhead
Dark technical diagram showing Cartesian to polar coordinate transformation, amber vectors on deep blue grid, black background
From step-by-step directions to a single compass bearing

QJL: The 1-Bit Error Checker That Makes It Lossless

PolarQuant does the heavy lifting. It is responsible for the bulk of the compression. But no compression is perfect, and PolarQuant leaves behind a tiny residual error. This is where the second component comes in, and it is arguably just as clever.

The Quantised Johnson-Lindenstrauss (QJL) algorithm takes the small error left over from PolarQuant and squashes it down to a single sign bit per value: +1 or -1. That is it. One bit. The technique is based on the Johnson-Lindenstrauss lemma, a foundational result in dimensionality reduction that says you can project high-dimensional data into a much lower-dimensional space whilst preserving the distances between points.

What QJL does specifically is eliminate bias in the inner product estimation. This is critical because attention scores in transformers are computed as inner products (dot products) between query and key vectors. If your compression introduces a systematic bias in these dot products, the model’s attention mechanism starts paying attention to the wrong things. It is like having a compass that is consistently off by 3 degrees; every direction you follow drifts further from where you actually want to go.

QJL uses a special estimator that balances a high-precision query vector against the low-precision compressed data. The result is an unbiased inner product estimate with zero memory overhead. The 1-bit correction is so small it is essentially free to store, but it perfectly cancels out the residual error from PolarQuant.
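The estimator can be sketched in a few lines of Monte Carlo (a toy illustration of a sign-bit Johnson-Lindenstrauss estimator under Gaussian projections, not Google's implementation; the √(π/2) rescaling is the standard correction that makes a Gaussian sign estimator unbiased):

```javascript
// Toy QJL-style sketch: the key survives as one sign bit per random
// projection, the query stays at full precision, and a sqrt(pi/2)
// rescaling removes the bias introduced by the sign nonlinearity.
function randn() { // standard normal via Box-Muller
  const u = Math.random() || 1e-12, v = Math.random();
  return Math.sqrt(-2 * Math.log(u)) * Math.cos(2 * Math.PI * v);
}

const d = 16, m = 50000; // toy sizes: dimension, number of projections
const q = Array.from({ length: d }, randn); // full-precision query
const k = Array.from({ length: d }, randn); // key to be compressed
const kNorm = Math.hypot(...k);
const trueDot = q.reduce((s, qi, i) => s + qi * k[i], 0);

let acc = 0;
for (let i = 0; i < m; i++) {
  const s = Array.from({ length: d }, randn); // random Gaussian row
  let sk = 0, sq = 0;
  for (let j = 0; j < d; j++) { sk += s[j] * k[j]; sq += s[j] * q[j]; }
  acc += Math.sign(sk) * sq; // only sign(sk) is "stored" for the key
}

// E[sign(<s,k>) * <s,q>] = sqrt(2/pi) * <q,k> / ||k||, so invert that:
const estimate = (kNorm * Math.sqrt(Math.PI / 2) / m) * acc;
console.log(trueDot.toFixed(3), estimate.toFixed(3)); // estimate tracks trueDot
```

The estimate converges on the true dot product with no systematic drift in either direction, which is the property that keeps the attention scores honest.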

// Stage 1: PolarQuant (main compression)
// 16-bit KV cache -> ~3 bits per channel
// Does most of the heavy lifting
// Tiny residual error remains

// Stage 2: QJL (error correction)
// Takes the residual from PolarQuant
// Reduces it to 1 sign bit (+1 or -1) per value
// Eliminates bias in attention score computation
// Memory overhead: essentially zero

// Combined: TurboQuant
// 3-bit KV cache with ZERO accuracy loss
// No retraining, no fine-tuning, no calibration
// Just swap it in and the model stays identical

Together, PolarQuant + QJL = TurboQuant. The compression engine and its error checker. The paper proves that TurboQuant achieves distortion rates within a factor of approximately 2.7 of the information-theoretic lower bound, the absolute mathematical limit of how well any quantiser could ever perform. In the language of information theory, this is approaching the Shannon limit. There is not much room left to improve.

“We also provide a formal proof of the information-theoretic lower bounds on best achievable distortion rate by any vector quantizer, demonstrating that TurboQuant closely matches these bounds, differing only by a small constant (approx 2.7) factor.” — Zandieh et al., TurboQuant: Online Vector Quantization with Near-optimal Distortion Rate, arXiv:2504.19874

The Numbers: What TurboQuant Actually Delivers

Theory is nice, but what actually happened when they tested this on real hardware with real models? Google ran TurboQuant through a gauntlet of benchmarks on open-source models (Gemma, Mistral, Llama) running on NVIDIA H100 GPUs. The results are not incremental. They are a step change.

The Headline Numbers

  • 6x KV cache memory reduction. A cache that previously required 16 bits per value now needs under 3 bits. On a model that was using 6 GB of KV cache memory, you now need roughly 1 GB.
  • Up to 8x attention speedup. The attention computation (the most expensive part of inference) runs up to 8 times faster on H100 GPUs. This does not mean the entire model is 8x faster, but the bottleneck operation is.
  • Zero accuracy loss. At 3.5 bits per channel, TurboQuant achieves what the authors call “absolute quality neutrality.” The compressed model produces identical results to the uncompressed model. Even at 2.5 bits per channel, degradation is marginal.
  • No retraining required. This is not a new model architecture. There is no fine-tuning step, no calibration dataset, no model-specific tuning. You slot TurboQuant into the inference pipeline and the existing model just works better.
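For a sense of scale, here is the standard KV cache sizing arithmetic with illustrative dimensions (the 32-layer, 8-KV-head, 128-dim configuration below is an assumed mid-size model, not Gemma's actual spec):

```javascript
// KV cache bits = 2 (K and V) x layers x kv_heads x head_dim x tokens x bits/value
function kvCacheGiB(layers, kvHeads, headDim, tokens, bitsPerValue) {
  const bits = 2 * layers * kvHeads * headDim * tokens * bitsPerValue;
  return bits / 8 / 1024 ** 3;
}

// Illustrative model: 32 layers, 8 KV heads, head_dim 128, 128K-token context
const fp16 = kvCacheGiB(32, 8, 128, 131072, 16);  // 16 GiB at 16 bits
const turbo = kvCacheGiB(32, 8, 128, 131072, 3);  // 3 GiB at 3 bits
console.log(fp16, turbo);
```

At a flat 3 bits the ratio is 16/3 ≈ 5.3x; the full 6x headline corresponds to the sub-3-bit settings the paper reports.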

Benchmark Breakdown

The team tested across five major long-context benchmarks:

  • LongBench — question answering, summarisation, code generation across diverse tasks
  • Needle in a Haystack — finding one specific piece of information buried in massive documents
  • ZeroSCROLLS — long-document understanding tasks
  • RULER — synthetic benchmarks that stress-test context window utilisation
  • L-Eval — comprehensive evaluation of long-context capabilities

Across all of them, TurboQuant achieved perfect downstream results whilst reducing KV cache memory by at least 6x. PolarQuant alone was nearly lossless. With QJL added on top, it became mathematically unbiased.

Dark performance chart showing compression ratios and speedup metrics, cyan bars on dark grid, minimal tech aesthetic on black
6x compression, 8x speedup, zero loss. The rare triple.

The Stock Market Bloodbath (And Why Analysts Say Calm Down)

Google published TurboQuant on 24 March 2026. Within 48 hours, billions of dollars had been wiped off memory chip stocks across three continents.

The logic seemed straightforward: if AI models need 6x less memory, companies that make memory chips are going to sell fewer chips. Right?

The Damage Report

  • SK Hynix (South Korea) — down 6.23%
  • Samsung (South Korea) — down nearly 5%
  • Kioxia (Japan) — down nearly 6%
  • Micron (USA) — down over 20% across six trading sessions
  • SanDisk (USA) — down 11%
  • Western Digital (USA) — down 6.7%
  • Seagate (USA) — down 8.5%

The broader Korean KOSPI index fell as much as 3%. Matthew Prince, CEO of Cloudflare, called it “Google’s DeepSeek moment,” referencing the January 2025 DeepSeek sell-off that wiped nearly a trillion dollars off the Nasdaq.

But here is the thing. Analysts are not panicking. In fact, most of them are telling investors to buy the dip.

Ray Wang, a memory analyst at SemiAnalysis, told CNBC:

“When you address a bottleneck, you are going to help AI hardware to be more capable. And the training model will be more powerful in the future. When the model becomes more powerful, you require better hardware to support it.” — Ray Wang, SemiAnalysis, via CNBC (March 2026)

Ben Barringer, head of technology research at Quilter Cheviot, was even more direct: “Memory stocks have had a very strong run and this is a highly cyclical sector, so investors were already looking for reasons to take profit. The Google Turboquant innovation has added to the pressure, but this is evolutionary, not revolutionary. It does not alter the industry’s long-term demand picture.”

For context, memory stocks had been on an absolute tear before this. Samsung was up nearly 200% over the prior year. SK Hynix and Micron were up over 300%. A correction was arguably overdue, and TurboQuant gave skittish investors the excuse they needed.

Jevons Paradox: Why Efficiency Makes You Use More, Not Less

The most important framework for understanding TurboQuant’s long-term impact is not computer science. It is economics. Specifically, a concept from 1865.

In The Coal Question, economist William Stanley Jevons documented something counterintuitive: when James Watt’s innovations made steam engines dramatically more fuel-efficient, Britain’s coal consumption did not fall. It increased tenfold. The efficiency gains lowered coal’s effective cost, which made it economical for new applications and industries. The per-unit savings were overwhelmed by the explosion in total usage.

This is the Jevons paradox, and it has been playing out in AI with striking precision. Between late 2022 and 2025, the cost of running large language models collapsed roughly a thousandfold. GPT-4-equivalent performance dropped from $20 to $0.40 per million tokens. Did people use fewer tokens? Enterprise generative AI spending skyrocketed from $11.5 billion in 2024 to $37 billion in 2025, a 320% increase. When OpenAI dropped API prices by 10x, API calls grew 100x.

The same pattern will almost certainly play out with TurboQuant. If it suddenly costs half as much to run a frontier model, companies will not pocket the savings and go home. They will run bigger models, longer contexts, more agents, more concurrent sessions. Workloads that were previously too expensive become viable. The 200K-context analysis that cost too much to justify? Now it makes business sense. The always-on AI assistant that was too expensive to run 24/7? Now it is affordable.

Morgan Stanley’s analysts made exactly this argument, citing Jevons paradox to characterise the long-term impact on storage demand as “neutral to positive.” The market overpriced the short-term headline and underpriced the second-order effects.

What This Means for Anyone Using AI Right Now

Let us get concrete about who benefits and how.

Enterprises Running Models at Scale

If you are an enterprise running large language models in production, TurboQuant translates roughly to a 50% reduction in inference costs. This is not a marginal optimisation: it applies to every prompt, every API call, every chatbot response, every agentic workflow. API calls get cheaper, responses come back faster, each GPU serves more requests per second, and longer context windows fit without hitting memory limits.

Context Windows Get Bigger on the Same Hardware

If a GPU was maxing out at a certain context length because the KV cache filled the available memory, TurboQuant effectively multiplies the available context by 6x. A model that topped out at 32K tokens on a given GPU could now handle 192K tokens. This is significant for code analysis, legal document review, medical record processing, and any workload where more context means better output.

The Anthropic Mythos Situation

Anthropic’s upcoming Mythos model has been described as “very expensive for us to serve, and will be very expensive for our customers to use.” Early pricing estimates suggest 2-5x the cost of Claude Opus. TurboQuant could meaningfully change that calculus. If inference costs drop by half, a model that was borderline unviable for production use cases suddenly becomes economically defensible. Whether Anthropic adopts TurboQuant specifically or implements similar techniques, the pressure to do so just became enormous.

Individual Power Users

Andrej Karpathy, former Tesla AI lead and OpenAI researcher, recently said in an interview that he gets “nervous when I have subscription left over” because “that just means I haven’t maximised my token throughput.” He now runs multiple AI agents in parallel across separate repository branches, treating token consumption as his primary productivity constraint. NVIDIA CEO Jensen Huang has said he expects employees earning $500,000 to use $250,000 worth of tokens. If TurboQuant halves the cost of those tokens, the effective value of every subscription doubles overnight.

Dark futuristic visualization of AI agents running in parallel across GPU clusters, purple and cyan glow on black background
Same hardware, twice the output. The new math of AI inference.

Google’s Quiet Giant Move: Why They Published Instead of Hoarding

There is a pattern here that deserves attention. In 2017, a team at Google published “Attention Is All You Need” by Vaswani et al., the paper that introduced the transformer architecture. That single paper became the foundation for GPT, Claude, Gemini, Llama, Mistral, and essentially every large language model in existence. Most of Google’s competitors are built on Google’s published research.

They did it again with TurboQuant. Google could have kept this internal. They could have quietly deployed it across their infrastructure, pocketed the 50% cost savings on Gemini inference, and used the competitive advantage to undercut everyone else on pricing. That is the standard playbook. But they published it. The paper is on arXiv. The blog post explains the technique in detail. Community implementations appeared on PyPI and GitHub within days.

This is not altruism (Google benefits enormously from being the company that publishes foundational research, and they have the infrastructure to move fastest on their own inventions). But the effect is real. Every company running AI models, every open-source project, every independent developer benefits from this work being public.

As Martin Kleppmann writes in Designing Data-Intensive Applications, the most impactful systems are often the ones that reduce the cost of doing something by an order of magnitude, because they do not just make existing use cases cheaper; they create entirely new categories of application that were previously uneconomical. TurboQuant is precisely that kind of step change.

When TurboQuant Does Not Apply (The Honest Bit)

No article from this site would be credible without the caveats section, so here they are:

Case 1: Training Is Untouched

TurboQuant is an inference optimisation. It compresses the KV cache, which is used during inference (when the model generates responses). It does not reduce the cost of training a model. The multi-billion-dollar GPU clusters that companies like Google, OpenAI, and Meta use to train frontier models are not affected. Training has its own bottlenecks (gradient accumulation, all-reduce communication, activation memory), and TurboQuant addresses none of them.

Case 2: It Only Compresses the KV Cache

The 6x memory reduction applies specifically to the KV cache, not to the model weights, not to the activations, and not to the total GPU memory usage. For many inference workloads, the KV cache is the dominant memory consumer, especially at long context lengths. But for short prompts on large models, the model weights themselves might be the bottleneck. TurboQuant helps a lot in the first scenario and less in the second.

Case 3: You Still Need GPUs

TurboQuant makes existing hardware more efficient. It does not eliminate the need for GPUs (or TPUs). You still need compute to run models. What changes is how much work each GPU can do. Think of it as improving fuel efficiency in a car: you still need the car, and you still need fuel, but you go further on each tank.

Case 4: The 8x Speedup Is for Attention, Not End-to-End

The headline “8x speedup” refers to the attention computation specifically, not the total inference time. A full model forward pass includes many other operations (feedforward layers, layer norms, embedding lookups). The end-to-end speedup depends on what fraction of total inference time is spent on attention. For long-context workloads, it is a large fraction. For short prompts, less so.
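Amdahl's law makes this caveat precise. If attention takes a fraction f of total inference time and that part alone speeds up 8x, the overall speedup is 1 / ((1 - f) + f / 8):

```javascript
// End-to-end speedup when only the attention fraction f speeds up by 8x
const endToEnd = (f, attnSpeedup = 8) => 1 / ((1 - f) + f / attnSpeedup);

console.log(endToEnd(0.8).toFixed(2)); // attention-dominated long context: 3.33x
console.log(endToEnd(0.2).toFixed(2)); // short prompts: 1.21x
```

The 0.8 and 0.2 fractions are illustrative; profile your own workload to find where it sits on this curve.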

How This Actually Gets Deployed

One of TurboQuant’s strongest properties is how easy it is to adopt. Unlike techniques that require retraining or fine-tuning, TurboQuant is data-oblivious: it works without any dataset-specific preprocessing. The deployment path looks like:

  1. No model changes. The model weights, architecture, and training are all untouched. TurboQuant operates entirely at the inference layer.
  2. Swap the KV cache quantiser. Replace the existing KV cache storage with TurboQuant’s polar coordinate quantisation. This is a software change in the inference engine.
  3. Choose your bit-width. At 3.5 bits per channel, you get zero accuracy loss. At 2.5 bits per channel, you get even more compression with marginal degradation. Pick based on your quality requirements.
  4. Deploy. Run the same prompts, get the same results, use 6x less KV cache memory, and compute attention up to 8x faster.

Community implementations have already appeared. A pip-installable turboquant package is on PyPI. Third-party implementations in MLX (for Apple Silicon) and Triton (for custom GPU kernels) were published within days of the announcement. The official Google code is expected in Q2 2026.

# Community implementation (illustrative)
# pip install turboquant
from turboquant import TurboQuantConfig, apply_turboquant

config = TurboQuantConfig(
    bits_per_channel=3.5,   # Zero accuracy loss
    enable_qjl=True,        # Error correction stage
)

# Apply to any HuggingFace model's KV cache
model = apply_turboquant(model, config)

# Inference runs as normal -- same API, same outputs
# But KV cache is now 6x smaller and attention is up to 8x faster
output = model.generate(input_ids, max_new_tokens=512)

What to Check Right Now

  • Audit your KV cache memory usage. If you are running models in production, profile how much GPU memory your KV cache consumes. If it is a significant fraction of total memory (common for long-context workloads), TurboQuant could give you an immediate and substantial improvement.
  • Watch for framework integration. Keep an eye on vLLM, TensorRT-LLM, and HuggingFace TGI for native TurboQuant support. Once it lands in these frameworks, adoption becomes a config flag.
  • Re-evaluate your context length limits. If you capped context length because of memory constraints, TurboQuant may let you lift those caps on existing hardware. Longer context often means better output quality.
  • Read the actual paper. The TurboQuant paper (arXiv:2504.19874) and the PolarQuant paper (arXiv:2502.02617) are both well-written and surprisingly accessible. The Google Research blog post is an excellent entry point if you want the intuition without the proofs.
  • Don’t panic-sell memory stocks based on headlines. The Jevons paradox has held true for every major compute efficiency improvement in history. Efficiency does not reduce demand; it creates it. The analysts calling this “evolutionary, not revolutionary” for the memory industry are probably right.
  • Try it yourself. The community turboquant PyPI package and the turboquant-pytorch GitHub repo let you test it on your own models today.

Video Attribution

This article was inspired by Wes Roth’s excellent breakdown of TurboQuant. Watch the full video below:


nJoy 😉

Lesson 55 of 55 (Capstone): Full MCP Platform – Registry, Gateway, and Agents

This final capstone assembles everything from the course into a complete MCP platform: a registry for server discovery, an API gateway for authentication and routing, a collection of domain-specific MCP servers, and a web interface where teams can explore available tools, run agent queries, and review audit logs. When you deploy this platform, you have the infrastructure that enterprise teams need to build and manage AI-powered workflows on MCP.

Full MCP platform architecture registry gateway domain servers web interface audit logs monitoring dark
The complete MCP platform: registry, gateway, domain servers, and a management web interface.

Platform Architecture Overview

Component | Purpose | Lesson Reference
MCP Registry | Server discovery and health tracking | Lesson 44
API Gateway | Auth (OAuth), rate limiting, routing | Lessons 31, 41
Domain MCP Servers | Business tools (CRM, docs, analytics) | Parts I-III
Multi-Provider Agent | Route queries to OpenAI/Claude/Gemini | Lessons 28-30
Audit Service | Structured logs, compliance reporting | Lesson 35
Observability Stack | Prometheus + Grafana + OpenTelemetry | Lesson 42
Management UI | Tool explorer, query interface, logs | This lesson

Every row in this table maps to a lesson you have already completed. The capstone’s job is not to teach new concepts but to show how they compose into a real system. In production, these components run as separate services that communicate over HTTP and message queues, so a failure in analytics does not bring down the gateway or registry.

Platform Bootstrap Script

// platform/bootstrap.js
// Register all MCP servers with the registry on startup

const REGISTRY_URL = process.env.REGISTRY_URL ?? 'http://localhost:4000';

const MCP_SERVERS = [
  {
    id: 'products',
    name: 'Product Catalog Server',
    description: 'Search, browse, and manage product catalog',
    url: process.env.PRODUCTS_SERVER_URL,
    tags: ['products', 'catalog', 'inventory'],
    auth: { type: 'bearer' },
    healthUrl: `${process.env.PRODUCTS_SERVER_URL}/health`,
  },
  {
    id: 'analytics',
    name: 'Analytics Server',
    description: 'Business metrics, trends, and reports',
    url: process.env.ANALYTICS_SERVER_URL,
    tags: ['analytics', 'metrics', 'reports'],
    auth: { type: 'bearer' },
    healthUrl: `${process.env.ANALYTICS_SERVER_URL}/health`,
  },
  // ... more servers
];

async function registerAll() {
  for (const server of MCP_SERVERS) {
    const res = await fetch(`${REGISTRY_URL}/servers`, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(server),
    });
    if (!res.ok) {
      console.error(`Failed to register ${server.name}: HTTP ${res.status}`);
      continue;
    }
    console.log(`Registered: ${server.name}`);
  }
}

await registerAll();

Registry-driven discovery is what makes this platform extensible. When a new team wants to expose their internal API as an MCP server, they register it here and it becomes automatically available to the agent and the management UI. No code changes, no redeployment of the gateway – just a single POST to the registry endpoint.

Management API

// platform/management-api.js
// REST API for the management UI

import express from 'express';
// Helpers built in earlier lessons (import paths illustrative)
import { McpDiscoveryClient } from './discovery-client.js';
import { createAgent } from '../agent/router.js';
import { getUserScope } from './rbac.js';
import { auditDb } from './audit-db.js';

const REGISTRY_URL = process.env.REGISTRY_URL ?? 'http://localhost:4000';

const app = express();
app.use(express.json());

// List all registered MCP servers with health
app.get('/api/platform/servers', async (req, res) => {
  const response = await fetch(`${REGISTRY_URL}/status`);
  res.json(await response.json());
});

// List all tools from all healthy servers
app.get('/api/platform/tools', async (req, res) => {
  const discovery = new McpDiscoveryClient(REGISTRY_URL);
  await discovery.connect();
  const tools = await discovery.getAllTools();
  res.json({ tools, count: tools.length });
});

// Execute an agent query
app.post('/api/platform/query', async (req, res) => {
  const { question, provider = 'auto', userId } = req.body;
  // Rate limit, auth check, then:
  const agent = await createAgent({ scope: getUserScope(userId), preferredProvider: provider });
  try {
    const answer = await agent.run(question);
    res.json({ answer });
  } finally {
    await agent.close();
  }
});

// Get audit logs for a user
app.get('/api/platform/audit', async (req, res) => {
  const { userId, from, to, limit = 50 } = req.query;
  const logs = await auditDb.query({ userId, from, to, limit });
  res.json({ logs });
});

app.listen(5000, () => console.log('Management API on :5000'));
Platform component interaction diagram registry discovery client agent router domain servers management UI dark
Component interaction: the discovery client queries the registry, builds the tool set, and routes through the agent.

One risk in a distributed platform like this: if the registry goes down, no new agent sessions can discover tools. The management API’s /tools endpoint depends on a live registry connection. In production, cache the last-known server list in the gateway so it can continue serving requests even during a brief registry outage.
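One way to implement that fallback is a thin stale-while-unreachable cache around the registry call (a sketch; the function names and 30-second TTL here are illustrative, not from the course code):

```javascript
// Sketch: serve a stale server list if the registry is briefly unreachable.
// The TTL and names are illustrative choices, not from the course code.
function createRegistryCache(registryUrl, ttlMs = 30_000) {
  let cached = null; // last successful response
  let fetchedAt = 0;

  return async function getServers() {
    if (cached && Date.now() - fetchedAt < ttlMs) return cached;
    try {
      const res = await fetch(`${registryUrl}/servers`);
      if (!res.ok) throw new Error(`registry returned ${res.status}`);
      cached = await res.json();
      fetchedAt = Date.now();
    } catch (err) {
      if (!cached) throw err; // no fallback available: surface the error
      console.warn('registry unreachable, serving stale list:', err.message);
    }
    return cached;
  };
}
```

The trade-off is staleness: a server registered during an outage will not appear until the registry recovers, which is usually acceptable for a discovery path that changes rarely.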

The audit endpoint at /api/platform/audit is what compliance teams will query most frequently. It lets managers review what their team asked the AI, which tools it called, and whether any requests failed. Without this, AI assistants become a black box that security teams will rightly refuse to approve.

Docker Compose – Full Platform

services:
  registry:
    build: ./registry
    ports: ["4000:4000"]
    depends_on: [redis]

  gateway:
    build: ./gateway
    ports: ["3000:3000"]
    environment:
      REGISTRY_URL: http://registry:4000
    depends_on: [registry, redis]

  management-api:
    build: ./platform
    ports: ["5000:5000"]
    environment:
      REGISTRY_URL: http://registry:4000
    depends_on: [gateway, registry]

  products-server:
    build: ./servers/products
    environment:
      DATABASE_URL: ${PRODUCTS_DB_URL}

  analytics-server:
    build: ./servers/analytics
    environment:
      DATABASE_URL: ${ANALYTICS_DB_URL}

  redis:
    image: redis:7-alpine

  prometheus:
    image: prom/prometheus:v2.50.0
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
    ports: ["9090:9090"]

  grafana:
    image: grafana/grafana:10.3.0
    ports: ["3001:3000"]
    depends_on: [prometheus]

Eight services in a single Compose file. This is a realistic local development setup, but for production you would break these into separate deployment units – the gateway and domain servers behind a load balancer, Prometheus and Grafana in a dedicated monitoring namespace, and the registry behind its own high-availability cluster.

What You Have Built

Across all 55 lessons, including 5 capstone projects, you have built:

  • MCP servers using every primitive: tools, resources, prompts, sampling, elicitation, roots
  • Clients for all three major LLM providers: OpenAI, Claude, and Gemini
  • Production infrastructure: Docker, Kubernetes, Nginx, Redis
  • Security stack: OAuth 2.0, RBAC, input validation, audit logging, secrets management
  • Multi-agent systems: A2A delegation, LangGraph integration, state management
  • Observability: Prometheus metrics, OpenTelemetry tracing, structured logs
  • A complete enterprise platform: registry, gateway, domain servers, management UI

MCP is the connective tissue of the AI application stack. You now know it from protocol fundamentals to enterprise deployment. Go build something important.

nJoy 😉

Lesson 54 of 55 (Capstone): Enterprise Assistant With Auth, RBAC, and Audit Logs

This capstone builds the most complete MCP application in the course: an enterprise AI assistant with OAuth 2.0 authentication, RBAC tool access control, full audit logging, rate limiting, and a multi-provider backend. It brings together patterns from every major part of the course into a single deployable system. Deploy it and you have a production-ready enterprise AI assistant that your security team can audit and your compliance team can sign off on.

Enterprise AI assistant full architecture OAuth RBAC audit logging rate limiting multi-provider MCP dark
Enterprise-grade: OAuth tokens + RBAC scope filtering + audit logs + rate limiting + multi-provider routing.

System Architecture

enterprise-assistant/
├── gateway/
│   ├── server.js          (HTTP API gateway with auth + rate limiting)
│   ├── auth.js            (OAuth 2.0 token validation, JWKS)
│   ├── rbac.js            (Role-to-scope mapping, tool filtering)
│   ├── audit.js           (Structured audit logging)
│   └── rate-limiter.js    (Per-user rate limiting with Redis)
├── agent/
│   ├── router.js          (Multi-provider routing: OpenAI/Claude/Gemini)
│   └── executor.js        (Tool loop with retry, timeout, token budget)
├── servers/
│   ├── knowledge-server.js (Knowledge base search)
│   └── actions-server.js   (Business action tools)
└── docker-compose.yml

The Gateway Server

// gateway/server.js
import express from 'express';
import { validateToken, getRolesFromToken } from './auth.js';
import { getScopeFromRoles } from './rbac.js';
import { AuditLogger } from './audit.js';
import { createRateLimiter } from './rate-limiter.js';
import { createAgent } from '../agent/router.js';
// Metrics helper from the observability lesson (import path illustrative)
import { getPrometheusMetrics } from './metrics.js';

const app = express();
app.use(express.json());

const auditLog = new AuditLogger();
const rateLimiter = createRateLimiter(60);  // 60 req/min per user

// Health check
app.get('/health', (req, res) => res.json({ status: 'ok', uptime: process.uptime() }));
app.get('/metrics', (req, res) => res.end(getPrometheusMetrics()));

// Main API endpoint
app.post('/api/ask', async (req, res) => {
  const requestId = crypto.randomUUID();

  // 1. Authenticate
  const authHeader = req.headers.authorization;
  if (!authHeader?.startsWith('Bearer ')) {
    return res.status(401).json({ error: 'Bearer token required' });
  }

  let claims;
  try {
    claims = await validateToken(authHeader.slice(7));
  } catch {
    return res.status(401).json({ error: 'Invalid token' });
  }

  // 2. Rate limit
  try {
    await rateLimiter.consume(claims.sub);
  } catch (rl) {
    res.setHeader('Retry-After', Math.ceil(rl.msBeforeNext / 1000));
    return res.status(429).json({ error: 'Rate limit exceeded' });
  }

  // 3. Determine role and scope
  const roles = getRolesFromToken(claims);
  const scope = getScopeFromRoles(roles);

  // 4. Get question
  const { question, preferredProvider } = req.body;
  if (!question?.trim()) return res.status(400).json({ error: 'question is required' });

  // 5. Build and run the agent
  const agent = await createAgent({ scope, preferredProvider });

  // 6. Run with audit logging
  await auditLog.write({
    eventId: requestId,
    eventType: 'api_request',
    actor: { userId: claims.sub, roles },
    request: { question: question.slice(0, 100) },
    scope: scope.split(' '),
  });

  try {
    const answer = await agent.run(question);

    await auditLog.write({
      eventId: requestId,
      eventType: 'api_response',
      actor: { userId: claims.sub },
      outcome: { success: true },
    });

    res.json({ answer, requestId });
  } catch (err) {
    await auditLog.write({
      eventId: requestId,
      eventType: 'api_error',
      actor: { userId: claims.sub },
      outcome: { success: false, error: err.message },
    });
    res.status(500).json({ error: 'Agent execution failed', requestId });
  } finally {
    await agent.close();
  }
});

const PORT = process.env.PORT ?? 3000;
app.listen(PORT, () => console.log(`Enterprise assistant listening on :${PORT}`));
Request flow diagram authenticate rate limit RBAC scope filter agent run audit log response dark
Request lifecycle: every request goes through 6 stages before the agent runs.

The six-stage pipeline (authenticate, rate limit, resolve roles, validate input, run agent, audit) is the same request lifecycle used by production API gateways at companies like Stripe and Shopify. Each stage can reject the request independently, and the audit log captures the outcome regardless of success or failure. This is what compliance teams actually review during security audits.

Notice that the agent is created fresh per request and closed in the finally block. This prevents one user’s MCP session state from leaking into another user’s query. It costs a bit more in connection overhead, but the isolation guarantee is worth it for a multi-tenant system.

RBAC Configuration

// gateway/rbac.js
const ROLE_SCOPES = {
  employee: 'knowledge:read',
  manager: 'knowledge:read actions:read',
  admin: 'knowledge:read knowledge:write actions:read actions:write',
};

const SCOPE_TOOLS = {
  'knowledge:read': ['search_knowledge', 'get_article', 'list_categories'],
  'knowledge:write': ['create_article', 'update_article', 'publish_article'],
  'actions:read': ['get_ticket', 'list_tickets', 'get_report'],
  'actions:write': ['create_ticket', 'update_ticket', 'trigger_alert'],
};

export function getScopeFromRoles(roles) {
  return [...new Set(roles.flatMap(r => (ROLE_SCOPES[r] ?? '').split(' ')).filter(Boolean))].join(' ');
}

export function getAllowedTools(scope, allTools) {
  const allowed = new Set(
    scope.split(' ').flatMap(s => SCOPE_TOOLS[s] ?? [])
  );
  return allTools.filter(t => allowed.has(t.name));
}

A misconfigured RBAC map is one of the most dangerous bugs in this system. If you accidentally give the employee role actions:write scope, every employee can trigger alerts and modify tickets through the AI assistant. Always test your scope mapping with unit tests, and consider adding a “dry run” mode that logs what a user would be allowed to do without actually executing anything.

Multi-Provider Agent Router

// agent/router.js - select provider based on question complexity
import { OpenAIProvider } from './providers/openai.js';
import { ClaudeProvider } from './providers/claude.js';
import { GeminiProvider } from './providers/gemini.js';
import { getAllowedTools } from '../gateway/rbac.js';

export async function createAgent({ scope, preferredProvider = 'auto' }) {
  // Load MCP servers
  const mcpClients = await connectMcpServers();
  const allTools = await aggregateTools(mcpClients);
  const scopedTools = getAllowedTools(scope, allTools);

  return {
    async run(question) {
      // Select the provider per query, so 'auto' can actually inspect the question
      const providerKey = preferredProvider === 'auto'
        ? selectProvider(question)
        : preferredProvider;
      const Provider = { openai: OpenAIProvider, claude: ClaudeProvider, gemini: GeminiProvider }[providerKey];
      if (!Provider) throw new Error(`Unknown provider: ${providerKey}`);
      const provider = new Provider({ maxTurns: 12, tokenBudget: 50_000 });
      return provider.run(question, scopedTools, mcpClients);
    },
    async close() {
      await Promise.all(mcpClients.map(c => c.close()));
    },
  };
}

The multi-provider router gives you vendor resilience. If OpenAI has an outage, you can fall back to Claude or Gemini without changing any application code. In practice, teams also use this pattern for cost optimisation – routing simple queries to cheaper models and complex analytical questions to more capable ones.

Deployment

services:
  gateway:
    build: .
    ports: ["3000:3000"]
    environment:
      OPENAI_API_KEY: ${OPENAI_API_KEY}
      ANTHROPIC_API_KEY: ${ANTHROPIC_API_KEY}
      GEMINI_API_KEY: ${GEMINI_API_KEY}
      JWKS_URL: ${JWKS_URL}
      REDIS_URL: redis://redis:6379
    depends_on: [redis]
    healthcheck:
      test: ["CMD", "wget", "-qO-", "http://localhost:3000/health"]
      interval: 30s
      timeout: 5s
      retries: 3

  redis:
    image: redis:7-alpine
    volumes: ["redis-data:/data"]

volumes:
  redis-data:

The Docker Compose file gives you a single docker compose up to launch the entire stack locally. Redis handles both rate limiting state and session caching. For production, you would swap the single Redis container for a managed service (like AWS ElastiCache or GCP Memorystore) and add TLS termination in front of the gateway.

nJoy 😉

Lesson 53 of 55 (Capstone): Multi-API Integration Hub With MCP

Real-world AI assistants need to integrate many APIs: a CRM for customer data, a ticketing system for support requests, a payment processor for billing status, a calendar for scheduling. Each of these becomes an MCP server, and the multi-provider abstraction layer from Lesson 29 routes queries to the right provider. This capstone builds a multi-API integration hub that unifies five real-world APIs behind a single MCP interface, with tool routing, error handling, and a unified context window.

Five MCP servers, one agent: the hub aggregates tools from all servers and routes calls automatically.

Project Architecture

mcp-api-hub/
├── servers/
│   ├── crm-server.js          (Customer data: search, get, update)
│   ├── tickets-server.js      (Support tickets: list, create, update)
│   ├── payments-server.js     (Billing: get_invoice, check_subscription)
│   ├── calendar-server.js     (Meetings: list, create, cancel)
│   └── analytics-server.js    (Metrics: get_report, get_trend)
├── agent/
│   └── hub-agent.js           (Multi-server MCP + OpenAI agent)
└── index.js

The key architectural decision here is one agent, many servers. Each API gets its own MCP server process, which means they are isolated – a crash in the payments server does not take down the CRM. It also means you can develop, test, and deploy each server independently, exactly like microservices.
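One way to make that isolation concrete is connecting to each server with independent retries, so a server that stays down degrades the hub rather than breaking it. A sketch, where connectWithRetry and makeClient are illustrative names, not MCP SDK APIs:

```javascript
// Sketch: connect to each server independently, degrading gracefully on failure.
// connectWithRetry and makeClient are illustrative names, not part of the MCP SDK.
async function connectWithRetry(config, makeClient, { retries = 3, delayMs = 500 } = {}) {
  for (let attempt = 1; attempt <= retries; attempt++) {
    try {
      return await makeClient(config);
    } catch (err) {
      console.warn(`[${config.id}] connect failed (attempt ${attempt}): ${err.message}`);
      if (attempt === retries) return null;  // hub continues without this server
      await new Promise(r => setTimeout(r, delayMs * attempt));  // linear backoff
    }
  }
}
```

A null return means the hub simply omits that server's tools for the session, exactly as a microservice mesh would route around an unhealthy instance.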

The Multi-Server Agent

// agent/hub-agent.js
import OpenAI from 'openai';
import { Client } from '@modelcontextprotocol/sdk/client/index.js';
import { StdioClientTransport } from '@modelcontextprotocol/sdk/client/stdio.js';

const SERVER_CONFIGS = [
  { id: 'crm', command: 'node', args: ['./servers/crm-server.js'] },
  { id: 'tickets', command: 'node', args: ['./servers/tickets-server.js'] },
  { id: 'payments', command: 'node', args: ['./servers/payments-server.js'] },
  { id: 'calendar', command: 'node', args: ['./servers/calendar-server.js'] },
  { id: 'analytics', command: 'node', args: ['./servers/analytics-server.js'] },
];

export async function createHubAgent() {
  const openai = new OpenAI();
  const connections = new Map();
  const allTools = [];

  // Connect to all servers in parallel
  await Promise.all(SERVER_CONFIGS.map(async config => {
    const transport = new StdioClientTransport({ command: config.command, args: config.args, env: process.env });
    const client = new Client({ name: 'hub-agent', version: '1.0.0' });
    await client.connect(transport);
    connections.set(config.id, client);

    const { tools } = await client.listTools();
    for (const tool of tools) {
      allTools.push({
        serverId: config.id,
        tool,
        openaiFormat: {
          type: 'function',
          function: { name: tool.name, description: `[${config.id}] ${tool.description}`, parameters: tool.inputSchema, strict: true },
        },
      });
    }
  }));

  console.log(`Hub connected to ${connections.size} servers, ${allTools.length} tools total`);

  // Find which server owns a tool
  const toolIndex = new Map(allTools.map(t => [t.tool.name, t]));

  return {
    async query(userMessage) {
      const messages = [
        {
          role: 'system',
          content: `You are a comprehensive business assistant with access to CRM, ticketing, payments, calendar, and analytics systems.
Tools are prefixed with their system: [crm], [tickets], [payments], [calendar], [analytics].
When answering questions, use tools from multiple systems as needed to give a complete answer.
Always check multiple related systems when investigating customer issues.`,
        },
        { role: 'user', content: userMessage },
      ];

      const openaiTools = allTools.map(t => t.openaiFormat);
      let turns = 0;

      while (true) {
        const response = await openai.chat.completions.create({
          model: 'gpt-4o', messages, tools: openaiTools, tool_choice: 'auto',
          parallel_tool_calls: true,
        });
        const msg = response.choices[0].message;
        messages.push(msg);

        if (response.choices[0].finish_reason !== 'tool_calls') return msg.content;
        if (++turns > 15) throw new Error('Max turns exceeded');

        const results = await Promise.all(msg.tool_calls.map(async tc => {
          const entry = toolIndex.get(tc.function.name);
          if (!entry) {
            return { role: 'tool', tool_call_id: tc.id, content: `Tool '${tc.function.name}' not found` };
          }
          const client = connections.get(entry.serverId);
          const args = JSON.parse(tc.function.arguments);
          const result = await client.callTool({ name: tc.function.name, arguments: args });
          const text = result.content.filter(c => c.type === 'text').map(c => c.text).join('\n');
          return { role: 'tool', tool_call_id: tc.id, content: text };
        }));
        messages.push(...results);
      }
    },

    async close() {
      await Promise.all([...connections.values()].map(c => c.close()));
    },
  };
}
Parallel tool calling: GPT-4o queries CRM, tickets, and payments simultaneously for a complete customer view.

The parallel_tool_calls: true flag is critical for performance. Without it, the model would call CRM, wait for the response, then call tickets, wait again, then call payments. With parallel calls, all three fire simultaneously and the total latency is the slowest server, not the sum of all servers. For customer-facing support bots, this can cut response time from 6 seconds to 2.

One thing that can go wrong here: tool name collisions. If both the CRM server and the tickets server expose a tool called search, the toolIndex map will silently overwrite one with the other. The description prefix ([crm], [tickets]) helps the model distinguish them, but the routing map needs unique names. Namespace your tool names (like crm_search, tickets_search) to avoid this.
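A sketch of that namespacing applied at aggregation time – namespaceTools is an illustrative helper, not part of the hub code above:

```javascript
// Sketch: namespace tools at aggregation time so identical names cannot collide.
// namespaceTools is an illustrative helper, not part of the MCP SDK.
function namespaceTools(serverId, tools) {
  return tools.map(tool => ({
    ...tool,
    name: `${serverId}_${tool.name}`,   // e.g. crm_search, tickets_search
    originalName: tool.name,            // kept so calls can be routed back to the server
  }));
}

const all = [
  ...namespaceTools('crm', [{ name: 'search' }]),
  ...namespaceTools('tickets', [{ name: 'search' }]),
];
console.log(all.map(t => t.name));  // [ 'crm_search', 'tickets_search' ]
```

With unique names in the routing map, the server still receives the tool under its originalName, so no server-side changes are needed.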

Sample CRM Server (Condensed)

// servers/crm-server.js
import { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js';
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
import { z } from 'zod';

// crmApi: your wrapper around the CRM's REST API (implementation omitted in this condensed listing)
const server = new McpServer({ name: 'crm-server', version: '1.0.0' });

server.tool('search_customers', {
  query: z.string().min(1).max(100),
  limit: z.number().int().min(1).max(20).default(10),
}, async ({ query, limit }) => {
  const customers = await crmApi.search(query, limit);
  return { content: [{ type: 'text', text: JSON.stringify(customers) }] };
});

server.tool('get_customer', {
  id: z.string().uuid(),
}, async ({ id }) => {
  const customer = await crmApi.getById(id);
  if (!customer) return { content: [{ type: 'text', text: 'Customer not found' }], isError: true };
  return { content: [{ type: 'text', text: JSON.stringify(customer) }] };
});

const transport = new StdioServerTransport();
await server.connect(transport);

Example Usage

const agent = await createHubAgent();

const answer = await agent.query(
  'Customer john.smith@acme.com says their subscription renewal failed last week. ' +
  'What is their account status, do they have any open support tickets, ' +
  'and what does their payment history look like?'
);
// Agent will call: search_customers, get_subscription, list_tickets, get_payment_history
// in parallel, then synthesize a complete answer

console.log(answer);
await agent.close();

This hub pattern is how enterprise support platforms like Zendesk and Intercom are building their AI assistants. A single user question like “why was this customer charged twice?” requires data from billing, CRM, and ticketing systems simultaneously. Without MCP’s standardized tool interface, you would need custom integration code for every API combination.

nJoy 😉

Lesson 52 of 55 (Capstone): Filesystem Agent With Claude and MCP

This capstone builds a filesystem agent powered by Claude 3.7 Sonnet. The agent can read files, search codebases, analyze code structure, and refactor files under user supervision. It applies the security patterns from Part VIII: roots for filesystem boundaries, tool safety for path validation, and confirmation-based elicitation for destructive file writes. The result is a safe, auditable codebase assistant that you can trust with your actual project files.

Filesystem agent: Claude plans file operations, MCP server executes them within roots-defined boundaries.

The Filesystem MCP Server

// servers/fs-server.js
import { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js';
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
import { RootsListChangedNotificationSchema } from '@modelcontextprotocol/sdk/types.js';
import { z } from 'zod';
import fs from 'node:fs/promises';
import path from 'node:path';

const server = new McpServer({ name: 'fs-server', version: '1.0.0' });

// Track the allowed roots advertised by the client (via the roots capability)
let allowedRoots = [];
server.server.setNotificationHandler(RootsListChangedNotificationSchema, async () => {
  const { roots } = await server.server.listRoots();
  allowedRoots = roots.map(r => r.uri.replace('file://', ''));
});

// Path safety: ensure the path is within an allowed root
function validatePath(filePath) {
  const resolved = path.resolve(filePath);
  if (allowedRoots.length === 0) {
    throw new Error('No filesystem roots configured');
  }
  // Compare against root + separator so '/project' does not also allow '/project-secret'
  const isAllowed = allowedRoots.some(root => {
    const r = path.resolve(root);
    return resolved === r || resolved.startsWith(r + path.sep);
  });
  if (!isAllowed) {
    throw new Error(`Path '${resolved}' is outside allowed roots: ${allowedRoots.join(', ')}`);
  }
  return resolved;
}

// Tool: Read a file
server.tool('read_file', {
  path: z.string().min(1).max(512).refine(p => !p.includes('..'), 'Path traversal not allowed'),
}, async ({ path: filePath }) => {
  const safePath = validatePath(filePath);
  try {
    const content = await fs.readFile(safePath, 'utf8');
    const lines = content.split('\n').length;
    return { content: [{ type: 'text', text: `// ${safePath} (${lines} lines)\n${content}` }] };
  } catch (err) {
    return { content: [{ type: 'text', text: `Cannot read file: ${err.message}` }], isError: true };
  }
});

// Tool: List directory
server.tool('list_directory', {
  path: z.string().min(1).max(512),
  recursive: z.boolean().default(false),
}, async ({ path: dirPath, recursive }) => {
  const safePath = validatePath(dirPath);
  const entries = await listDir(safePath, recursive, 0, []);
  return { content: [{ type: 'text', text: entries.join('\n') }] };
});

async function listDir(dirPath, recursive, depth, results) {
  if (depth > 3) return results;  // Cap recursion depth (levels 0–3)
  const entries = await fs.readdir(dirPath, { withFileTypes: true });
  for (const entry of entries) {
    if (entry.name.startsWith('.') || entry.name === 'node_modules') continue;
    const full = path.join(dirPath, entry.name);
    results.push(`${'  '.repeat(depth)}${entry.isDirectory() ? '[DIR] ' : ''}${entry.name}`);
    if (recursive && entry.isDirectory()) await listDir(full, recursive, depth + 1, results);
  }
  return results;
}

// Tool: Search for text in files
server.tool('search_files', {
  directory: z.string(),
  pattern: z.string().max(200),
  file_extension: z.string().optional(),
}, async ({ directory, pattern, file_extension }) => {
  const safePath = validatePath(directory);
  const regex = new RegExp(pattern, 'i');
  const matches = [];
  await searchFiles(safePath, regex, file_extension, matches);
  return { content: [{ type: 'text', text: matches.slice(0, 50).join('\n') || 'No matches found' }] };
});

async function searchFiles(dirPath, regex, ext, matches) {
  const entries = await fs.readdir(dirPath, { withFileTypes: true });
  for (const entry of entries) {
    if (entry.name.startsWith('.') || entry.name === 'node_modules') continue;
    const full = path.join(dirPath, entry.name);
    if (entry.isDirectory()) {
      await searchFiles(full, regex, ext, matches);
    } else if (!ext || entry.name.endsWith(ext)) {
      const content = await fs.readFile(full, 'utf8').catch(() => '');
      content.split('\n').forEach((line, i) => {
        if (regex.test(line)) matches.push(`${full}:${i + 1}: ${line.trim()}`);
      });
    }
  }
}

// Tool: Write file (requires confirmation via elicitation)
server.tool('write_file', {
  path: z.string().min(1).max(512),
  content: z.string().max(100_000),
}, async ({ path: filePath, content }) => {
  const safePath = validatePath(filePath);

  // Check if file already exists
  const exists = await fs.access(safePath).then(() => true).catch(() => false);

  if (exists) {
    // Ask the client for explicit confirmation before overwriting
    const confirm = await server.server.elicitInput({
      message: `This will overwrite '${safePath}'. Confirm?`,
      requestedSchema: { type: 'object', properties: { confirm: { type: 'boolean' } } },
    });
    if (confirm.action !== 'accept' || !confirm.content?.confirm) {
      return { content: [{ type: 'text', text: 'Write cancelled.' }] };
    }
  }

  await fs.mkdir(path.dirname(safePath), { recursive: true });
  await fs.writeFile(safePath, content, 'utf8');
  return { content: [{ type: 'text', text: `Written: ${safePath}` }] };
});

const transport = new StdioServerTransport();
await server.connect(transport);
Four filesystem tools with layered safety: roots validation, path sanitization, and elicitation for writes.

The layered validation here is worth studying. The Zod schema rejects path traversal (..) at the input level, validatePath enforces the roots boundary, and the write_file tool adds elicitation as a final gate. Each layer catches different attack vectors: malicious input, logic bugs, and unintended overwrites. Removing any single layer would leave a real gap.

If no roots are configured, every operation fails immediately. This is a deliberate fail-closed design. In production, you never want a misconfiguration to silently grant full filesystem access – it is far safer to break loudly than to expose /etc/passwd because someone forgot to set the project root.

The Claude Filesystem Agent

// agent/fs-agent.js
import Anthropic from '@anthropic-ai/sdk';
import { Client } from '@modelcontextprotocol/sdk/client/index.js';
import { StdioClientTransport } from '@modelcontextprotocol/sdk/client/stdio.js';

const anthropic = new Anthropic();

export async function createFilesystemAgent(projectRoot) {
  const transport = new StdioClientTransport({
    command: 'node',
    args: ['./servers/fs-server.js'],
    env: { ...process.env },
  });
  const mcp = new Client(
    { name: 'fs-agent', version: '1.0.0' },
    { capabilities: { roots: { listChanged: true } } }  // Declare roots support (second argument)
  );
  await mcp.connect(transport);

  // Set the allowed root to the project directory
  // (roots are set by the client, enforced by the server)
  console.log(`Filesystem agent initialized. Root: ${projectRoot}`);

  const { tools: mcpTools } = await mcp.listTools();
  const tools = mcpTools.map(t => ({
    name: t.name, description: t.description, input_schema: t.inputSchema,
  }));

  return {
    async ask(question) {
      const messages = [{ role: 'user', content: question }];
      let turns = 0;

      while (true) {
        const response = await anthropic.messages.create({
          model: 'claude-3-7-sonnet-20250219',
          max_tokens: 4096,
          system: `You are a codebase assistant. The project root is ${projectRoot}.
Use read_file to examine files, list_directory to explore structure, search_files to find code.
Only use write_file when explicitly asked to modify files.`,
          messages,
          tools,
        });
        messages.push({ role: 'assistant', content: response.content });

        if (response.stop_reason !== 'tool_use') {
          return response.content.filter(b => b.type === 'text').map(b => b.text).join('');
        }

        if (++turns > 15) throw new Error('Max turns exceeded');

        const toolResults = await Promise.all(
          response.content.filter(b => b.type === 'tool_use').map(async block => {
            const result = await mcp.callTool({ name: block.name, arguments: block.input });
            const text = result.content.filter(c => c.type === 'text').map(c => c.text).join('\n');
            return { type: 'tool_result', tool_use_id: block.id, content: text };
          })
        );
        messages.push({ role: 'user', content: toolResults });
      }
    },
    async close() { await mcp.close(); },
  };
}

This agent pattern is the same one powering tools like Cursor, Windsurf, and Claude Code. A model reads your files, understands the structure, and proposes edits – but the human confirms destructive writes. The elicitation step in write_file is what separates a helpful assistant from a dangerous one.

One subtle risk: the search_files tool returns up to 50 matches, and large codebases could easily produce hundreds. If the model receives all 50 results in a single tool response, that burns a significant chunk of the context window. Consider adding pagination or relevance ranking if you deploy this against a large repository.
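A sketch of simple offset-based pagination – paginate is a hypothetical helper, not part of the server above:

```javascript
// Sketch: offset-based pagination for search results (paginate is a hypothetical helper)
function paginate(matches, { offset = 0, limit = 20 } = {}) {
  const page = matches.slice(offset, offset + limit);
  const remaining = Math.max(0, matches.length - (offset + limit));
  // Tell the model how many matches remain and how to fetch the next page
  const footer = remaining > 0
    ? `\n[${remaining} more matches – call again with offset=${offset + limit}]`
    : '';
  return page.join('\n') + footer;
}
```

Exposing offset as an optional tool parameter lets the model request more results only when the first page is insufficient, keeping typical responses small.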

What to Extend

  • Add a run_tests tool that executes node --test and returns the output – the agent can then read failing test files and suggest fixes.
  • Add Claude’s extended thinking for architectural analysis queries (Lesson 21 pattern).
  • Add the prompt caching pattern from Lesson 23 to cache the system prompt for long analysis sessions.

nJoy 😉

Lesson 51 of 55 (Capstone): PostgreSQL Query Agent With OpenAI and MCP

This capstone project builds a complete, production-ready PostgreSQL query agent using OpenAI GPT-4o and MCP. By the end you will have a fully functional system where a user can ask questions in natural language and the agent translates them to safe, parameterized SQL queries, executes them against a real PostgreSQL database, formats the results, and explains its reasoning. The project incorporates lessons from throughout the course: schema validation, tool safety, audit logging, retry logic, and graceful shutdown.

The database query agent: user asks a question, GPT-4o plans SQL queries, MCP tools execute them safely.

Project Structure

mcp-db-agent/
├── package.json         (type: module, node 22+)
├── .env                 (DATABASE_URL, OPENAI_API_KEY)
├── servers/
│   └── db-server.js     (MCP server with database tools)
├── agent/
│   └── query-agent.js   (OpenAI + MCP client loop)
├── lib/
│   ├── db.js            (PostgreSQL connection pool)
│   ├── audit.js         (Audit logger)
│   └── safety.js        (SQL safety checks)
└── index.js             (CLI entry point)

The MCP Database Server

// servers/db-server.js
import { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js';
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
import { z } from 'zod';
import pg from 'pg';

const pool = new pg.Pool({ connectionString: process.env.DATABASE_URL });
const server = new McpServer({ name: 'db-server', version: '1.0.0' });

// Tool 1: List available tables
server.tool('list_tables', {}, async () => {
  const { rows } = await pool.query(
    "SELECT table_name, table_type FROM information_schema.tables WHERE table_schema = 'public' ORDER BY table_name"
  );
  return { content: [{ type: 'text', text: JSON.stringify(rows) }] };
});

// Tool 2: Describe a table's columns
server.tool('describe_table', {
  table_name: z.string().regex(/^[a-zA-Z_][a-zA-Z0-9_]*$/, 'Invalid table name'),
}, async ({ table_name }) => {
  const { rows } = await pool.query(
    'SELECT column_name, data_type, is_nullable, column_default FROM information_schema.columns WHERE table_schema = $1 AND table_name = $2 ORDER BY ordinal_position',
    ['public', table_name]
  );
  if (rows.length === 0) {
    return { content: [{ type: 'text', text: `Table '${table_name}' not found` }], isError: true };
  }
  return { content: [{ type: 'text', text: JSON.stringify(rows) }] };
});

// Tool 3: Execute a read-only query (SELECT only)
server.tool('execute_query', {
  sql: z.string().max(2000),
  params: z.array(z.union([z.string(), z.number(), z.null()])).max(20).default([]),
}, async ({ sql, params }) => {
  // Safety check: only allow SELECT statements
  const normalizedSql = sql.trim().toUpperCase();
  if (!normalizedSql.startsWith('SELECT') && !normalizedSql.startsWith('WITH')) {
    return { content: [{ type: 'text', text: 'Only SELECT queries are allowed' }], isError: true };
  }

  // Forbid dangerous keywords
  const dangerous = ['DROP', 'DELETE', 'UPDATE', 'INSERT', 'ALTER', 'TRUNCATE', 'GRANT', 'REVOKE'];
  if (dangerous.some(kw => normalizedSql.includes(kw))) {
    return { content: [{ type: 'text', text: 'Query contains forbidden keywords' }], isError: true };
  }

  try {
    const { rows, rowCount } = await pool.query(sql, params);
    return {
      content: [{ type: 'text', text: JSON.stringify({ rows: rows.slice(0, 100), total: rowCount, truncated: rowCount > 100 }) }],
    };
  } catch (err) {
    return { content: [{ type: 'text', text: `Query failed: ${err.message}` }], isError: true };
  }
});

// Tool 4: Get row count (for planning queries)
server.tool('count_rows', {
  table_name: z.string().regex(/^[a-zA-Z_][a-zA-Z0-9_]*$/),
  where_clause: z.string().max(500).optional(),
}, async ({ table_name, where_clause }) => {
  // where_clause is interpolated into the SQL, so apply the same keyword blocklist
  // as execute_query – still brittle, but it closes the most obvious injection path
  if (where_clause) {
    const upper = where_clause.toUpperCase();
    const forbidden = ['DROP', 'DELETE', 'UPDATE', 'INSERT', 'ALTER', 'TRUNCATE', 'GRANT', 'REVOKE', ';', '--'];
    if (forbidden.some(kw => upper.includes(kw))) {
      return { content: [{ type: 'text', text: 'WHERE clause contains forbidden keywords' }], isError: true };
    }
  }
  const sql = where_clause
    ? `SELECT COUNT(*) as count FROM ${table_name} WHERE ${where_clause}`
    : `SELECT COUNT(*) as count FROM ${table_name}`;
  const { rows } = await pool.query(sql);
  return { content: [{ type: 'text', text: JSON.stringify(rows[0]) }] };
});

const transport = new StdioServerTransport();
await server.connect(transport);
Four tools: schema discovery (list, describe), safe query execution, and row counting for query planning.

This four-tool design is intentional: it mirrors how a careful human analyst works. Rather than handing the model a single “run any SQL” tool, you force it through a discovery workflow – list tables, inspect columns, then query. This staged approach dramatically reduces hallucinated column names and malformed joins because the model sees the real schema before writing SQL.

Watch the safety check in execute_query closely. The keyword blocklist approach is simple but brittle – a query like SELECT * FROM updates would be rejected because “UPDATE” appears in the table name. In a production system, you would use a proper SQL parser or run queries through a read-only database user instead of string matching.
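You can see the false positive directly, and a word-boundary regex is one incremental improvement – though, as noted, a real SQL parser or a read-only database role remains the robust answer:

```javascript
// Sketch: word-boundary matching avoids the simplest false positives of includes(),
// but is still not a substitute for a parser or a read-only DB user
const FORBIDDEN = /\b(DROP|DELETE|UPDATE|INSERT|ALTER|TRUNCATE|GRANT|REVOKE)\b/i;

const isForbidden = (sql) => FORBIDDEN.test(sql);

isForbidden('SELECT * FROM updates');           // false – 'updates' is not the keyword UPDATE
isForbidden('DELETE FROM users');               // true
isForbidden("SELECT 'please update my plan'");  // true – keywords inside string literals still trip it
```

The last case shows why string matching can never be fully precise: only a parser knows whether UPDATE is a statement or data.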

The OpenAI Query Agent

// agent/query-agent.js
import OpenAI from 'openai';
import { Client } from '@modelcontextprotocol/sdk/client/index.js';
import { StdioClientTransport } from '@modelcontextprotocol/sdk/client/stdio.js';

const openai = new OpenAI();

export async function createQueryAgent() {
  const transport = new StdioClientTransport({
    command: 'node',
    args: ['./servers/db-server.js'],
    env: { ...process.env },
  });
  const mcp = new Client({ name: 'query-agent', version: '1.0.0' });
  await mcp.connect(transport);
  const { tools: mcpTools } = await mcp.listTools();

  const tools = mcpTools.map(t => ({
    type: 'function',
    function: { name: t.name, description: t.description, parameters: t.inputSchema, strict: true },
  }));

  const SYSTEM_PROMPT = `You are a precise database analyst.
You have access to a PostgreSQL database. To answer questions:
1. First call list_tables to see available tables
2. Call describe_table for tables relevant to the question
3. Plan a safe SELECT query (use parameters for any user values)
4. Call execute_query with the query and parameters
5. Present results clearly with a brief interpretation

Always use parameterized queries. Never build SQL by string concatenation.
If a question cannot be answered with a SELECT, say so clearly.`;

  return {
    async query(userQuestion) {
      const messages = [
        { role: 'system', content: SYSTEM_PROMPT },
        { role: 'user', content: userQuestion },
      ];
      let turns = 0;

      while (true) {
        const response = await openai.chat.completions.create({
          model: 'gpt-4o', messages, tools, tool_choice: 'auto',
        });
        const msg = response.choices[0].message;
        messages.push(msg);

        if (response.choices[0].finish_reason !== 'tool_calls') {
          return msg.content;
        }

        if (++turns > 10) throw new Error('Max turns exceeded');

        const results = await Promise.all(msg.tool_calls.map(async tc => {
          const args = JSON.parse(tc.function.arguments);
          const result = await mcp.callTool({ name: tc.function.name, arguments: args });
          const text = result.content.filter(c => c.type === 'text').map(c => c.text).join('\n');
          return { role: 'tool', tool_call_id: tc.id, content: text };
        }));
        messages.push(...results);
      }
    },
    async close() { await mcp.close(); },
  };
}

The agent loop here follows the same pattern you have seen throughout the course, but notice the turn cap of 10. Without it, a confusing question could cause the model to loop indefinitely – calling tools, misinterpreting results, and calling more tools. In a billing-sensitive environment, a runaway loop like that translates directly into unexpected API costs.
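A token budget alongside the turn cap bounds cost even more directly. A sketch, where createBudget is an illustrative helper rather than part of the agent above:

```javascript
// Sketch: track tokens as well as turns (createBudget is an illustrative helper)
function createBudget({ maxTurns = 10, maxTokens = 50_000 } = {}) {
  let turns = 0;
  let tokens = 0;
  return {
    track(response) {
      turns += 1;
      tokens += response.usage?.total_tokens ?? 0;  // Chat Completions responses report usage
      if (turns > maxTurns) throw new Error(`Max turns exceeded (${maxTurns})`);
      if (tokens > maxTokens) throw new Error(`Token budget exceeded: ${tokens}/${maxTokens}`);
    },
  };
}
```

Calling budget.track(response) immediately after each completions.create turns a runaway loop into a bounded, predictable failure.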

Running the Agent

// index.js
import { createQueryAgent } from './agent/query-agent.js';
import readline from 'node:readline';

const agent = await createQueryAgent();
const rl = readline.createInterface({ input: process.stdin, output: process.stdout });

console.log('PostgreSQL Query Agent ready. Ask anything about your data.');
console.log('Type "exit" to quit.\n');

rl.on('line', async (line) => {
  if (line.trim() === 'exit') { await agent.close(); process.exit(0); }
  if (!line.trim()) return;
  try {
    const answer = await agent.query(line);
    console.log('\n' + answer + '\n');
  } catch (err) {
    console.error('Error:', err.message);
  }
});

Teams commonly deploy this exact pattern as an internal analytics bot on Slack or Teams. A support engineer asks “how many orders shipped last week from warehouse 3?” and gets an answer in seconds, without needing SQL skills or database access credentials. The read-only constraint means the bot is safe to hand to non-technical staff.

What to Extend

  • Add the audit logging middleware from Lesson 35 to log every execute_query call with the SQL, user, and result count.
  • Add a sample_rows tool that returns 3 rows from any table – helps the model understand data format before writing queries.
  • Connect it to your real production database with a read-only service account.

nJoy 😉

Lesson 50 of 55: Custom MCP Transports and Protocol Extensions in Node.js

The MCP SDK ships with two built-in transports: stdio and Streamable HTTP. These cover the vast majority of use cases. But sometimes you need something different: an in-process transport for testing, a WebSocket transport for browser environments, an IPC transport for Electron apps, or a transport that encrypts the JSON-RPC stream at the application layer. The SDK’s transport interface is deliberately minimal, making it straightforward to implement your own. This lesson covers the interface, two reference implementations, and practical extension points.

The Transport interface is three methods: start, send, and close. Any communication channel can become an MCP transport.

The Transport Interface

// The MCP SDK Transport interface (TypeScript definition for reference)
// interface Transport {
//   start(): Promise<void>;
//   send(message: JSONRPCMessage): Promise<void>;
//   close(): Promise<void>;
//   onmessage?: (message: JSONRPCMessage) => void;
//   onerror?: (error: Error) => void;
//   onclose?: () => void;
// }

// In JavaScript, implement the same shape:
class CustomTransport {
  onmessage = null;   // Called when a message is received
  onerror = null;     // Called on transport errors
  onclose = null;     // Called when the transport closes

  async start() {
    // Initialize the underlying communication channel
  }

  async send(message) {
    // Send a JSONRPCMessage object
  }

  async close() {
    // Clean up the channel
  }
}

The interface is intentionally minimal: three async methods and three event callbacks. This simplicity is the point. Any communication channel that can send and receive JSON objects – WebSockets, Unix domain sockets, shared memory, even a pair of browser MessageChannels – can become an MCP transport by implementing these six members.
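The same minimalism makes decorators easy: because a transport is just six members, you can wrap any existing one to add cross-cutting behaviour. A sketch of a logging wrapper – withLogging is illustrative, not part of the SDK:

```javascript
// Sketch: a logging decorator over any transport (withLogging is illustrative, not SDK API)
function withLogging(inner, label, log = console.log) {
  const wrapped = {
    onmessage: null, onerror: null, onclose: null,
    async start() {
      // Intercept inbound traffic, then forward to whoever attached to the wrapper
      inner.onmessage = msg => { log(`[${label}] recv`, msg.method ?? `#${msg.id}`); wrapped.onmessage?.(msg); };
      inner.onerror = err => wrapped.onerror?.(err);
      inner.onclose = () => wrapped.onclose?.();
      await inner.start();
    },
    async send(msg) {
      log(`[${label}] send`, msg.method ?? `#${msg.id}`);
      await inner.send(msg);
    },
    async close() { await inner.close(); },
  };
  return wrapped;
}
```

The same shape works for application-layer encryption or metrics: transform in send, untransform in the onmessage interceptor, and the client and server never know the wrapper exists.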

In-Process Transport for Testing

An in-process transport connects a client directly to a server in the same Node.js process. Essential for integration tests without spawning subprocesses:

// in-process-transport.js

export function createInProcessTransport() {
  let clientTransport, serverTransport;

  clientTransport = {
    onmessage: null, onerror: null, onclose: null,
    async start() {},
    async send(msg) {
      // Route to server
      if (serverTransport.onmessage) serverTransport.onmessage(msg);
    },
    async close() {
      if (clientTransport.onclose) clientTransport.onclose();
      if (serverTransport.onclose) serverTransport.onclose();
    },
  };

  serverTransport = {
    onmessage: null, onerror: null, onclose: null,
    async start() {},
    async send(msg) {
      // Route to client
      if (clientTransport.onmessage) clientTransport.onmessage(msg);
    },
    async close() {
      if (clientTransport.onclose) clientTransport.onclose();
      if (serverTransport.onclose) serverTransport.onclose();
    },
  };

  return { clientTransport, serverTransport };
}

// Usage in tests (buildServer() is your own helper that constructs
// an McpServer with your tools registered):
import assert from 'node:assert';
import { test } from 'node:test';
import { Client } from '@modelcontextprotocol/sdk/client/index.js';
import { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js';
import { createInProcessTransport } from './in-process-transport.js';

test('in-process round trip', async (t) => {
  const { clientTransport, serverTransport } = createInProcessTransport();
  const server = buildServer();
  const client = new Client({ name: 'test', version: '1.0.0' });

  await server.connect(serverTransport);
  await client.connect(clientTransport);

  const { tools } = await client.listTools();
  assert.ok(tools.length > 0);

  await client.close();
});

This in-process transport eliminates the main pain point of MCP integration tests: subprocess management. No ports to allocate, no processes to spawn and kill, no race conditions between server startup and client connection. Tests using this pattern typically run 10-50x faster than their subprocess equivalents.

In-process transport diagram client and server connected directly in same process for testing no network dark
In-process transport: no network, no subprocess, instant round trip – ideal for unit and integration testing.

WebSocket Transport

npm install ws
// websocket-transport.js - client side
import WebSocket from 'ws';

export class WebSocketClientTransport {
  #url;
  #ws = null;
  onmessage = null;
  onerror = null;
  onclose = null;

  constructor(url) {
    this.#url = url;
  }

  async start() {
    return new Promise((resolve, reject) => {
      this.#ws = new WebSocket(this.#url);
      this.#ws.once('open', resolve);
      this.#ws.once('error', reject);
      this.#ws.on('message', (data) => {
        try {
          const msg = JSON.parse(data.toString());
          if (this.onmessage) this.onmessage(msg);
        } catch (err) {
          if (this.onerror) this.onerror(err);
        }
      });
      this.#ws.on('close', () => {
        if (this.onclose) this.onclose();
      });
      this.#ws.on('error', (err) => {
        if (this.onerror) this.onerror(err);
      });
    });
  }

  async send(message) {
    this.#ws.send(JSON.stringify(message));
  }

  async close() {
    this.#ws?.close();
  }
}

// WebSocket server transport
export class WebSocketServerTransport {
  #socket;
  onmessage = null;
  onerror = null;
  onclose = null;

  constructor(socket) {
    this.#socket = socket;
    socket.on('message', (data) => {
      try {
        const msg = JSON.parse(data.toString());
        if (this.onmessage) this.onmessage(msg);
      } catch (err) {
        if (this.onerror) this.onerror(err);
      }
    });
    socket.on('close', () => {
      if (this.onclose) this.onclose();
    });
  }

  async start() {}

  async send(message) {
    this.#socket.send(JSON.stringify(message));
  }

  async close() {
    this.#socket.close();
  }
}

// Server side: wrap each incoming socket from ws.WebSocketServer.
// buildMcpServer() is your own factory that registers tools on an McpServer.
import { WebSocketServer } from 'ws';
import { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js';

const wss = new WebSocketServer({ port: 9000 });
wss.on('connection', async (socket) => {
  const transport = new WebSocketServerTransport(socket);
  const server = buildMcpServer();
  await server.connect(transport);
});

WebSocket transport is the natural choice when your MCP client runs in a browser. Unlike Streamable HTTP, which sends each client message as a separate HTTP POST, a WebSocket keeps a single persistent bidirectional channel open. The trade-off is that WebSocket connections are harder to load-balance (no standard sticky-session mechanism) and the transport is not part of the official MCP spec, so you take on compatibility risk.
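One practical gap in the client transport above is reconnection: the sketch simply gives up when the socket closes. The usual fix is to re-dial with exponential backoff after each close event. The helper below is purely illustrative (the function name and defaults are mine, not part of any spec); wire it into `start()` by iterating over the schedule until a connection succeeds.

```javascript
// Sketch: compute an exponential backoff schedule for reconnect attempts.
// All defaults here are illustrative choices, not MCP requirements.
function backoffDelays({ base = 500, factor = 2, max = 30_000, attempts = 6 } = {}) {
  const delays = [];
  let delay = base;
  for (let i = 0; i < attempts; i++) {
    delays.push(Math.min(delay, max)); // cap each wait at `max` ms
    delay *= factor;
  }
  return delays;
}
```

With the defaults this yields waits of 500 ms up to 16 s; capping at `max` prevents the schedule from growing unbounded on long outages.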

Protocol Extensions: Custom Methods

Beyond custom transports, MCP’s JSON-RPC foundation lets you add entirely new methods outside the spec. Prefixing them with your company namespace (like com.mycompany/) avoids collisions with future spec additions. This is useful for operational tooling – metrics, health checks, debug endpoints – that your internal clients need but that do not belong in the standard tool/resource model.

// MCP allows custom methods beyond the spec - they are prefixed with your namespace
// Use this for proprietary extensions that are specific to your deployment

// Server side: handle a custom method. The SDK's setRequestHandler takes
// a Zod schema whose `method` field is a literal string.
import { z } from 'zod';

const GetServerMetricsRequestSchema = z.object({
  method: z.literal('com.mycompany/getServerMetrics'),
  params: z.optional(z.object({}).passthrough()),
});

server.server.setRequestHandler(GetServerMetricsRequestSchema, async (request) => {
  return {
    uptime: process.uptime(),
    activeSessions: sessionStore.size,
    memoryMB: Math.round(process.memoryUsage().heapUsed / 1024 / 1024),
  };
});

// Client side: call a custom method. The second argument is a Zod schema
// used to validate the result - a passthrough object accepts any shape.
const metrics = await client.request(
  { method: 'com.mycompany/getServerMetrics', params: {} },
  z.object({}).passthrough()
);

One thing to watch out for with custom methods: they are invisible to standard MCP clients. If you add com.mycompany/getServerMetrics, only clients you control will know it exists. Standard MCP clients will not discover or call these methods via listTools, since they are not tools. Use them for internal operational purposes, not for functionality you expect third-party clients to use.

The extensions Capability Field

New in Draft – This feature is in the Draft spec and may be finalised in a future revision.

The Draft specification adds an extensions field to both ClientCapabilities and ServerCapabilities. This provides a standardised place to advertise optional protocol extensions beyond the core spec, replacing the ad-hoc approach of custom methods and namespaced capabilities.

// Server declaring support for a custom extension during initialization
{
  capabilities: {
    tools: {},
    resources: {},
    extensions: {
      'com.mycompany/streaming-progress': {
        version: '1.0.0',
      },
      'com.mycompany/team-collaboration': {
        version: '2.1.0',
      },
    },
  },
}

// Client checking for extension support
const serverCaps = client.getServerCapabilities();
if (serverCaps?.extensions?.['com.mycompany/streaming-progress']) {
  // Enable the streaming progress UI
}

The extensions field gives custom methods a discoverable surface. Instead of blindly calling com.mycompany/getServerMetrics and hoping it exists, a client can check capabilities.extensions during initialisation and adapt its behaviour. Namespace your extensions with a reverse-domain prefix (like Java packages) to avoid collisions with future spec additions or other vendors.
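A small guard makes that capability check reusable across your client code. The helper below is mine, not SDK API; it assumes extensions follow the usual semver contract, so only the major version matters for compatibility.

```javascript
// Sketch: check whether the server advertised a given extension at a
// compatible major version. Returns false when the field is absent.
function supportsExtension(serverCaps, name, requiredMajor) {
  const ext = serverCaps?.extensions?.[name];
  if (!ext?.version) return false;
  const major = Number.parseInt(ext.version.split('.')[0], 10);
  return major === requiredMajor;
}
```

Call it once after initialisation and branch your feature code on the result, rather than re-reading capabilities in every handler.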

What to Build Next

  • Replace subprocess spawning in your integration tests with the in-process transport. Measure the test speedup.
  • If you have a browser-based MCP client, implement the WebSocket transport and test it against your existing MCP server with a WebSocket adapter.

nJoy 😉

Lesson 49 of 55: MCP Protocol Versioning, Compatibility, and Migration

The MCP specification evolves. New capabilities are added; some older mechanisms are deprecated; breaking changes occasionally ship. Building MCP servers that handle protocol version negotiation correctly means your clients and servers can interoperate across version boundaries without hard dependencies on a single spec revision. This lesson covers how MCP versioning works, how to negotiate capabilities with older clients, how to write migration guides when your own server schema changes, and the stability guarantees you can rely on from Anthropic.

MCP protocol versioning negotiation diagram client offering versions server selecting compatible version dark
MCP version negotiation: client offers supported versions, server selects the best match.

How MCP Protocol Versioning Works

MCP uses date-stamped version strings like 2024-11-05 or 2025-11-25. During initialization, the client sends the version it wants, and the server responds with the version it will use (typically the same, or the highest version both sides support). If they cannot agree, the connection fails at initialization.

// Initialization exchange (JSON-RPC)
// Client sends:
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "initialize",
  "params": {
    "protocolVersion": "2025-11-25",
    "clientInfo": { "name": "my-client", "version": "2.0.0" },
    "capabilities": { "sampling": {}, "elicitation": {} }
  }
}

// Server responds with the version it accepts:
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "protocolVersion": "2025-11-25",
    "serverInfo": { "name": "my-server", "version": "1.5.0" },
    "capabilities": { "tools": {}, "resources": {}, "prompts": {} }
  }
}
// The @modelcontextprotocol/sdk handles version negotiation automatically
// You do not need to implement it manually

// To check the negotiated version in your server (the exact property
// name may vary between SDK versions - consult your SDK's reference):
server.server.oninitialized = () => {
  const version = server.server.negotiatedProtocolVersion;
  console.log(`MCP session initialized with protocol version: ${version}`);
};

In practice, you rarely implement version negotiation by hand – the SDK handles it for you. The important thing is understanding what happens under the hood: if a client sends a version your server’s SDK does not support, the connection fails at initialization with a clear error. Logging the negotiated version on startup (as shown above) helps you quickly diagnose “why can’t this client connect?” issues in production.
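For intuition, the server-side selection rule can be sketched in a few lines. This is not SDK code – the SDK implements the real logic – but it shows why date-stamped versions are convenient: they compare correctly as plain strings.

```javascript
// Sketch of the negotiation rule: accept the client's version if the
// server supports it; otherwise answer with the server's newest version
// (the client must then disconnect if it cannot use that one).
function negotiateVersion(clientRequested, serverSupported) {
  if (serverSupported.includes(clientRequested)) return clientRequested;
  // YYYY-MM-DD strings sort lexicographically, so a plain sort suffices
  return [...serverSupported].sort().at(-1);
}
```

The lexicographic trick is the quiet payoff of the date-stamp scheme: no semver parser needed anywhere in the negotiation path.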

Feature Detection (Capability Negotiation)

// Check if the connected client supports a specific capability
// before using it in your server code

server.server.oninitialized = () => {
  const clientCaps = server.server.getClientCapabilities();

  const supportsElicitation = !!clientCaps?.elicitation;
  const supportsSampling = !!clientCaps?.sampling;
  const supportsRoots = !!clientCaps?.roots;

  console.log(`Client capabilities: elicitation=${supportsElicitation} sampling=${supportsSampling} roots=${supportsRoots}`);

  if (!supportsElicitation) {
    // Fall back to returning instructions in tool result instead of interactive elicitation
    console.warn('Client does not support elicitation - using text fallback');
  }
};

This matters in real deployments because not all MCP clients are equal. Claude Desktop supports elicitation and sampling, but a custom CLI client you built might not. If your server sends an elicitation request to a client that did not declare the capability, the request will fail. Checking capabilities first and providing a text-based fallback keeps your server compatible with the broadest range of clients.

Capability negotiation table client declares capabilities server checks before using elicitation sampling roots dark
Always check client capabilities before using server-initiated features like elicitation or sampling.
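The fallback decision itself is simple enough to pin down once at initialisation and reuse in every handler. A sketch (the function name and return values are mine, not SDK API):

```javascript
// Sketch: choose how to confirm destructive actions based on what the
// client declared at initialization. Computed once, used everywhere.
function pickConfirmationStrategy(clientCaps) {
  if (clientCaps?.elicitation) return 'elicitation'; // ask the user interactively
  return 'text'; // fall back to instructions in the tool result
}
```

Storing the strategy as a single value keeps the capability check out of your tool handlers and makes the fallback path easy to unit-test.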

Migrating Your Tool Schema

When you change a tool’s input schema, existing clients that have cached the old schema will break. Follow a compatibility-first migration process:

// Backwards-compatible schema evolution: add optional fields, never remove required ones

// Version 1 schema (existing clients use this)
// search_products: { query: z.string(), limit: z.number().optional().default(10) }

// Version 2: add optional 'category' filter without breaking v1 clients
server.tool('search_products', {
  query: z.string(),
  limit: z.number().optional().default(10),
  category: z.string().optional(),           // New optional field - backwards compatible
  // NEVER remove or rename 'query' or 'limit' - that breaks v1 clients
  // NEVER make an optional field required - that also breaks v1 clients
}, handler);
// Breaking change strategy: add a versioned tool name during transition
// Phase 1: add new tool alongside old one
server.tool('search_products_v2', {
  query: z.string(),
  limit: z.number().optional().default(10),
  filters: z.object({  // New required field - would break v1 if added to original
    category: z.string().optional(),
    priceMax: z.number().optional(),
    inStock: z.boolean().optional().default(true),
  }),
}, handler);

// Phase 2: deprecate the old tool via its description
// server.tool('search_products',
//   'DEPRECATED: use search_products_v2 instead',
//   { ...v1 schema... }, handler);

// Phase 3 (after client migration window): remove old tool

The biggest gotcha with schema migration is that LLM clients cache tool definitions. Even after you update the server, an agent might still send arguments matching the old schema until it re-fetches the tool list. Making new fields optional (or using versioned tool names) ensures that stale cached schemas do not cause hard failures during the transition window.
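The rule behind the gotcha can be stated mechanically: a change is safe for stale clients only if no field was removed and the set of required fields did not grow. A toy checker over simplified schema descriptions (entirely illustrative – real validation would compare the full Zod or JSON Schemas):

```javascript
// Sketch: an evolution is backwards compatible for cached clients when
// every old field survives and no new field became required.
function isCompatibleEvolution(oldSchema, newSchema) {
  const fieldsKept = oldSchema.fields.every((f) => newSchema.fields.includes(f));
  const noNewRequired = newSchema.required.every((f) => oldSchema.required.includes(f));
  return fieldsKept && noNewRequired;
}
```

Running a check like this in CI against the previous release's schemas catches accidental breaking changes before a client ever sees them.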

Version Compatibility Matrix

The MCP specification has gone through four published revisions. Each is backwards-incompatible with the previous, which is why the date changes. A Draft version tracks work-in-progress changes that have not yet shipped.

MCP Spec Version Status Key Features Added
2024-11-05 Final Initial release: tools, resources, prompts, sampling, stdio transport, HTTP+SSE transport
2025-03-26 Final OAuth 2.1 authorization framework, Streamable HTTP transport (replaces HTTP+SSE), tool annotations (destructiveHint, readOnlyHint, etc.), JSON-RPC batching, audio content type, completions capability
2025-06-18 Final Elicitation (server asks user for input), structured tool output, resource links in tool results, removed JSON-RPC batching, OAuth resource server classification (RFC 8707), MCP-Protocol-Version header required on HTTP, title field for human-friendly names
2025-11-25 Current Experimental tasks API (durable request tracking), OAuth Client ID Metadata Documents, tool calling in sampling requests, URL mode elicitation, enhanced authorization with incremental scope consent, icon metadata for tools/resources/prompts, OpenID Connect Discovery support, SSE polling
Draft Draft Work in progress: extensions field on capabilities, OpenTelemetry trace context propagation in _meta, SEP workflow formalisation. Do not target Draft in production.

The version jumps tell you something important: 2025-03-26 shipped tool annotations and a new transport. 2025-06-18 then removed JSON-RPC batching that 2025-03-26 had just added – proof that the spec is willing to walk back decisions quickly. Always check the changelog between your current version and the target version before upgrading.

Stability Guarantees

With four published spec revisions in roughly 18 months, a reasonable question is: what can I actually depend on? The list below separates the stable foundations from the parts that have already changed between versions.

  • JSON-RPC 2.0 wire format: Stable. Will not change between spec versions.
  • Core methods (initialize, tools/call, resources/read, prompts/get): Stable across all versions.
  • New capabilities: Always added as optional; never required for a functional server.
  • Removals: Features can be removed between versions (JSON-RPC batching was added in 2025-03-26 and removed in 2025-06-18). Pin your protocol version in production.
  • SDK APIs: The TypeScript/JavaScript SDK minor versions maintain backwards compatibility; only major versions may include breaking changes.

2026 Roadmap Priorities

2026 Roadmap (blog.modelcontextprotocol.io)

The MCP project published a 2026 roadmap organised around Working Group priorities rather than fixed release dates. The two highest-priority areas reflect production deployment needs:

  • Transport Evolution and Scalability: Addressing gaps in Streamable HTTP for production deployments. Focus areas include horizontal scaling without server-side state holding, standard session handling mechanisms, and a .well-known metadata format for server capability discovery. The goal is to keep the set of official transports small (a core MCP principle) while making them production-ready for enterprise-scale clusters.
  • Agent Communication: Expanding the experimental Tasks primitive with lifecycle improvements including retry semantics for transient failures, expiry policies for task results, and better integration with multi-agent orchestration patterns. This builds directly on the Tasks API introduced in 2025-11-25.

The shift from date-driven releases to Working Group-driven priorities signals that MCP is entering a production-hardening phase. For course readers: pin to 2025-11-25 in production, watch the roadmap for transport and tasks changes, and participate in Working Groups if you want to shape the next spec revision.

What to Build Next

  • Add a server://version resource to your MCP server that returns the current protocol version, SDK version, and your tool schema versions. Update it on every release.
  • Review your most-used tools for any fields that are currently optional but should be made required. Use the v2 naming strategy to transition safely.

nJoy 😉