
CAP theorem explained: consistency, availability and the partition problem

The most common summary of the CAP theorem goes something like this: a distributed system can only guarantee two of consistency, availability, and partition tolerance — pick any two. You see it on interview prep sites, system design guides, architecture blog posts. It is tidy, memorable, and wrong in two separate ways.

What is the CAP theorem?

The CAP theorem, stated by Eric Brewer in 2000 and formally proved by Gilbert and Lynch in 2002, says that a distributed system cannot guarantee all three of consistency, availability, and partition tolerance at once: when the network partitions, it must give up one of the other two. The three properties are:

  • Consistency — every read returns the most recently written value, or an error
  • Availability — every request to a live node gets a non-error response
  • Partition tolerance — the system continues operating despite network failures between nodes

First problem: partition tolerance isn’t a choice

Networks fail. Hardware fails. Cloud providers have incidents. “Partition tolerance” isn’t one of three equal options you get to weigh against the others — it’s the ground condition of operating a distributed system at all. A system that simply stops working when nodes can’t communicate isn’t a useful distributed system.

The real choice is what the system does when a partition occurs. The “pick any two” framing implies you might choose to skip partition tolerance and trade it for something nicer. You can’t.

What does “consistency” mean in the CAP theorem?

This is the second problem with the meme, and the more instructive one. The CAP theorem’s definition of consistency is specifically linearizability: every read must return the value of the most recent write, as if the entire system were a single node with a global clock. It is the strongest single-object consistency guarantee a system can offer.

Linearizability is a high bar. Most applications don’t need it. And the CAP theorem’s conclusion — that you can’t have both consistency and availability during a partition — only applies to linearizability. Weaker consistency models open up different options.
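
To pin the definition down, here is a toy operation history, entirely hypothetical, showing the one violation linearizability forbids: a read that begins after a write has completed in real time, yet returns the old value. The check below only handles this single-write case; a real linearizability checker must search over every legal ordering of concurrent operations.

```python
# Toy history on a single register: (client, op, value, start, end),
# with wall-clock times. Alice's write of 20 completes at t=2.0;
# Bob's read begins at t=3.0, so linearizability requires it to
# return 20. It returns 50, the old value: a stale read.
history = [
    ("alice", "write", 20, 1.0, 2.0),
    ("bob",   "read",  50, 3.0, 4.0),
]

def stale_read(history):
    """Flag a read that starts after a write completed yet returns
    a different value. Simplified: valid only for one-write histories."""
    writes = [h for h in history if h[1] == "write"]
    reads = [h for h in history if h[1] == "read"]
    for _, _, wval, _, wend in writes:
        for _, _, rval, rstart, _ in reads:
            if rstart > wend and rval != wval:
                return True
    return False

print(stale_read(history))  # True -- this history is not linearizable
```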

The three visualizers below show what actually happens under each model when a partition strikes a two-node system.

Linearizable consistency

Under linearizability, every read must return the latest write. During a partition, a node that can’t confirm it has the latest data must refuse to respond — otherwise it might return stale data, which violates the guarantee. The result is correctness at the cost of availability.
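
A minimal sketch of that refusal logic, in Python rather than any real database’s protocol; the two-node setup, the starting balance, and the quorum rule are illustrative assumptions:

```python
# A node that cannot reach a majority refuses to answer rather than
# risk serving stale data: correctness at the cost of availability.
class LinearizableNode:
    def __init__(self, name, cluster_size=2):
        self.name = name
        self.balance = 50           # illustrative replicated value
        self.peer_reachable = True  # flips to False during a partition
        self.cluster_size = cluster_size

    def _has_quorum(self):
        # This node plus however many peers it can reach; a strict
        # majority is required before answering anything.
        reachable = 1 + (1 if self.peer_reachable else 0)
        return reachable > self.cluster_size // 2

    def read(self):
        if not self._has_quorum():
            raise RuntimeError(f"{self.name}: cannot confirm latest value, read rejected")
        return self.balance

    def withdraw(self, amount):
        if not self._has_quorum():
            raise RuntimeError(f"{self.name}: cannot replicate, withdrawal rejected")
        self.balance -= amount
        return self.balance

east = LinearizableNode("east")
east.peer_reachable = False  # the partition strikes
try:
    east.read()
except RuntimeError as err:
    print(err)  # unavailable, but never wrong
```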


Node East’s logs show two rejections: a read error and a rejected withdrawal. That’s the linearizable trade-off made concrete. When the partition heals, both nodes agree on $20 and there is no question of incorrect data having been served. The account was temporarily inaccessible — it was never wrong.

Read-your-writes consistency

Read-your-writes is a weaker model that opens up new options. Nodes can serve reads from local state during a partition, so reads stay available to users. But writes are blocked: a node will not accept a change it cannot safely replicate. When the partition heals, there is nothing to reconcile.
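
A minimal sketch of that behavior, again illustrative rather than a real replication protocol; the document versions match the scenario described below:

```python
# Local reads stay available during a partition; writes are refused,
# so no conflicting edits can accumulate on isolated nodes.
class ReadYourWritesNode:
    def __init__(self, name):
        self.name = name
        self.doc_version = 1          # local copy, possibly stale
        self.peer_reachable = True

    def read(self):
        # Always answered from local state: available, but maybe stale.
        return self.doc_version

    def write(self, new_version):
        # Refused unless the change can be replicated.
        if not self.peer_reachable:
            raise RuntimeError(f"{self.name}: cannot replicate, write rejected")
        self.doc_version = new_version

east = ReadYourWritesNode("east")
east.peer_reachable = False   # the partition strikes
print(east.read())            # 1 -- a stale read (the peer may be on v2)
try:
    east.write(3)
except RuntimeError as err:
    print(err)                # nothing to reconcile when the partition heals
```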


Bob reads an old version of the shared document — v1 instead of v2. That’s a stale read, and it’s visible in Bob’s node log. But it’s a read inconsistency, not a write conflict. Bob’s attempt to save his own edits is rejected until the partition heals. When it does, Node East just catches up. There are no conflicting edits to resolve.

This is the sense in which read-your-writes “works around” the CAP theorem. It accepts a weaker-but-still-useful guarantee — stale reads during a partition — and in return gets read availability and clean reconciliation. The harder problem (two conflicting writes landing in isolation) cannot happen here.

Eventual consistency

Eventual consistency goes further: reads and writes are both accepted at any node during a partition. When the partition heals, nodes reconcile. Maximum availability throughout — at the cost of correctness.
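
A minimal sketch of the failure mode this produces, using the last-item scenario described below; the merge rule here (union of orders, minimum of stock) is one illustrative choice among many:

```python
# Both nodes accept writes while partitioned. Reconciliation makes
# the state agree, but it cannot undo two confirmed orders.
class EventualNode:
    def __init__(self, name):
        self.name = name
        self.stock = 1
        self.orders = []

    def purchase(self, customer):
        # Always accepted locally: maximum availability.
        if self.stock > 0:
            self.stock -= 1
            self.orders.append(customer)
            return f"{self.name}: order confirmed for {customer}"
        return f"{self.name}: out of stock"

def reconcile(a, b):
    # On heal: union the orders, take the minimum stock.
    merged = a.orders + b.orders
    a.orders = b.orders = merged
    a.stock = b.stock = min(a.stock, b.stock)

east, west = EventualNode("east"), EventualNode("west")
print(east.purchase("alice"))   # confirmed: east still sees stock=1
print(west.purchase("bob"))     # confirmed: so does west
reconcile(east, west)
print(east.stock, east.orders)  # 0 ['alice', 'bob'] -- one item, two orders
```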


Both node logs end with a confirmed purchase. One item, two confirmed orders. The stock count reconciles to zero just fine — the problem is that the system let two people buy the same last item. Someone’s order has to be cancelled after the fact.

This is the problem that the label “eventually consistent” papers over. The system was available throughout. It was also wrong, in a way that requires manual remediation.

Eventual consistency is appropriate where conflicts are either harmless or easy to resolve: DNS propagation, social media like counts, shopping carts (add to cart is additive; you handle conflicts at checkout). It is not appropriate where the thing being counted cannot be oversold.
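
The cart example is worth making concrete. Additions commute, so merging divergent replicas is just a set union; this is the idea behind a grow-only set CRDT. A minimal sketch:

```python
# Two replicas of the same cart diverge during a partition.
east_cart = {"book"}
west_cart = {"book"}

east_cart.add("headphones")   # accepted on one side of the partition
west_cart.add("coffee")       # accepted on the other

# On heal, union loses nothing and needs no conflict resolution.
merged = east_cart | west_cart
print(merged)  # {'book', 'headphones', 'coffee'} (order may vary)

# Checkout is where the non-additive decision (is it in stock?)
# happens, so that step needs a stronger guarantee.
```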

The spectrum in practice

Every step down the consistency scale trades a guarantee for availability.

  • Linearizable protects you from everything — at the cost of refusing requests during a partition.
  • Read-your-writes protects you from write conflicts but not stale reads.
  • Eventual consistency gives you maximum availability and leaves conflict resolution as your problem.

The useful question when evaluating a system is not “is it CP or AP?” but something more precise: what does this system promise when a write happens and a different node reads before the change has propagated? And what happens to conflicting writes if a partition occurs?

Which databases use which consistency model?

System                         Consistency model   Notes
PostgreSQL (sync replication)  Linearizable        Rejects writes it cannot replicate
ZooKeeper                      Linearizable        Zab consensus; majority quorum required
etcd                           Linearizable        Raft-based; minority partition cannot write
Google Spanner                 Linearizable        TrueTime atomic clocks for global ordering
MongoDB                        Configurable        Majority-write with causal consistency by default
Cassandra                      Eventual (tunable)  Tunable consistency levels; eventual by default
DynamoDB                       Eventual (tunable)  Eventually consistent reads by default
CouchDB                        Eventual            Multi-master replication; conflicts resolved on sync
DNS                            Eventual            Propagation delay is intentional
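
As one concrete instance of “tunable” from the table: the DataStax Python driver for Cassandra lets you choose a consistency level per statement. A minimal sketch; the contact point, keyspace, and table are placeholders:

```python
from cassandra import ConsistencyLevel
from cassandra.cluster import Cluster
from cassandra.query import SimpleStatement

cluster = Cluster(["127.0.0.1"])
session = cluster.connect("shop")  # hypothetical keyspace

# ONE: fast, answered by a single replica, may return stale data.
stale_ok = SimpleStatement(
    "SELECT stock FROM items WHERE id = 42",
    consistency_level=ConsistencyLevel.ONE,
)

# QUORUM: a majority of replicas must respond -- closer to the CP end,
# and the query fails if a partition leaves no majority reachable.
must_be_fresh = SimpleStatement(
    "SELECT stock FROM items WHERE id = 42",
    consistency_level=ConsistencyLevel.QUORUM,
)

session.execute(stale_ok)
session.execute(must_be_fresh)
```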

When a system claims to be “consistent” in conversation, it could mean any point on this spectrum. The distributed systems chapter goes further: how linearizability is actually implemented through consensus protocols like Raft, how weaker consistency models work in production systems like Cassandra, and what “split brain” looks like when an AP system allows conflicting writes to isolated nodes — one of the nastier failure modes in distributed computing.