Module 3 · Consistency & Replication · Day 027 · 30 min

Consensus: Paxos and Raft

How nodes agree on a single value despite failures.

Memory hook

Consensus (Paxos and Raft): how nodes agree on a single value despite failures

Mental model

Choose the failure mode you can explain.

Design lens

A 5-node cluster tolerates 2 failures.

Recall anchors
Raft · Paxos · Use

Why it matters

Consensus protocols let a set of nodes agree on a value even with crashes and message loss. Raft is the more readable cousin of Paxos: elect a leader, replicate the log, and require a majority for every commit.

Deep dive

Raft: leader election (terms + heartbeats), log replication (leader appends; an entry commits once a majority acks), safety (any two majorities overlap, so a committed entry survives leader changes).
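
A minimal Go sketch of the leader-side commit rule, using the Raft paper's names (`matchIndex`, `commitIndex`); this is illustrative, not etcd's implementation:

```go
package main

import (
	"fmt"
	"sort"
)

// Entry is one replicated log record; Term tags the leader that wrote it.
type Entry struct {
	Term    int
	Command string
}

// advanceCommitIndex applies the leader-side commit rule: an index commits
// once a majority of the cluster stores it, and the entry at that index
// must be from the leader's current term (the Raft safety restriction).
// matchIndex[i] is the highest index known replicated on follower i.
func advanceCommitIndex(log []Entry, matchIndex []int, currentTerm, commitIndex int) int {
	// Replication progress for every member, counting the leader's own log.
	progress := append([]int{len(log) - 1}, matchIndex...)
	sort.Sort(sort.Reverse(sort.IntSlice(progress)))

	// The median of the sorted progress is the highest index held by a majority.
	candidate := progress[len(progress)/2]

	// Only commit entries written in the current term; older-term entries
	// commit indirectly when a current-term entry above them commits.
	if candidate > commitIndex && log[candidate].Term == currentTerm {
		return candidate
	}
	return commitIndex
}

func main() {
	log := []Entry{{Term: 1, Command: "x=1"}, {Term: 1, Command: "x=2"}, {Term: 2, Command: "x=3"}}
	// Leader holds index 2; followers are at 2, 2, 1, 0. A majority (3 of 5) holds index 2.
	fmt.Println(advanceCommitIndex(log, []int{2, 2, 1, 0}, 2, 1)) // 2
}
```

Sorting replication progress and taking the median is exactly the "highest index stored on a majority" test; the current-term check is what stops a new leader from committing a previous term's entries by counting replicas alone.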

Paxos: equivalent guarantees, but harder to teach. Multi-Paxos amortizes the prepare phase across many decrees to optimize steady-state throughput.
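
For flavor, a single-decree Paxos acceptor sketched in Go; the two methods correspond to the prepare/promise and accept phases, and all names here are ours rather than any library's:

```go
package main

import "fmt"

// Acceptor holds the durable state of one single-decree Paxos acceptor.
type Acceptor struct {
	promised  int    // highest ballot number promised in phase 1
	acceptedN int    // ballot at which a value was accepted (0 = none)
	acceptedV string // the accepted value, if any
}

// Prepare (phase 1): promise to ignore ballots below n, and report any
// previously accepted value so the proposer is forced to re-propose it.
func (a *Acceptor) Prepare(n int) (ok bool, prevN int, prevV string) {
	if n > a.promised {
		a.promised = n
		return true, a.acceptedN, a.acceptedV
	}
	return false, 0, ""
}

// Accept (phase 2): accept value v at ballot n unless a higher ballot
// has been promised since the proposer's Prepare.
func (a *Acceptor) Accept(n int, v string) bool {
	if n >= a.promised {
		a.promised, a.acceptedN, a.acceptedV = n, n, v
		return true
	}
	return false
}

func main() {
	acc := &Acceptor{}
	ok, _, _ := acc.Prepare(1)
	fmt.Println("promise at ballot 1:", ok)                   // true
	fmt.Println("accept at ballot 1:", acc.Accept(1, "x=1"))  // true
	ok, _, _ = acc.Prepare(1)                                 // a rival reusing ballot 1 is refused
	fmt.Println("re-promise ballot 1:", ok)                   // false
}
```

Raft folds the same two-phase structure into leader election (phase 1, run once per term) plus log replication (phase 2, repeated per entry), which is part of why it reads more simply.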

Cost: every commit needs a majority, so commit latency has a floor set by the slowest node in the fastest majority.
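
To make that floor concrete, a toy calculation under assumed follower RTTs (all values hypothetical): the leader's own vote is free, so it waits for quorum-1 follower acks, and the fastest majority's slowest member sets the commit time.

```go
package main

import (
	"fmt"
	"sort"
)

// commitLatencyMs returns the floor on one commit round: the leader's own
// vote is free, so it must wait for quorum-1 follower acks, and the answer
// is the RTT of the slowest follower inside the fastest majority.
func commitLatencyMs(followerRTTms []int, clusterSize int) int {
	quorum := clusterSize/2 + 1 // e.g. 3 of 5
	sort.Ints(followerRTTms)
	return followerRTTms[quorum-2] // the (quorum-1)-th fastest follower ack
}

func main() {
	// 5 nodes: leader + 2 same-region followers + 2 remote followers.
	mostlyLocal := []int{2, 3, 80, 120}
	// 5 nodes spread one-per-region: any majority crosses a region boundary.
	spread := []int{40, 55, 80, 120}
	fmt.Println(commitLatencyMs(mostlyLocal, 5), "ms") // 3: a local majority suffices
	fmt.Println(commitLatencyMs(spread, 5), "ms")      // 55: every commit pays a WAN hop
}
```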

Demo / scenario

An etcd cluster of 5 nodes used for service config (a client-side sketch follows the numbered steps).

  1. Leader appends config change to its log.
  2. Followers ack; once a majority (3 of 5) has the entry, it commits.
  3. Leader applies and acks client.
  4. If leader dies, election picks a new leader from up-to-date followers.
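
A client-side sketch of steps 1-3 using the official Go client (`go.etcd.io/etcd/client/v3`); endpoints and the key are placeholders. A successful Put means the leader committed the change through Raft, i.e. a majority of the 5 members has it in their logs:

```go
package main

import (
	"context"
	"fmt"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	// Endpoints are placeholders; point them at your cluster members.
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"node1:2379", "node2:2379", "node3:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()

	// Put returns only after the Raft leader has committed the change,
	// i.e. a majority (3 of 5) of members have replicated it.
	if _, err := cli.Put(ctx, "/config/service/max-conns", "500"); err != nil {
		panic(err) // e.g. context deadline exceeded if no quorum is reachable
	}
	fmt.Println("config change committed by quorum")
}
```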

Tradeoffs

  • A 5-node cluster tolerates 2 failures (quorum math sketched after this list).
  • Cross-region clusters multiply commit latency.
  • Use consensus for small critical state, not bulk data.
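
The tolerance numbers fall straight out of the majority rule; a quick check:

```go
package main

import "fmt"

func main() {
	// Majority quorum: n nodes commit with floor(n/2)+1 acks,
	// so the cluster tolerates f = floor((n-1)/2) failures.
	for _, n := range []int{3, 4, 5, 6, 7} {
		fmt.Printf("n=%d quorum=%d tolerates=%d\n", n, n/2+1, (n-1)/2)
	}
	// n=5 -> quorum=3, tolerates=2; note n=6 tolerates no more than n=5.
}
```

Even cluster sizes buy nothing: 6 nodes still tolerate only 2 failures while adding a voter to every quorum, which is why 3, 5, and 7 are the usual sizes.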

Diagram

Leader sends AppendEntries to each of three followers in parallel.
Raft: leader, followers, log replication.
