What is pessimistic locking?
Pessimistic locking is a concurrency control strategy where:
You assume conflicts will happen, so you lock data before modifying it, preventing others from changing it until you’re done.
In short:
- “I lock first, then read and write.”
- Others must wait, fail, or timeout.
Why it’s called pessimistic
Because the system is pessimistic about concurrency:
- It assumes concurrent access will cause problems
- So it prevents it upfront, instead of detecting conflicts later
This is the opposite of optimistic locking.
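To make the contrast concrete, here is a minimal in-memory sketch, with Python threads standing in for database sessions. The `Account` class and its two `withdraw_*` methods are illustrative names, not a real database API: the pessimistic path locks first, while the optimistic path reads freely and retries on a version conflict.

```python
import threading

class Account:
    """Toy in-memory account; illustrative only, not a database API."""
    def __init__(self, balance):
        self.balance = balance
        self.version = 0              # "version column" for the optimistic path
        self.lock = threading.Lock()  # "row lock" for the pessimistic path

    def withdraw_pessimistic(self, amount):
        # Pessimistic: take the lock first, then read and write.
        with self.lock:
            if self.balance < amount:
                return False
            self.balance -= amount
            self.version += 1
            return True

    def withdraw_optimistic(self, amount):
        # Optimistic: read without locking, then apply only if nobody changed
        # the row in the meantime; on conflict, retry. A database does the
        # atomic check inside "UPDATE ... WHERE version = :seen".
        while True:
            seen_version = self.version
            new_balance = self.balance - amount
            if new_balance < 0:
                return False
            with self.lock:  # stands in for the atomic conditional UPDATE
                if self.version == seen_version:
                    self.balance = new_balance
                    self.version += 1
                    return True
            # someone else committed first -> retry

acct = Account(100)
acct.withdraw_pessimistic(30)
acct.withdraw_optimistic(30)
print(acct.balance)  # 40
```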
How pessimistic locking works (step by step)
Let’s take a bank account example.
Scenario: withdraw money safely
```sql
BEGIN;

SELECT balance
FROM account
WHERE id = 1
FOR UPDATE;

UPDATE account
SET balance = balance - 100
WHERE id = 1;

COMMIT;
```
What happens internally
- Transaction starts
- SELECT ... FOR UPDATE: the database places a row-level exclusive lock
- Other transactions:
  - cannot UPDATE
  - cannot DELETE
  - may or may not READ (depends on isolation level)
- You safely compute and update state
- COMMIT: the lock is released
- Next waiting transaction proceeds
✔ No lost updates
✔ Invariant preserved (balance >= 0)
What exactly gets locked?
Depends on the database, but typically:
- Row-level lock (most common)
- Sometimes gap locks / range locks
- Rarely table-level locks (bad for throughput)
In PostgreSQL:
- FOR UPDATE → exclusive row lock
- FOR SHARE → shared read lock
What happens to other transactions?
If another transaction tries:
```sql
UPDATE account SET balance = balance - 50 WHERE id = 1;
```
It will:
- Wait until the first transaction commits or rolls back
- Or timeout
- Or fail immediately (NOWAIT / SKIP LOCKED)
Variants you should know (interview bonus)
1️⃣ Blocking (default)
```sql
SELECT ... FOR UPDATE;
```
- Others wait
2️⃣ Fail fast
```sql
SELECT ... FOR UPDATE NOWAIT;
```
- If locked → error immediately
3️⃣ Skip locked rows (queues!)
```sql
SELECT *
FROM jobs
LIMIT 1
FOR UPDATE SKIP LOCKED;
```
Used in:
- Job queues
- Task schedulers
- Worker pools
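The SKIP LOCKED pattern can be simulated in plain Python: give each job row its own lock and claim with a non-blocking acquire, so a row held by another worker is skipped rather than waited on. The `jobs` table and `claim_next_job` helper are illustrative names:

```python
import threading

# Each "job row" gets its own lock; acquire(blocking=False) plays the
# role of SKIP LOCKED: a busy row is skipped instead of waited on.
jobs = {job_id: threading.Lock() for job_id in range(5)}

def claim_next_job():
    for job_id, row_lock in jobs.items():
        if row_lock.acquire(blocking=False):  # skip rows locked by others
            return job_id  # caller holds the lock while processing the job
    return None  # every job is locked by another worker

# Worker A claims job 0 and holds its lock while "processing";
# worker B skips the locked row and claims job 1 instead.
a = claim_next_job()
b = claim_next_job()
print(a, b)  # 0 1
```

The key property is that neither worker ever blocks: a fully locked queue returns None immediately, just as SKIP LOCKED returns an empty result set.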
What pessimistic locking is good for
✔ Money transfers
✔ Inventory updates
✔ Counters with invariants
✔ Stateful workflows
✔ Exactly-once financial logic
Basically:
Any place where correctness > throughput
What pessimistic locking is bad for
❌ High-throughput event ingestion
❌ Analytics writes
❌ Append-only logs
❌ Distributed retries
❌ Kafka consumers at scale
Locks don’t scale well under contention.
Pessimistic locking vs transactions (important!)
- Locking requires a transaction
- But a transaction does not imply locking
```sql
BEGIN;
SELECT balance FROM account WHERE id = 1;              -- no lock taken
-- application computes new_balance = balance - 100
UPDATE account SET balance = :new_balance WHERE id = 1;
COMMIT;
```
⚠ Without explicit locking:
- Two transactions can read the same balance and both write back a stale result
- Result = lost updates
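A lost update is easiest to see when the interleaving is replayed by hand. This sketch steps through two unlocked read-modify-write "transactions" that both read before either writes:

```python
# Two "transactions" both read the balance before either writes it back.
balance = 500

t1_read = balance        # T1: SELECT balance    -> sees 500
t2_read = balance        # T2: SELECT balance    -> also sees 500
balance = t1_read - 100  # T1: UPDATE ... COMMIT -> 400
balance = t2_read - 100  # T2: UPDATE ... COMMIT -> 400 again
print(balance)  # 400, not 300: T1's withdrawal was silently lost
```

With FOR UPDATE, T2's read would have blocked until T1 committed, so T2 would see 400 and write 300.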
Typical interview mistakes ❌
- “Transaction is enough” → ❌
- “Locks are slow so don’t use them” → ❌
- “Upsert replaces locking” → ❌
- “Deadlocks won’t happen” → ❌ (they will)
Interview-ready short answer (say this)
Pessimistic locking is a concurrency control strategy where rows are explicitly locked before modification to prevent concurrent changes.
It works by acquiring row-level locks, typically via SELECT … FOR UPDATE, inside a transaction, blocking other writers until the transaction completes.
It’s used when protecting business invariants like balances or inventory, where correctness is more important than throughput.