Excellent question! When multiple transactions access the same data concurrently, various anomalies can occur, especially at low isolation levels.
These problems can cause inconsistent, incorrect, or even lost data if not handled properly.
🔍 Common Problems with Concurrent Transactions:
1. 🧼 Dirty Read
One transaction reads uncommitted changes made by another.
Example:
- T1 updates `balance = 500 → 600` but hasn’t committed.
- T2 reads `balance = 600`.
- T1 rolls back → T2 used invalid data.
🔒 Prevented by: `READ COMMITTED` and higher.
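A minimal two-session sketch of this scenario (MySQL-style syntax; the `accounts` table and its `id` column are assumptions for illustration):

```sql
-- Session 2 (T2): allow dirty reads for this session
SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

-- Session 1 (T1): update, but do not commit yet
START TRANSACTION;
UPDATE accounts SET balance = 600 WHERE id = 1;  -- balance was 500

-- Session 2 (T2): reads the uncommitted value
START TRANSACTION;
SELECT balance FROM accounts WHERE id = 1;       -- returns 600: a dirty read
COMMIT;

-- Session 1 (T1): the value T2 acted on never officially existed
ROLLBACK;
```

At `READ COMMITTED` or higher, T2’s `SELECT` would return the last committed value (500) instead.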
2. 🔁 Non-Repeatable Read
A row is read twice, and the value changes between reads within the same transaction.
Example:
- T1 reads user email = `alice@mail.com`.
- T2 updates it to `alice@new.com` and commits.
- T1 reads again: the value has changed.
🔒 Prevented by: `REPEATABLE READ` and `SERIALIZABLE`.
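The same flow as SQL (again MySQL-style; the `users` table is a hypothetical):

```sql
-- Session 1 (T1): READ COMMITTED does not protect against this anomaly
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
START TRANSACTION;
SELECT email FROM users WHERE id = 1;  -- 'alice@mail.com'

-- Session 2 (T2): change the row and commit while T1 is still open
START TRANSACTION;
UPDATE users SET email = 'alice@new.com' WHERE id = 1;
COMMIT;

-- Session 1 (T1): same query, same transaction, different answer
SELECT email FROM users WHERE id = 1;  -- 'alice@new.com': non-repeatable read
COMMIT;
```

Under `REPEATABLE READ`, T1 would keep seeing `alice@mail.com` until it commits.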
3. 👻 Phantom Read
A query returns different sets of rows when run twice in the same transaction due to new rows being inserted.
Example:
- T1: `SELECT * FROM Orders WHERE total > 100;`
- T2: inserts a new matching order and commits.
- T1 runs the same query again → now sees a new row.
🔒 Prevented by: `SERIALIZABLE`.
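A sketch using the `Orders` query above (the `id` column and values are made up; exact behavior varies by engine, but most databases show the phantom at `READ COMMITTED`):

```sql
-- Session 1 (T1)
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
START TRANSACTION;
SELECT * FROM Orders WHERE total > 100;   -- say, 3 rows

-- Session 2 (T2): insert a row matching T1's predicate
START TRANSACTION;
INSERT INTO Orders (id, total) VALUES (42, 150);
COMMIT;

-- Session 1 (T1): same predicate, same transaction
SELECT * FROM Orders WHERE total > 100;   -- 4 rows: a phantom appeared
COMMIT;
```

Under `SERIALIZABLE`, the engine would either block T2’s insert until T1 finishes or abort one of the transactions with a serialization error.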
4. ❌ Lost Update
Two transactions read the same row, modify it, and write back, overwriting each other’s changes.
Example:
- T1 reads `balance = 500`, adds 100 → writes 600.
- T2 also reads 500, adds 200 → writes 700.
- Final result = 700 ❌ (T1’s +100 is lost).
🔒 Prevented by: proper locking, `SERIALIZABLE`, or optimistic concurrency control.
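Two common fixes, sketched in SQL (the `accounts` table and `version` column are hypothetical):

```sql
-- Pessimistic: lock the row so concurrent writers serialize
START TRANSACTION;
SELECT balance FROM accounts WHERE id = 1 FOR UPDATE;  -- row lock held until commit
UPDATE accounts SET balance = balance + 100 WHERE id = 1;
COMMIT;

-- Optimistic: detect a concurrent write with a version column instead of blocking
UPDATE accounts
SET balance = 600, version = version + 1
WHERE id = 1 AND version = 7;  -- 0 rows affected => someone else updated first; retry
```

Note that a single atomic `UPDATE accounts SET balance = balance + 100 ...` also avoids the read-modify-write race entirely when the new value can be computed in the statement itself.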
🧠 Summary Table:
| Problem | Description | Prevented by |
|---|---|---|
| Dirty Read | Reading uncommitted data | `READ COMMITTED` or higher |
| Non-Repeatable Read | Same row read twice gives different data | `REPEATABLE READ` or higher |
| Phantom Read | Re-executed query returns new rows | `SERIALIZABLE` |
| Lost Update | Concurrent writes overwrite each other | Locks, optimistic concurrency |