Database: What problems can arise with concurrent access using transactions?

Excellent question! When multiple transactions access the same data concurrently, various data anomalies can occur, especially at low isolation levels.

These problems can cause inconsistent, incorrect, or even lost data if not handled properly.

🔍 Common Problems with Concurrent Transactions:


1. 🧼 Dirty Read

One transaction reads uncommitted changes made by another.

Example:

  • T1 updates balance = 500 → 600 but hasn’t committed.
  • T2 reads balance = 600.
  • T1 rolls back → T2 used invalid data.

🔒 Prevented by: READ COMMITTED and higher.
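The sequence above can be sketched with a toy key-value store whose readers see uncommitted writes (the `ToyDB` class and its methods are invented purely for illustration, not a real database API):

```python
class ToyDB:
    """Minimal store with *no* isolation: readers see uncommitted writes."""

    def __init__(self):
        self.committed = {"balance": 500}
        self.pending = {}              # T1's staged, uncommitted writes

    def write(self, key, value):
        self.pending[key] = value      # staged, not yet committed

    def read(self, key):
        # No isolation: pending (dirty) data shadows committed data.
        return self.pending.get(key, self.committed[key])

    def rollback(self):
        self.pending.clear()           # discard uncommitted writes


db = ToyDB()
db.write("balance", 600)               # T1 updates 500 -> 600, no commit yet
seen_by_t2 = db.read("balance")        # T2 reads 600: a dirty read
db.rollback()                          # T1 rolls back
print(seen_by_t2, db.read("balance"))  # 600 500 -- T2 used a value that never existed
```

At READ COMMITTED and above, `read` would only ever return data from `committed`, which is exactly what the isolation level guarantees.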

2. 🔁 Non-Repeatable Read

A transaction reads the same row twice and gets different values, because another transaction modified the row and committed in between.

Example:

  • T1 reads user email = alice@mail.com.
  • T2 updates it to alice@new.com and commits.
  • T1 reads again: value changed.

🔒 Prevented by: REPEATABLE READ and SERIALIZABLE.

3. 👻 Phantom Read

A query returns different sets of rows when run twice in the same transaction due to new rows being inserted.

Example:

  • T1: SELECT * FROM Orders WHERE total > 100;
  • T2: inserts a new matching order and commits.
  • T1 runs the same query again — now sees a new row.

🔒 Prevented by: SERIALIZABLE.

4. ❌ Lost Update

Two transactions read the same row, modify it, and write back, overwriting each other’s changes.

Example:

  • T1 reads balance = 500, adds 100 → sets to 600
  • T2 also reads 500, adds 200 → sets to 700
  • Final result = 700 ❌ (T1’s change is lost)

🔒 Prevented by: proper locking, SERIALIZABLE, or optimistic concurrency control.
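The interleaving above, plus the optimistic-concurrency fix, can be sketched in a few lines. The `row` dict and `update_if_unchanged` helper are invented for illustration; a real system would perform the version check atomically, e.g. `UPDATE ... SET ... WHERE id = ? AND version = ?`:

```python
# Lost update: both transactions read the same snapshot, last writer wins.
balance = 500
t1_read = balance            # T1 reads 500
t2_read = balance            # T2 reads 500
balance = t1_read + 100      # T1 writes 600
balance = t2_read + 200      # T2 writes 700, silently discarding T1's +100
print(balance)               # 700, not the expected 800

# Optimistic concurrency control: each write carries the version it was based on.
def update_if_unchanged(row, expected_version, new_balance):
    """Apply the write only if the row was not modified since it was read."""
    if row["version"] != expected_version:
        return False                    # conflict: caller must re-read and retry
    row["balance"] = new_balance
    row["version"] += 1
    return True

row = {"balance": 500, "version": 1}
# T1 and T2 both read (balance=500, version=1).
t1_ok = update_if_unchanged(row, 1, 500 + 100)  # T1 commits: balance=600, version=2
t2_ok = update_if_unchanged(row, 1, 500 + 200)  # T2's stale write is rejected
print(t1_ok, t2_ok, row["balance"])             # True False 600
```

Instead of silently losing T1's update, the version check forces T2 to re-read the row (seeing 600) and retry, so both increments survive.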

🧠 Summary Table:

| Problem | Description | Prevented by |
|---|---|---|
| Dirty Read | Reading uncommitted data | READ COMMITTED or higher |
| Non-Repeatable Read | Same row read twice gives different data | REPEATABLE READ or higher |
| Phantom Read | Re-executed query returns new rows | SERIALIZABLE |
| Lost Update | Concurrent writes overwrite each other | Locks, optimistic concurrency |