Do Database Transactions Prevent Race Conditions

TL;DR: Transactions do not inherently prevent all race conditions. You still need locking, abort-and-retry handling, or other protective measures in all real-world database implementations. Transactions are not a secret sauce you can add to your queries to make them safe from all concurrency effects.

Isolation

What you're getting at with your question is the I in ACID - isolation. The academically pure idea is that transactions should provide perfect isolation, so that the result is the same as if every transaction executed serially. In practice that's rarely the case in real RDBMS implementations: capabilities vary by product, and the guarantees can be weakened by choosing a weaker isolation level such as READ COMMITTED. So you cannot assume that transactions prevent all race conditions, even at SERIALIZABLE isolation.

Some RDBMSs have stronger abilities than others. For example, PostgreSQL 9.2 and newer have quite good SERIALIZABLE isolation that detects most (but not all) possible interactions between transactions and aborts all but one of the conflicting transactions. So it can run transactions in parallel quite safely.

Few, if any [3], systems have truly perfect SERIALIZABLE isolation that prevents all possible races and anomalies, including issues like lock escalation and lock-ordering deadlocks.

Even with strong isolation some systems (like PostgreSQL) will abort conflicting transactions, rather than making them wait and running them serially. Your app must remember what it was doing and re-try the transaction. So while the transaction has prevented concurrency-related anomalies from being stored to the DB, it's done so in a manner that is not transparent to the application.
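
For example, the retry loop is the application's job. A minimal sketch using the Npgsql driver (the helper's name and shape are mine, not from any of the answers): PostgreSQL reports these aborts as SQLSTATE 40001, so the loop simply re-runs the whole transaction.

using System;
using System.Data;
using Npgsql;

// Sketch: run 'work' in a SERIALIZABLE transaction, retrying from
// scratch whenever PostgreSQL aborts it with a serialization failure
// (SQLSTATE 40001). Connection string handling is up to the caller.
static void RunSerializable(string connString, Action<NpgsqlConnection, NpgsqlTransaction> work)
{
    while (true)
    {
        using var conn = new NpgsqlConnection(connString);
        conn.Open();
        using var tx = conn.BeginTransaction(IsolationLevel.Serializable);
        try
        {
            work(conn, tx);
            tx.Commit();
            return; // success
        }
        catch (PostgresException ex) when (ex.SqlState == "40001")
        {
            // This transaction lost a conflict and was aborted;
            // loop around and re-run it from the beginning.
        }
    }
}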

Atomicity

Arguably the primary purpose of a database transaction is that it provides for atomic commit. The changes do not take effect until you commit the transaction. When you commit, the changes all take effect at the same instant as far as other transactions are concerned. No transaction can ever see just some of the changes another transaction makes [1][2]. Similarly, if you ROLLBACK, then none of the transaction's changes are ever seen by any other transaction; it's as if your transaction never existed.

That's the A in ACID.
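
As a minimal illustration (the accounts table and the Npgsql usage are assumptions for the sketch, not from the text): if anything fails between the two statements, the rollback means no other transaction ever sees the first UPDATE.

using Npgsql;

// Sketch of all-or-nothing commit: either both UPDATEs become visible
// at the same instant, or neither ever does.
static void Transfer(NpgsqlConnection conn, int fromId, int toId, decimal amount)
{
    using var tx = conn.BeginTransaction();
    try
    {
        using (var debit = new NpgsqlCommand(
            "UPDATE accounts SET balance = balance - @amt WHERE id = @id", conn, tx))
        {
            debit.Parameters.AddWithValue("amt", amount);
            debit.Parameters.AddWithValue("id", fromId);
            debit.ExecuteNonQuery();
        }
        using (var credit = new NpgsqlCommand(
            "UPDATE accounts SET balance = balance + @amt WHERE id = @id", conn, tx))
        {
            credit.Parameters.AddWithValue("amt", amount);
            credit.Parameters.AddWithValue("id", toId);
            credit.ExecuteNonQuery();
        }
        tx.Commit();   // other transactions now see both changes, atomically
    }
    catch
    {
        tx.Rollback(); // neither change is ever visible to anyone
        throw;
    }
}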

Durability

Another is durability - the D in ACID. It specifies that when you commit a transaction it must truly be saved to storage that will survive a fault like power loss or a sudden reboot.

Consistency

See the Wikipedia article on ACID.

Optimistic concurrency control

Rather than using locking and/or high isolation levels, it's common for ORMs like Hibernate, EclipseLink, etc. to use optimistic concurrency control (often called "optimistic locking") to overcome the limitations of weaker isolation levels while preserving performance.

A key feature of this approach is that it lets you span work across multiple transactions, which is a big plus with systems that have high user counts and may have long delays between interactions with any given user.
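
For example (a sketch with EF Core; the entity and column names are mine, not from the answers above), the ORM can carry a version column and turn a lost race into an exception the application handles:

using System.ComponentModel.DataAnnotations;
using System.Linq;
using Microsoft.EntityFrameworkCore;

// Illustrative entity: the RowVersion column is the concurrency token.
// With SQL Server, [Timestamp] maps it to a rowversion column that
// changes on every write.
public class Product
{
    public int Id { get; set; }
    public int Quantity { get; set; }

    [Timestamp]
    public byte[] RowVersion { get; set; }
}

public static class StockService
{
    // If another session updated the row between our read and our
    // SaveChanges, EF Core throws DbUpdateConcurrencyException because
    // the generated UPDATE's WHERE clause (which includes the original
    // RowVersion) matched zero rows. We reload and retry.
    public static void DecrementQuantity(DbContext db, int productId, int amount)
    {
        while (true)
        {
            var product = db.Set<Product>().Find(productId);
            product.Quantity -= amount;
            try
            {
                db.SaveChanges();
                return;
            }
            catch (DbUpdateConcurrencyException ex)
            {
                // We lost the race: refresh the entity from the database
                // and re-apply the change on the next loop iteration.
                ex.Entries.Single().Reload();
            }
        }
    }
}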

References

In addition to the in-text links, see the PostgreSQL documentation chapter on locking, isolation and concurrency. Even if you're using a different RDBMS you'll learn a lot from the concepts it explains.


[1] I'm ignoring the rarely implemented READ UNCOMMITTED isolation level here for simplicity; it permits dirty reads.

[2] As @meriton points out, the converse isn't necessarily true: phantom reads can occur at anything below SERIALIZABLE. One part of an in-progress transaction doesn't see certain changes (made by a not-yet-committed transaction), then a later part of the same transaction does see those changes once the other transaction commits.

[3] Well, IIRC SQLite does by virtue of locking the whole database when a write is attempted, but that's not what I'd call an ideal solution to concurrency issues.

Preventing Race Conditions Using Database Transactions (Laravel)

Laravel supports pessimistic locking; see the Laravel documentation on pessimistic locking for details.

Does a transaction stop all race condition problems in MySQL?


That is, if another request comes in almost precisely at the same time, and inserts another 20 records after step 2 above, but before step 4, will there be a race condition?

Yes, there will. Here is the sequence, as sketched below:

Records 21 to 40 will be locked by transaction 2.

Transaction 1 will be blocked and will wait until transaction 2 commits or rolls back.

If transaction 2 commits, transaction 1 will then update all 40 records (including those inserted by transaction 2).
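
A hypothetical sketch of that interleaving (using the MySqlConnector driver; the table and column names are invented for illustration):

using MySqlConnector;

class BlockingDemo
{
    // Session 2's uncommitted inserts hold row locks, so session 1's
    // UPDATE waits on them until session 2 commits.
    static void Run(string connString)
    {
        using var conn1 = new MySqlConnection(connString);
        using var conn2 = new MySqlConnection(connString);
        conn1.Open();
        conn2.Open();

        // Transaction 2: insert new rows but do not commit yet.
        using var tx2 = conn2.BeginTransaction();
        using (var insert = new MySqlCommand(
            "INSERT INTO records (batch) VALUES (2)", conn2, tx2))
        {
            insert.ExecuteNonQuery();
        }

        // Transaction 1: tries to update every row; it blocks on the
        // locks held by tx2's uncommitted inserts.
        using var tx1 = conn1.BeginTransaction();
        using var update = new MySqlCommand(
            "UPDATE records SET processed = 1", conn1, tx1);
        var pending = update.ExecuteNonQueryAsync();

        tx2.Commit();                      // releases tx2's locks...
        int rowsUpdated = pending.Result;  // ...so tx1 proceeds and updates
        tx1.Commit();                      // every row, including tx2's.
    }
}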

Database race conditions

This strategy works and is known as 'optimistic locking': you do your processing assuming it will succeed, and only at the end actually check whether it did.

Of course, you need a way to retry the transaction, and if the chance of failure is very high it can become inefficient. But in most cases it works just fine.
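
A minimal hand-rolled version of that end-of-work check (the schema is illustrative, not from the answer): carry a version column, bump it on every write, and treat "zero rows updated" as "someone else got there first".

using Npgsql;

// Sketch: the UPDATE only succeeds if the row still carries the version
// we read earlier; a concurrent writer will have bumped it.
static bool TrySave(NpgsqlConnection conn, int id, string newBody, int versionRead)
{
    using var cmd = new NpgsqlCommand(
        @"UPDATE documents
             SET body = @body, version = version + 1
           WHERE id = @id AND version = @version", conn);
    cmd.Parameters.AddWithValue("body", newBody);
    cmd.Parameters.AddWithValue("id", id);
    cmd.Parameters.AddWithValue("version", versionRead);

    // 0 rows affected means someone else updated the row first:
    // the caller should re-read and retry.
    return cmd.ExecuteNonQuery() == 1;
}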

Will DbContextTransaction.BeginTransaction prevent this race condition

The database prevents the race condition by throwing a concurrency violation error in this case. Following the suggestion by @sergey-l, I looked at how this is handled in the legacy code; it uses a simple retry mechanism, so I did the same:

private int ClaimNextPaymentNumber()
{
    bool failed;
    int paymentNumber = -1;

    do
    {
        failed = false;

        // Each attempt gets its own transaction. Disposing the
        // transaction without committing rolls it back.
        using (DbContextTransaction dbTransaction = db.Database.BeginTransaction())
        {
            try
            {
                paymentNumber = TryToClaimNextPaymentNumber();
                dbTransaction.Commit();

                // Success: clear the retry counter.
                concurrencyExceptionRetryCount = 0;
            }
            catch (DbUpdateConcurrencyException ex)
            {
                // Another request claimed the same number first. Let the
                // transaction roll back on dispose and try again.
                failed = true;
                ResetForClaimPaymentNumberRetry(ex);
            }
        }
    }
    while (failed);

    return paymentNumber;
}

How to deal with race condition in case when it's possible to have multiple servers (and each of them can have multiple threads)

After reading the majority of the comments, let's assume that you need a solution for a relational database.

The main thing that you need to guarantee is that the write operation at the end of your code only happens if the precondition is still valid (e.g. product.Quantity - requestedQuantity >= 0).

This precondition is evaluated on the application side, in memory. But the application only sees a snapshot of the data as of the moment the database read happened: _database.GetProduct(); that snapshot can become stale as soon as someone else updates the same data. If you want to avoid SERIALIZABLE as a transaction isolation level (which has performance implications anyway), the application should detect at the moment of writing whether the precondition is still valid; put differently, whether the data is unchanged since it read it.

This can be done by using offline concurrency patterns: either an optimistic offline lock or a pessimistic offline lock. Many ORM frameworks support these features by default; see the sketch below for the idea behind the optimistic variant.
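
One concrete way to do the check at write time (a sketch; the products table and column names are assumptions based on the question's product.Quantity example): fold the precondition into the UPDATE's WHERE clause and inspect the affected-row count. This works even at READ COMMITTED and across any number of servers and threads.

using Npgsql;

// Sketch: enforce "enough stock left" at the moment of writing rather
// than trusting the in-memory snapshot taken at read time.
static bool TryReserve(NpgsqlConnection conn, int productId, int requestedQuantity)
{
    using var cmd = new NpgsqlCommand(
        @"UPDATE products
             SET quantity = quantity - @req
           WHERE id = @id AND quantity >= @req", conn);
    cmd.Parameters.AddWithValue("req", requestedQuantity);
    cmd.Parameters.AddWithValue("id", productId);

    // 0 rows affected means the precondition no longer held when we
    // wrote; the caller can re-read and decide what to do.
    return cmd.ExecuteNonQuery() == 1;
}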

Dealing with race condition in transactional database table

The simplest solution is to lock some primary row (like the main "item" row) and use this as your distributed locking mechanism, assuming your database supports row-level locks, as most modern DBs do.
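
For example (a sketch; the items table is assumed), taking that row lock with SELECT ... FOR UPDATE inside a transaction makes every other worker that locks the same row wait its turn:

using Npgsql;

// Sketch: use the parent "item" row as the mutual-exclusion point.
// Any other transaction running the same SELECT ... FOR UPDATE on this
// id blocks here until we commit or roll back.
static void ProcessItem(NpgsqlConnection conn, int itemId)
{
    using var tx = conn.BeginTransaction();

    using (var lockCmd = new NpgsqlCommand(
        "SELECT id FROM items WHERE id = @id FOR UPDATE", conn, tx))
    {
        lockCmd.Parameters.AddWithValue("id", itemId);
        lockCmd.ExecuteScalar();   // acquires the row lock
    }

    // ... do the work that must not race with other sessions ...

    tx.Commit();                   // releases the lock
}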


