In MS SQL Server, How to "Atomically" Increment a Column Being Used as a Counter

In MS SQL Server, is there a way to atomically increment a column being used as a counter?

Read Committed Snapshot only deals with locks on selecting data from tables.

In t1 and t2, however, you're modifying the data with UPDATE, which is a different scenario.

When you UPDATE the counter, you escalate to a write lock on the row, preventing the other update from occurring concurrently. t2 can still read, but its UPDATE will block until t1 is done, so t2 cannot commit before t1 (which is contrary to your timeline). Only one transaction at a time gets to update the counter, so both end up updating it correctly given the code presented. (tested)

  • counter = 0
  • t1 update counter (counter => 1)
  • t2 update counter (blocked)
  • t1 commit (counter = 1)
  • t2 unblocked (can now update counter) (counter => 2)
  • t2 commit
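
To make the timeline concrete, here is a two-session sketch (dbo.Counters and its columns are hypothetical names, not from the original question):

-- Session 1 (t1):
BEGIN TRAN;
UPDATE dbo.Counters SET counter = counter + 1 WHERE id = 1; -- row is now X-locked

-- Session 2 (t2), while t1 is still open:
UPDATE dbo.Counters SET counter = counter + 1 WHERE id = 1; -- blocks on t1's lock

-- Session 1 (t1):
COMMIT; -- t2's UPDATE now proceeds; counter ends at 2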

Read Committed just means you can only read committed values; it doesn't give you Repeatable Reads. Thus, if you read the counter, depend on its value, and intend to update it later, you might be running the transactions at the wrong isolation level.

You can either use REPEATABLE READ (or stronger), or, if you only sometimes update the counter, do it yourself with an optimistic locking technique: e.g. a timestamp/rowversion column on the counter table, or a conditional update.

DECLARE @CounterInitialValue INT

BEGIN TRAN

SELECT @CounterInitialValue = counter
FROM MyTable
WHERE MyID = 1234

-- do stuff with the counter value

UPDATE MyTable
SET counter = counter + 1
WHERE
    MyID = 1234
    AND counter = @CounterInitialValue -- prevents the update if counter changed

-- the value of counter must not have changed in this scenario,
-- so we roll back if the update affected no rows
IF( @@ROWCOUNT = 0 )
    ROLLBACK
ELSE
    COMMIT
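
The timestamp/rowversion variant of the same idea looks like this (a sketch; RowVer is a hypothetical rowversion column on MyTable):

DECLARE @CounterInitialValue INT
DECLARE @RowVer BINARY(8)

BEGIN TRAN

SELECT @CounterInitialValue = counter, @RowVer = RowVer
FROM MyTable
WHERE MyID = 1234

-- do stuff with the counter value

UPDATE MyTable
SET counter = counter + 1
WHERE
    MyID = 1234
    AND RowVer = @RowVer -- fails if anything on the row changed since we read it

IF( @@ROWCOUNT = 0 )
    ROLLBACK
ELSE
    COMMIT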

This devx article is informative, although it talks about the features while they were still in beta, so it may not be completely accurate.


Update: as Justice indicates, if t2 is a transaction nested inside t1, the semantics are different. Both updates still apply correctly (+2), because from t2's perspective inside t1, counter has already been updated once. The nested t2 has no access to counter's value from before t1 updated it.

  • counter = 0
  • t1 update counter (counter => 1)
  • t2 update counter (nested transaction) (counter => 2)
  • t2 commit
  • t1 commit (counter = 2)

With a nested transaction, if t1 issues a ROLLBACK after t2's COMMIT, counter returns to its original value, because the rollback also undoes t2's commit.
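
A sketch of that nested case (same hypothetical counter table as above):

-- counter = 0
BEGIN TRAN                                      -- t1
UPDATE dbo.Counters SET counter = counter + 1   -- counter => 1
BEGIN TRAN                                      -- t2, nested
UPDATE dbo.Counters SET counter = counter + 1   -- counter => 2
COMMIT                                          -- t2 "commits": only decrements @@TRANCOUNT
ROLLBACK                                        -- t1 rolls back everything; counter = 0 again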

Atomic increment of counter column using simple update

If you only ever use it as simply as this, you're fine.
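
For reference, a minimal sketch of that simple form (table and column names here are hypothetical); OUTPUT reads back the value this statement itself wrote:

UPDATE dbo.Counters
SET [Counter] = [Counter] + 1
OUTPUT inserted.[Counter]
WHERE CounterName = 'PageViews';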

The problems start when:

  • You add a condition - most conditions are fine, but avoid filtering on Counter itself; that's a great way to lose determinism
  • You update inside a transaction (careful about this - it's easy to be in a transaction outside the scope of the actual UPDATE statement, even more so if you use e.g. TransactionScope)
  • You combine inserts and updates (e.g. the usual "insert if not exists" pattern) - this is not a problem if you only have a single counter, but with multiple counters it's easy to fall into this trap; not too hard to solve, unless you also have deletes, and then it becomes a whole different league :)
  • Maybe if you rely on the value of Counter being a unique, auto-incrementing identifier. It obviously doesn't work if you separate the SELECT and the UPDATE (and no, an UPDATE based on a SELECT doesn't help - unlike a plain UPDATE, the SELECT isn't serialized with updates to the same row; that's where locking hints come in). I'm not sure whether using OUTPUT is safe.

And of course, things might be quite different if the transaction isolation level changes. This is actually a legitimate cause of errors, because SQL Server connection pooling doesn't reset the transaction isolation level, so if you ever change it, you must make sure it can't affect any other SQL you execute on a SqlConnection taken from the pool.
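
One defensive pattern (a sketch, not the only fix) is to set the isolation level explicitly at the top of every batch that depends on it, rather than trusting whatever the pooled connection last used:

SET TRANSACTION ISOLATION LEVEL READ COMMITTED; -- don't inherit the pooled connection's previous level
BEGIN TRAN;
-- ... counter logic ...
COMMIT;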

SQL atomic increment and locking strategies - is this safe?

An UPDATE query places an update (U) lock on the pages or records it reads.

When a decision is made whether to update the record, the lock is either released or promoted to an exclusive (X) lock.

This means that in this scenario:

s1: read counter for image_id=15, get 0, store in temp1
s2: read counter for image_id=15, get 0, store in temp2
s1: write counter for image_id=15 to (temp1+1), which is 1
s2: write counter for image_id=15 to (temp2+1), which is also 1

s2 will wait until s1 decides whether or not to write the counter, so this scenario is in fact impossible.

Instead, it will play out like this:

s1: place an update lock on image_id = 15
s2: try to place an update lock on image_id = 15: QUEUED
s1: read counter for image_id=15, get 0, store in temp1
s1: promote the update lock to the exclusive lock
s1: write counter for image_id=15 to (temp1+1), which is 1
s1: commit: LOCK RELEASED
s2: place an update lock on image_id = 15
s2: read counter for image_id=15, get 1, store in temp2
s2: write counter for image_id=15 to (temp2+1), which is 2

Note that in InnoDB (MySQL), DML queries do not release the update locks on the records they read.

This means that, in the case of a full table scan, records that were read but not updated remain locked until the end of the transaction and cannot be updated by another transaction.

SQL Server : add row if doesn't exist, increment value of one column, atomic

Assuming you are on SQL Server, to make a single atomic statement you could use MERGE

MERGE YourTable WITH (HOLDLOCK) AS target -- HOLDLOCK guards against concurrent MERGEs racing between the match check and the insert
USING (SELECT @ActionCode, @UserID) AS source (ActionCode, UserID)
ON (target.ActionCode = source.ActionCode AND target.UserID = source.UserID)
WHEN MATCHED THEN
    UPDATE SET [Count] = target.[Count] + 1
WHEN NOT MATCHED THEN
    INSERT (ActionCode, UserID, [Count])
    VALUES (source.ActionCode, source.UserID, 1)
OUTPUT INSERTED.* INTO #MyTempTable;

Update: use OUTPUT to select the values if necessary; the code above has been updated accordingly.
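
Note that #MyTempTable has to exist before the MERGE runs. A minimal sketch, assuming YourTable has exactly these three columns (the types are assumptions):

CREATE TABLE #MyTempTable (ActionCode INT, UserID INT, [Count] INT);

-- ... run the MERGE above ...

SELECT ActionCode, UserID, [Count] FROM #MyTempTable; -- the row as inserted/updated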

SQL Server - Auto-incrementation that allows UPDATE statements

A solution to this issue, from "Inside Microsoft SQL Server 2008: T-SQL Querying":

CREATE TABLE dbo.Sequence(
    val int IDENTITY (10000, 1) /* Seed this at whatever your current max value is */
)

GO

CREATE PROC dbo.GetSequence
    @val AS int OUTPUT
AS
BEGIN TRAN
    SAVE TRAN S1
    INSERT INTO dbo.Sequence DEFAULT VALUES
    SET @val = SCOPE_IDENTITY()
    ROLLBACK TRAN S1 /* Rolls back just as far as the savepoint, to prevent the
                        sequence table filling up. The id allocated won't be reused. */
COMMIT TRAN
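
Calling it looks like this:

DECLARE @NextID INT;
EXEC dbo.GetSequence @val = @NextID OUTPUT;
SELECT @NextID AS AllocatedValue; -- the newly allocated value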

Or another alternative from the same book, which makes it easier to allocate ranges. (You would need to consider whether to call this from inside or outside your transaction - inside, it would block other concurrent transactions until the first one commits.)

CREATE TABLE dbo.Sequence2(
    val int
)

GO

INSERT INTO dbo.Sequence2 VALUES(10000);

GO

CREATE PROC dbo.GetSequence2
    @val AS int OUTPUT,
    @n AS int = 1
AS
UPDATE dbo.Sequence2
SET @val = val = val + @n; -- compound assignment: bumps val by @n and captures the new value atomically

SET @val = @val - @n + 1;  -- @val is now the first value of the allocated range
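
For example, to allocate a block of five values (the procedure returns the first value of the range):

DECLARE @FirstOfRange INT;
EXEC dbo.GetSequence2 @val = @FirstOfRange OUTPUT, @n = 5;
-- @FirstOfRange through @FirstOfRange + 4 now belong to this caller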

Calling a stored procedure in parallel to increase a counter and ensure atomic increments

Use OUTPUT

DECLARE @temp TABLE (MaxReached BIT NOT NULL);

UPDATE SomeCounters
SET CounterValue = (CounterValue + @AddValue),
    MaxReached = CASE WHEN MaxValue = (CounterValue + @AddValue) THEN 1 ELSE 0 END
OUTPUT INSERTED.MaxReached INTO @temp
WHERE CounterId = @CounterId
    AND MaxReached = 0;

The update is atomic and you can then select the value out of the @temp table and do whatever you want with it. This way you'll be able to capture the exact update that caused MaxReached to be set to true (1).
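
For example, a minimal follow-up (the handling shown is hypothetical; read @temp however you need):

IF EXISTS (SELECT 1 FROM @temp WHERE MaxReached = 1)
    PRINT 'This increment hit the maximum';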

Is a single SQL Server statement atomic and consistent?

I've been operating under the assumption that a single statement in SQL Server is consistent

That assumption is wrong. The following two transactions have identical locking semantics:

STATEMENT

BEGIN TRAN; STATEMENT; COMMIT

No difference at all. Single statements and auto-commits do not change anything.

So merging all logic into one statement does not help (if it does, it was by accident because the plan changed).

Let's fix the problem at hand: SERIALIZABLE will fix the inconsistency you are seeing, because it guarantees that your transactions behave as if they executed serially. Equivalently, they behave as if they executed instantaneously.

You will be getting deadlocks. If you are ok with a retry loop, you're done at this point.
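
For illustration, a sketch of such a retry loop in T-SQL (a deadlock victim raises error 1205; the transaction body is a placeholder, and THROW requires SQL Server 2012+):

DECLARE @retries INT = 3;
WHILE @retries > 0
BEGIN
    BEGIN TRY
        SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
        BEGIN TRAN;
        -- ... the gift-allocation logic from below ...
        COMMIT;
        BREAK; -- success, stop retrying
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0 ROLLBACK;
        IF ERROR_NUMBER() <> 1205 THROW; -- only retry deadlock victims
        SET @retries = @retries - 1;
    END CATCH
END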

If you want to invest more time, apply locking hints to force exclusive access to the relevant data:

UPDATE Gifts  -- U-locked anyway
SET GivenAway = 1
WHERE GiftID = (
    SELECT TOP 1 GiftID
    FROM Gifts WITH (UPDLOCK, HOLDLOCK) -- this normally just S-locks
    WHERE GivenAway = 0
    AND (SELECT COUNT(*) FROM Gifts g2 WITH (UPDLOCK, HOLDLOCK) WHERE g2.GivenAway = 1) < 5
    ORDER BY GiftValue DESC
)

You will now see reduced concurrency. That might be totally fine depending on your load.

The very nature of your problem makes achieving concurrency hard. If you require a solution for that we'd need to apply more invasive techniques.

You can simplify the UPDATE a bit:

WITH g AS (
    SELECT TOP 1 Gifts.*
    FROM Gifts WITH (UPDLOCK, HOLDLOCK)
    WHERE GivenAway = 0
    AND (SELECT COUNT(*) FROM Gifts g2 WITH (UPDLOCK, HOLDLOCK) WHERE g2.GivenAway = 1) < 5
    ORDER BY GiftValue DESC
)
UPDATE g -- U-locked anyway
SET GivenAway = 1

This gets rid of one unnecessary join.


