Atomically Set Serial Value When Committing Transaction

Postgres 9.5 introduced a new feature related to this problem: commit timestamps.

You just need to activate track_commit_timestamp in postgresql.conf (and restart!) to start tracking commit timestamps. Then you can query:

SELECT * FROM tbl
WHERE pg_xact_commit_timestamp(xmin) >= '2015-11-26 18:00:00+01';
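
If you prefer to enable the setting from SQL rather than editing postgresql.conf by hand, ALTER SYSTEM (available since Postgres 9.4) can write it to postgresql.auto.conf; a sketch, with the restart still required afterwards:

ALTER SYSTEM SET track_commit_timestamp = on;
-- then restart the server, e.g. with pg_ctl restart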

Read the chapter "Commit timestamp tracking" in the Postgres Wiki.

Related utility functions in the manual.

The function's volatility is only VOLATILE because transaction IDs (xid) can wrap around by definition. So you cannot create a functional index on it.

You could fake IMMUTABLE volatility in a function wrapper for applications within a limited time frame, but you need to be aware of the implications. Related cases with more explanation:

  • Does PostgreSQL support "accent insensitive" collations?
  • How do IMMUTABLE, STABLE and VOLATILE keywords effect behaviour of function?

For many use cases (like yours?) that are only interested in the sequence of commits (and not absolute time), it might be more efficient to work with xmin cast to bigint (xmin::text::bigint) directly instead of commit timestamps. (The detour through text is needed because xid is an unsigned 32-bit integer internally, and the upper half of its range does not fit into a signed 4-byte integer.) Again, be aware of limitations due to possible xid wraparound.
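
For example, to process rows in commit order (using tbl from above):

SELECT * FROM tbl
ORDER BY xmin::text::bigint;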

Commit timestamps are not preserved indefinitely, for the same reason. For small to medium databases, xid wraparound hardly ever happens, but it will eventually if the cluster is live long enough. Read the chapter "Preventing Transaction ID Wraparound Failures" in the manual for details.

Guarantee monotonicity of PostgreSQL serial column values by commit order

I know you wanted automatic locking; I would advise against that. You might be able to use stored procedures or triggers for it, but what else is a stored procedure than code? Please also see my comment.

My solution follows below; first, a clarification of the requirement. You wrote:

"My team needs a serial column to increase monotonically with each commit."

Does that mean

  • that the insertion of a value less than the maximum value already stored is not allowed?
  • that gaps in the values of that column are not allowed?

I suppose you use a sequence for the creation of this value. Then, immediately before the commit, you should acquire a specific advisory lock (see Section 13.3.4) and perform your insertion, using the sequence either implicitly through your schema or explicitly in the INSERT itself. No commit of another transaction trying to acquire the same lock can get between the locking and the commit, so the insertions must be sequential. Doing the locking and incrementing at the end of the transaction keeps the locked interval short and helps prevent deadlocks. The lock is released together with the commit, and the next transaction may then acquire it and get the next value of the sequence.
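
A sketch of those steps, assuming a hypothetical table tbl with a serial column (the advisory lock key 42 is arbitrary but must be the same for all writers; pg_advisory_xact_lock() holds the lock until the end of the transaction, so no explicit unlock is needed):

BEGIN;
-- ... all other work of the transaction ...
SELECT pg_advisory_xact_lock(42);       -- serialize with the other committers
INSERT INTO tbl (payload) VALUES ('x'); -- the serial column draws its value now
COMMIT;                                 -- releases the advisory lock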

Can an autoincrement ID ever change from the mid-transaction value upon commit?

The implementation of generated id values usually involves incrementing a counter value in a short atomic operation. This value is then used by the requesting transaction, and even if that transaction rolls back, the reserved value is never given back to the pool of free values. In that light, I don't think the situation described is very likely. Also, in PL/SQL-style programs you really do need the generated value to be right in order to insert dependent rows into child tables.

As for the people who want time-ordered, gapless id values: the sole purpose of an autoincrement/surrogate key is to create an artificial identifier for a row. It should have nothing to do with determining the order in which rows were created. There are far better ways to do that, for example a creation timestamp.
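
A minimal sketch of that alternative (Postgres syntax; table and column names are illustrative):

CREATE TABLE items (
    id         serial PRIMARY KEY,                -- identification only
    created_at timestamptz NOT NULL DEFAULT now() -- use this for ordering
);

SELECT * FROM items ORDER BY created_at;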

Atomic UPDATE to increment integer in Postgresql

Yes, that is safe.
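
For context, the kind of statement in question is a single-statement increment along these lines (table and column names are assumptions):

UPDATE counters SET value = value + 1 WHERE name = 'hits';

Because the read and the write happen in one statement, no other session can slip in between them.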

While one such statement is running, all other such statements are blocked on a lock. The lock will be released when the transaction completes, so keep your transactions short. On the other hand, you need to keep your transaction open until all your work is done, otherwise you might end up with gaps in your sequence.

That is why it is usually considered a bad idea to ask for gapless sequences.

Atomically fetch and increase sequence value in MySql

Have a look at the InnoDB table type and SELECT ... FOR UPDATE. An example similar to what you describe is in the MySQL manual: http://dev.mysql.com/doc/refman/5.0/en/innodb-locking-reads.html
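
The pattern on that page looks roughly like this (table and column names follow the manual's child_codes example; run it inside a transaction so the row lock is held across both statements):

BEGIN;
SELECT counter_field FROM child_codes FOR UPDATE;
UPDATE child_codes SET counter_field = counter_field + 1;
COMMIT;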

how to atomically claim a row or resource using UPDATE in mysql

UPDATE cars SET user = 'bob' WHERE id = 123 AND user IS NULL;

The update query returns the number of changed rows. If it has not updated any, you know the car has already been claimed by someone else.

Alternatively, you can use SELECT ... FOR UPDATE.
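
That variant would look something like this (a sketch: it locks the row first, then claims it only if it is still free):

START TRANSACTION;
SELECT user FROM cars WHERE id = 123 FOR UPDATE;
-- if the selected user is NULL, the car is free to claim:
UPDATE cars SET user = 'bob' WHERE id = 123;
COMMIT;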

SQL atomic increment and locking strategies - is this safe?

An UPDATE query places an update lock on the pages or records it reads.

Once a decision is made whether to update the record, the lock is either released or promoted to an exclusive lock.

This means that in this scenario:

s1: read counter for image_id=15, get 0, store in temp1
s2: read counter for image_id=15, get 0, store in temp2
s1: write counter for image_id=15 to (temp1+1), which is 1
s2: write counter for image_id=15 to (temp2+1), which is also 1

s2 would have to wait until s1 decides whether to write the counter or not, so this scenario is in fact impossible.

It will be this:

s1: place an update lock on image_id = 15
s2: try to place an update lock on image_id = 15: QUEUED
s1: read counter for image_id=15, get 0, store in temp1
s1: promote the update lock to the exclusive lock
s1: write counter for image_id=15 to (temp1+1), which is 1
s1: commit: LOCK RELEASED
s2: place an update lock on image_id = 15
s2: read counter for image_id=15, get 1, store in temp2
s2: write counter for image_id=15 to (temp2+1), which is 2

Note that in InnoDB, DML queries do not release the update locks on the records they read.

This means that in the case of a full table scan, the records that were read but not updated will remain locked until the end of the transaction and cannot be updated from another transaction.
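
For example, with no index on image_id, an increment like this (table name is an assumption):

UPDATE images SET counter = counter + 1 WHERE image_id = 15;

has to scan the whole table, and every row it examines stays locked until the transaction ends, not just the one row it changes.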

Making a sequence of SQL statements atomic

You need to wrap all your logic in a BEGIN TRANSACTION / COMMIT TRANSACTION block. The update/insert does not really care whether the data came from a join, unless the join somehow created a situation where the transaction could not be rolled back, and it would have to get really messy to create a situation like that. If steps 3 and 4 involve complex logic, you may be forced into a cursor or .NET (but you can do some pretty complex logic with regular queries).
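
A minimal sketch of that wrapper (T-SQL; the table names and the logic inside are placeholder assumptions):

BEGIN TRANSACTION;

    -- hypothetical update fed by a join
    UPDATE i
    SET    i.quantity = i.quantity - o.qty
    FROM   inventory AS i
    JOIN   orders    AS o ON o.item_id = i.item_id
    WHERE  o.status = 'new';

    -- hypothetical dependent insert
    INSERT INTO order_log (order_id)
    SELECT order_id FROM orders WHERE status = 'new';

COMMIT TRANSACTION;

In practice you would pair this with SET XACT_ABORT ON or a TRY/CATCH block so that a failure in any statement rolls the whole block back.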


