Why Do SQL ID Sequences Go Out of Sync (Specifically in Postgres)

Why do SQL id sequences go out of sync (specifically using Postgres)?

Your application is probably setting the primary key value explicitly for some new rows. When that happens, PostgreSQL never calls nextval() for the insert, so the sequence is not advanced.
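The failure mode is easy to reproduce in a few statements (the table and column names here are purely illustrative):

```sql
-- Assumes a table like: CREATE TABLE widgets (id serial PRIMARY KEY, name text);

-- This insert uses the column default, so nextval('widgets_id_seq') runs:
INSERT INTO widgets (name) VALUES ('a');          -- id = 1, sequence now at 1

-- This insert supplies the id explicitly, so the sequence is NOT touched:
INSERT INTO widgets (id, name) VALUES (2, 'b');   -- sequence still at 1

-- The next default insert gets nextval() = 2 and collides:
INSERT INTO widgets (name) VALUES ('c');          -- ERROR: duplicate key value
```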

Postgres Sequence out of sync

You are going to have to change your id generation method, because there is no Bucardo solution according to this comment in the FAQ.

Can Bucardo replicate DDL?

No, Bucardo relies on triggers, and Postgres does not yet provide DDL
triggers or triggers on its system tables.

Since Bucardo relies on triggers, it cannot "see" sequence changes, only the table data it replicates. Sequences are unusual objects: they do not support triggers, but they can be updated manually. You could run something like the statement below before each INSERT, though there may still be race conditions.

SELECT setval('distributors_did_seq', (SELECT MAX(did) FROM distributors));

See this question for more information.

I am not fully up on all the issues involved, but you could perform the maximum calculation manually and do the insert operation in a retry loop. I doubt it will work if you are actually doing inserts on both DBs and allowing Bucardo to replicate, but if you can guarantee that only one DB updates at a time, then you could try something like an UPSERT retry loop. See this post for more info. The "guts" of the loop might look like this:

INSERT INTO distributors (did, dname)
VALUES ((SELECT MAX(did) + 1 FROM distributors), 'XYZ Widgets');
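One way to sketch that retry loop is a PL/pgSQL function that catches the duplicate-key error and tries again. The function name is hypothetical, and this still assumes only one database is accepting inserts at a time:

```sql
-- Hypothetical helper: retry the max+1 insert until no duplicate-key error.
CREATE OR REPLACE FUNCTION insert_distributor(p_name text) RETURNS integer AS $$
DECLARE
    new_id integer;
BEGIN
    LOOP
        BEGIN
            INSERT INTO distributors (did, dname)
            VALUES ((SELECT COALESCE(MAX(did), 0) + 1 FROM distributors), p_name)
            RETURNING did INTO new_id;
            RETURN new_id;
        EXCEPTION WHEN unique_violation THEN
            NULL;  -- another session took that id; loop and try again
        END;
    END LOOP;
END;
$$ LANGUAGE plpgsql;
```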

How to reset Postgres' primary key sequence when it falls out of sync?

-- Log in to psql and run the following

-- What is the result?
SELECT MAX(id) FROM your_table;

-- Then run...
-- This should be higher than the last result.
SELECT nextval('your_table_id_seq');

-- If it's not higher, run the following to set the sequence just past your highest id.
-- (wise to run a quick pg_dump first...)

BEGIN;
-- protect against concurrent inserts while you update the counter
LOCK TABLE your_table IN EXCLUSIVE MODE;
-- Update the sequence
SELECT setval('your_table_id_seq', COALESCE((SELECT MAX(id)+1 FROM your_table), 1), false);
COMMIT;
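If you would rather not hard-code the sequence name, `pg_get_serial_sequence` looks it up from the table and column names; a variant of the same fix (with the same placeholder names) might look like:

```sql
BEGIN;
LOCK TABLE your_table IN EXCLUSIVE MODE;
SELECT setval(pg_get_serial_sequence('your_table', 'id'),
              COALESCE((SELECT MAX(id) + 1 FROM your_table), 1),
              false);
COMMIT;
```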

Source - Ruby Forum

Why does Postgres sequence item go up even if object creation fails?

Yes, this the expected behaviour. See docs:

nextval

Advance the sequence object to its next value and return that value. This is done atomically: even if multiple sessions execute nextval concurrently, each will safely receive a distinct sequence value.

If a sequence object has been created with default parameters, successive nextval calls will return successive values beginning with 1. Other behaviors can be obtained by using special parameters in the CREATE SEQUENCE command; see its command reference page for more information.

Important: To avoid blocking concurrent transactions that obtain numbers from the same sequence, a nextval operation is never rolled back; that is, once a value has been fetched it is considered used, even if the transaction that did the nextval later aborts. This means that aborted transactions might leave unused "holes" in the sequence of assigned values.

Note that nextval is normally set as the default value for an autoincrement/serial column.

Also, consider how slow and inefficient it would be if nextval could roll back: every client calling nextval would have to block until the transaction that fetched the previous value committed or aborted. Concurrent inserts would be impossible.
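You can observe the non-rollback behaviour in a single session (the sequence name is illustrative):

```sql
CREATE SEQUENCE demo_seq;

BEGIN;
SELECT nextval('demo_seq');  -- returns 1
ROLLBACK;

-- The rollback did not undo the fetch: the value 1 is gone for good.
SELECT nextval('demo_seq');  -- returns 2
```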

If so it seems like that could cause some serious issues.

Like what? The issue in your case was that someone manually specified a value for an autoincrement column. You should never do that unless you are a samurai. :)

PL/pgSQL restart generated sequence

The simplest solution is to use truncate table ... restart identity instead of delete:

truncate table employee restart identity;


Truncate table is recommended for other reasons too. For instance, it reclaims the space the table used immediately, and it is much faster. One difference is that ON DELETE triggers are not fired (though your table doesn't have any triggers).
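The difference in sequence behaviour is easy to see side by side (assuming a table like `employee(id serial PRIMARY KEY, name text)`):

```sql
-- delete leaves the sequence where it was:
DELETE FROM employee;
INSERT INTO employee (name) VALUES ('Ann');  -- id continues from the old maximum

-- truncate ... restart identity resets the sequence as well:
TRUNCATE TABLE employee RESTART IDENTITY;
INSERT INTO employee (name) VALUES ('Ann');  -- id starts again at 1
```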

Hibernate use of PostgreSQL sequence does not affect sequence table

I had the same problem. It is related to the id allocation strategies of Hibernate. When you choose GenerationType.SEQUENCE, Hibernate uses a HiLo strategy, which allocates IDs in blocks of 50 by default. So you can explicitly set the allocationSize value like this:

@Id
@SequenceGenerator(name = "pk_sequence", sequenceName = "entity_id_seq", allocationSize = 1)
@GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "pk_sequence")
@Column(name = "id", unique = true, nullable = false)
public int getId() {
    return this.id;
}

Though, I've also heard opinions that using the HiLo strategy with allocationSize=1 is not good practice. Some people recommend using GenerationType.AUTO instead when you have to deal with database-managed sequences.

Update: I did end up going with allocationSize=1, and things seem to work as I expect now. My application is such that I don't really need blocks of IDs anyway, so YMMV.


