Custom SERIAL / autoincrement per group of values

Concept

There are several ways to approach this. The first one that comes to mind:

Assign the value for the category_id column inside a trigger executed for each row, overwriting the value supplied by the INSERT statement.

Action

For a simple test, I'm creating an article table holding categories and their IDs, which should be unique within each category. I have omitted constraint creation, as it's not relevant to the point.

create table article (
  id serial,
  category varchar,
  category_id int
);

Inserting some values for two distinct categories, using the generate_series() function to simulate an auto-increment already in place.

insert into article(category, category_id)
select 'stackoverflow', i from generate_series(1,1) i
union all
select 'stackexchange', i from generate_series(1,3) i;

Creating a trigger function that selects MAX(category_id) for the category we're inserting a row with, increments it by 1, and overwrites the incoming value right before the actual INSERT proceeds (a BEFORE INSERT trigger takes care of that).

CREATE OR REPLACE FUNCTION category_increment()
RETURNS trigger
LANGUAGE plpgsql
AS
$$
DECLARE
  v_category_inc int := 0;
BEGIN
  -- Next free number for the incoming row's category
  SELECT MAX(category_id) + 1
    INTO v_category_inc
  FROM article
  WHERE category = NEW.category;

  -- No rows for this category yet: start at 1
  IF v_category_inc IS NULL THEN
    NEW.category_id := 1;
  ELSE
    NEW.category_id := v_category_inc;
  END IF;

  RETURN NEW;
END;
$$;

Using the function as a trigger.

CREATE TRIGGER trg_category_increment
BEFORE INSERT ON article
FOR EACH ROW EXECUTE PROCEDURE category_increment();

Inserting some more values (now that the trigger is in place) for both existing and non-existing categories.

INSERT INTO article(category) VALUES 
('stackoverflow'),
('stackexchange'),
('nonexisting');

Query used to select data:

select category, category_id from article order by 1, 2;

Result for initial inserts:

category        category_id
stackexchange   1
stackexchange   2
stackexchange   3
stackoverflow   1

Result after final inserts:

category        category_id
nonexisting     1
stackexchange   1
stackexchange   2
stackexchange   3
stackexchange   4
stackoverflow   1
stackoverflow   2

How to add an auto-increment id according to a group in MySQL

Try this:

update yourtable t1
join (
    select tt.indexer,
           @rowno := if(@grp = `group`, @rowno + 1, 1) as id,
           @grp := `group`
    from (select * from yourtable order by `group`, indexer) tt
    cross join (select @rowno := 0, @grp := null) t
) t2 on t1.indexer = t2.indexer
set t1.id = t2.id;

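For reference, a minimal sketch of the table this update assumes; the names yourtable, indexer, group, name, and id come from the question and are otherwise hypothetical:

create table yourtable (
  indexer int primary key,   -- unique row identifier
  `group` varchar(50),       -- grouping column (reserved word, hence backticks)
  name    varchar(50),
  id      int                -- per-group sequence number filled in by the update
);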

Edited:

If you want to insert a new row, you have to do it like this:

insert into yourtable
select '$indexer', '$group', '$name', coalesce(max(id), 0) + 1
from yourtable
where `group` = '$group';

Auto-increment with GROUP BY

GROUP BY and variables don't necessarily work as expected. Just use a subquery:

SELECT (@i := @i + 1) AS Sno, c.*
FROM (SELECT c.ContractNo, c.SoftwareName, c.CompanyName, cu.InvoiceNo, cu.InvoiceDate,
             cu.InvAmount, cu.InvoicePF, MAX(cu.InvoicePT) AS InvoicePeriodTo, cu.InvoiceRD, cu.ISD
      FROM contract c JOIN
           contractuser cu
           ON c.ContractNo = cu.ContractNo
      GROUP BY cu.ContractNo
      ORDER BY c.SoftwareName ASC
     ) c CROSS JOIN
     (SELECT @i := 0) params;

Notes:

  • I also fixed the JOIN syntax. Never use commas in the FROM clause.
  • I also added reasonable table aliases -- abbreviations for the tables. a and b don't mean anything, so they make the query harder to follow.
  • I left the GROUP BY with only one key. It should really have all the unaggregated keys but this is allowed under some circumstances.
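
As an aside, on MySQL 8.0+ you can drop the user variables entirely and let the ROW_NUMBER() window function do the numbering; a trimmed sketch over the same (assumed) contract and contractuser tables:

SELECT ROW_NUMBER() OVER (ORDER BY c.SoftwareName) AS Sno,
       c.ContractNo, c.SoftwareName, c.CompanyName,
       MAX(cu.InvoicePT) AS InvoicePeriodTo
FROM contract c
JOIN contractuser cu ON c.ContractNo = cu.ContractNo
GROUP BY c.ContractNo, c.SoftwareName, c.CompanyName
ORDER BY c.SoftwareName;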

Serial numbers per group of rows for compound key

Don't. It has been tried many times and it's a pain.

Use a plain serial or IDENTITY column:

  • Auto increment table column
CREATE TABLE address_history (
  address_history_id serial PRIMARY KEY
, person_id          int NOT NULL REFERENCES people(id)
, created_at         timestamp NOT NULL DEFAULT current_timestamp
, previous_address   text
);

Use the window function row_number() to get serial numbers without gaps per person_id. You can create a VIEW to use as a drop-in replacement for your table in queries, to have those numbers ready:

CREATE VIEW address_history_nr AS
SELECT *, row_number() OVER (PARTITION BY person_id
                             ORDER BY address_history_id) AS adr_nr
FROM   address_history;
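
Querying the view works just like querying the table; for example, to list one person's history with its gap-less numbers (person_id = 1 is a placeholder):

SELECT adr_nr, previous_address, created_at
FROM   address_history_nr
WHERE  person_id = 1
ORDER  BY adr_nr;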

See:

  • Gap-less sequence where multiple transactions with multiple tables are involved

Or you might want to ORDER BY something else. Maybe created_at? Better created_at, address_history_id to break possible ties. Related answer:

  • Column with alternate serials

Also, the data type you are looking for is timestamp or timestamptz, not datetime in Postgres:

  • Ignoring time zones altogether in Rails and PostgreSQL

And you only need to store previous_address (or more details), not address, nor original_address. Both would be redundant in a sane data model.

Autoincrement MySQL table conditional on other column values

I think I understand what you want. "MOVE NUMBER" should be one higher every time you insert a new move for a certain player "ID" and "Level". Calling this "autoincrementing" is somewhat misleading, because MySQL already has an AUTO_INCREMENT, which is something different.

Let's first get the last "MOVE NUMBER" for "ID" = 1 and "Level" = 1:

SELECT MAX(`MOVE NUMBER`) FROM GameScore WHERE ID = 1 AND Level = 1;

For the last results in your question, this should return 2. However, it could also return NULL when no rows match, so we use:

SELECT IFNULL(MAX(`MOVE NUMBER`), 0) FROM GameScore WHERE ID = 1 AND Level = 1;

which returns 0 in that case.

Now all we need to do is insert a new "VALUE", for instance 463. This goes like this:

INSERT INTO GameScore (ID,
                       Level,
                       `MOVE NUMBER`,
                       VALUE)
SELECT 1,
       1,
       IFNULL(MAX(`MOVE NUMBER`), 0) + 1,
       463
FROM GameScore
WHERE ID = 1 AND
      Level = 1;

Please note that the queries are untested; they are given as examples only.
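
For completeness, a table definition the statements above would run against; the column names are from the question, everything else is assumed:

CREATE TABLE GameScore (
  ID            int NOT NULL,     -- player id
  Level         int NOT NULL,
  `MOVE NUMBER` int NOT NULL,     -- per (ID, Level) counter
  VALUE         int,
  PRIMARY KEY (ID, Level, `MOVE NUMBER`)
);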

How to add an ID with both a string and an auto-incremented number in SQL

This was exactly my case, and I solved it; see my answer to a similar question.

Let me try to answer it systematically as well.

create table my_seq(
  min_value    integer,
  max_value    integer,
  last_value   integer,
  increment_by tinyint,
  type_val     varchar(10)
) ENGINE = InnoDB;

Then seed it with some data:

insert into my_seq(min_value, max_value, last_value, increment_by, type_val)
values (1,99999999,1,1,'DS'), (1,999999999,1,1,'MS'), (1,999999999,1,1,'DOC');

Make sure autocommit is disabled.
Then do it like this in your application or database code:

-- very important: begin the transaction
begin;
select CONCAT(type_val, '-', last_value) from my_seq where type_val = ? FOR UPDATE;

Read the result in your application (or database procedure/trigger), then increment the counter:

update my_seq set last_value = last_value + 1 where type_val = ?;
commit;

Make sure there is an index on type_val.
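
For example, a minimal sketch (the index name is arbitrary; making it unique also guards against duplicate type_val rows):

alter table my_seq add unique index idx_type_val (type_val);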

I believe this should work.

PostgreSQL: Auto-increment based on multi-column unique constraint

It would be nice if PostgreSQL supported incrementing "on a secondary column in a multiple-column index" like MySQL's MyISAM tables

Yeah, but note that in doing so, MyISAM locks your entire table, which is what makes it safe to find the biggest value + 1 without worrying about concurrent transactions.

In Postgres, you can do this too, and without locking the whole table. An advisory lock and a trigger will be good enough:

CREATE TYPE animal_grp AS ENUM ('fish','mammal','bird');

CREATE TABLE animals (
  grp  animal_grp NOT NULL,
  id   int NOT NULL DEFAULT 0,
  name varchar NOT NULL,
  PRIMARY KEY (grp, id)
);

CREATE OR REPLACE FUNCTION animals_id_auto()
RETURNS trigger AS $$
DECLARE
  _rel_id constant int := 'animals'::regclass::int;
  _grp_id int;
BEGIN
  _grp_id = array_length(enum_range(NULL, NEW.grp), 1);

  -- Obtain an advisory lock on this table/group.
  PERFORM pg_advisory_lock(_rel_id, _grp_id);

  -- Now it is safe to compute the next id for this group.
  SELECT COALESCE(MAX(id) + 1, 1)
    INTO NEW.id
  FROM animals
  WHERE grp = NEW.grp;

  RETURN NEW;
END;
$$ LANGUAGE plpgsql STRICT;

CREATE TRIGGER animals_id_auto
BEFORE INSERT ON animals
FOR EACH ROW WHEN (NEW.id = 0)
EXECUTE PROCEDURE animals_id_auto();

CREATE OR REPLACE FUNCTION animals_id_auto_unlock()
RETURNS trigger AS $$
DECLARE
  _rel_id constant int := 'animals'::regclass::int;
  _grp_id int;
BEGIN
  _grp_id = array_length(enum_range(NULL, NEW.grp), 1);

  -- Release the lock.
  PERFORM pg_advisory_unlock(_rel_id, _grp_id);

  RETURN NEW;
END;
$$ LANGUAGE plpgsql STRICT;

CREATE TRIGGER animals_id_auto_unlock
AFTER INSERT ON animals
FOR EACH ROW
EXECUTE PROCEDURE animals_id_auto_unlock();

INSERT INTO animals (grp,name) VALUES
('mammal','dog'),('mammal','cat'),
('bird','penguin'),('fish','lax'),('mammal','whale'),
('bird','ostrich');

SELECT * FROM animals ORDER BY grp,id;

This yields:

  grp   | id |  name
--------+----+---------
 fish   |  1 | lax
 mammal |  1 | dog
 mammal |  2 | cat
 mammal |  3 | whale
 bird   |  1 | penguin
 bird   |  2 | ostrich
(6 rows)

There is one caveat. Advisory locks are held until released or until the session expires. If an error occurs during the transaction, the lock is kept around and you need to release it manually:

SELECT pg_advisory_unlock('animals'::regclass::int, i)
FROM generate_series(1, array_length(enum_range(NULL::animal_grp),1)) i;

In Postgres 9.1 or later, you can discard the unlock trigger and replace the pg_advisory_lock() call with pg_advisory_xact_lock(). That lock is held until the end of the transaction and released automatically.
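
A minimal sketch of the simplified trigger function under that approach, assuming the same animals schema as above:

CREATE OR REPLACE FUNCTION animals_id_auto()
RETURNS trigger AS $$
BEGIN
  -- The lock is scoped to the transaction, so no unlock trigger is needed.
  PERFORM pg_advisory_xact_lock('animals'::regclass::int,
                                array_length(enum_range(NULL, NEW.grp), 1));

  SELECT COALESCE(MAX(id) + 1, 1)
    INTO NEW.id
  FROM animals
  WHERE grp = NEW.grp;

  RETURN NEW;
END;
$$ LANGUAGE plpgsql STRICT;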


On a separate note, I'd stick to using a good old sequence. That will make things faster -- even if it's not as pretty-looking when you look at the data.

Lastly, a unique sequence per (year, month) combo could also be obtained by adding an extra table whose primary key is a serial and whose (year, month) value has a unique constraint on it; a sketch follows.
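
A minimal sketch of that helper table (the names are hypothetical):

CREATE TABLE ym_seq (
  ym_id serial PRIMARY KEY   -- one serial value assigned per (year, month) combo
, year  int NOT NULL
, month int NOT NULL
, UNIQUE (year, month)
);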

Sequential increment skipping numbers

serial columns, or IDENTITY in Postgres 10 or later, draw numbers from a SEQUENCE and gaps are to be expected. Their job is to make concurrent write access possible with unique numbers - not necessarily gap-less numbers.

If you don't actually have concurrent write access, there are simple ways to achieve (mostly) gap-less numbers. Like:

INSERT INTO tbl (info) 
SELECT 'xxx'
WHERE NOT EXISTS (SELECT FROM tbl WHERE info = 'xxx');

That doesn't burn a serial ID from the SEQUENCE because the duplicate insert is skipped. The INSERT might still fail for any other reason and burn a serial number; you could reset the SEQUENCE in such a case:

  • How to reset postgres' primary key sequence when it falls out of sync?

While inserting multiple rows in a single statement, you also have to rule out duplicates within the inserted set. Example code:

  • Return data from subselect used in INSERT in a Common Table Expression

But if you do have concurrent writes, none of the above works reliably, on principle. You'd better learn to accept gaps in the IDs. You can always run a query with row_number() OVER (ORDER BY id) to generate gap-less numbers after the fact; see the sketch after this list. However, the numbers are still arbitrary to a degree: smaller numbers were not necessarily committed earlier, with exceptions under concurrent write load. Related:

  • Primary Key Value Not Incrementing Correctly
  • Serial numbers per group of rows for compound key
  • Auto increment table column
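
The after-the-fact numbering mentioned above, as a minimal sketch against the tbl table from the earlier example:

SELECT id, info, row_number() OVER (ORDER BY id) AS gapless_nr
FROM   tbl;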

Or consider a UUID instead (data type uuid) and avoid the inherent problem of duplicates with random values in a huge key space. Not at all serial, though:

  • Generating a UUID in Postgres for Insert statement?
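
For example, gen_random_uuid() is built into Postgres 13 or later (older versions can load the pgcrypto extension); a minimal sketch with an assumed table name:

CREATE TABLE tbl_uuid (
  id   uuid PRIMARY KEY DEFAULT gen_random_uuid()
, info text
);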

