How to Increment Value in Postgres Update Statement on JSON Key

Postgresql: How to increase by 1 on a jsonb field

This is ugly but works. I'm just figuring this out now by reading the documentation, so there may very well be a better way of doing this.

Let's start with a simple table:

create table table1 (data jsonb);

Insert some JSON:

insert into table1 (data) values ('{"name": "example", "count": 0}');

Now, we want to update the value of the count key in the data column. Assuming you have PostgreSQL 9.5 or later, you can use the || concatenation operator to merge two jsonb objects, like this:

sandbox=# select data || '{"count": 1}' as data from table1;
data
---------------------------------
{"name": "example", "count": 1}

So we know how to update a JSON key. But in the above example I'm using a static value in the replacement, while what we actually want is "one more than the current value of count". We can use the string concatenation operator to build the necessary JSON:

sandbox=# select '{"count": ' || ((data->>'count')::int + 1) || '}' as count from table1;
count
--------------
{"count": 1}

Putting that together:

sandbox=# update table1 set data = data || ('{"count": ' || ((data->>'count')::int + 1) || '}')::jsonb ;
UPDATE 1

Which gets us:

sandbox=# select * from table1;
data
---------------------------------
{"name": "example", "count": 1}

How do I modify fields inside the new PostgreSQL JSON datatype?

Update: With PostgreSQL 9.5, there are some jsonb manipulation functions within PostgreSQL itself (but none for json; casts are required to manipulate json values).

Merging 2 (or more) JSON objects (or concatenating arrays):

SELECT jsonb '{"a":1}' || jsonb '{"b":2}', -- will yield jsonb '{"a":1,"b":2}'
jsonb '["a",1]' || jsonb '["b",2]' -- will yield jsonb '["a",1,"b",2]'

So, setting a simple key can be done using:

SELECT jsonb '{"a":1}' || jsonb_build_object('<key>', '<value>')

Where <key> should be a string, and <value> can be of whatever type to_jsonb() accepts.
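For example, setting the key b to the number 2 (the key and value here are just illustrative):

SELECT jsonb '{"a":1}' || jsonb_build_object('b', 2)
-- will yield jsonb '{"a":1,"b":2}'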

For setting a value deep in a JSON hierarchy, the jsonb_set() function can be used:

SELECT jsonb_set('{"a":[null,{"b":[]}]}', '{a,1,b,0}', jsonb '{"c":3}')
-- will yield jsonb '{"a":[null,{"b":[{"c":3}]}]}'

Full parameter list of jsonb_set():

jsonb_set(target         jsonb,
          path           text[],
          new_value      jsonb,
          create_missing boolean default true)

path can contain JSON array indexes too, and negative integers there count from the end of JSON arrays. However, a positive JSON array index beyond the end of the array will append the element to the end:

SELECT jsonb_set('{"a":[null,{"b":[1,2]}]}', '{a,1,b,1000}', jsonb '3', true)
-- will yield jsonb '{"a":[null,{"b":[1,2,3]}]}'

For inserting into a JSON array (while preserving all of the original values), the jsonb_insert() function can be used (9.6+; it is the only function in this section that requires 9.6):

SELECT jsonb_insert('{"a":[null,{"b":[1]}]}', '{a,1,b,0}', jsonb '2')
-- will yield jsonb '{"a":[null,{"b":[2,1]}]}', and
SELECT jsonb_insert('{"a":[null,{"b":[1]}]}', '{a,1,b,0}', jsonb '2', true)
-- will yield jsonb '{"a":[null,{"b":[1,2]}]}'

Full parameter list of jsonb_insert():

jsonb_insert(target       jsonb,
             path         text[],
             new_value    jsonb,
             insert_after boolean default false)

Again, negative integers that appear in path count from the end of JSON arrays.

So, for example, appending to the end of a JSON array can be done with:

SELECT jsonb_insert('{"a":[null,{"b":[1,2]}]}', '{a,1,b,-1}', jsonb '3', true)
-- will yield jsonb '{"a":[null,{"b":[1,2,3]}]}', and

However, this function works slightly differently from jsonb_set() when the path in target is a JSON object's key. In that case, it only adds a new key-value pair to the JSON object when the key is not already used. If it is used, it raises an error:

SELECT jsonb_insert('{"a":[null,{"b":[1]}]}', '{a,1,c}', jsonb '[2]')
-- will yield jsonb '{"a":[null,{"b":[1],"c":[2]}]}', but
SELECT jsonb_insert('{"a":[null,{"b":[1]}]}', '{a,1,b}', jsonb '[2]')
-- will raise SQLSTATE 22023 (invalid_parameter_value): cannot replace existing key

Deleting a key (or an index) from a JSON object (or, from an array) can be done with the - operator:

SELECT jsonb '{"a":1,"b":2}' - 'a', -- will yield jsonb '{"b":2}'
jsonb '["a",1,"b",2]' - 1 -- will yield jsonb '["a","b",2]'

Deleting from deep in a JSON hierarchy can be done with the #- operator:

SELECT '{"a":[null,{"b":[3.14]}]}' #- '{a,1,b,0}'
-- will yield jsonb '{"a":[null,{"b":[]}]}'

For 9.4, you can use a modified version of the original answer (below), but instead of aggregating a JSON string, you can aggregate into a json object directly with json_object_agg().
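A minimal sketch of that 9.4 variant (the same signature as json_object_set_key below, with the body rewritten around json_object_agg(); this exact wording is an assumption, not from the original answer):

CREATE OR REPLACE FUNCTION "json_object_set_key"(
  "json"          json,
  "key_to_set"    TEXT,
  "value_to_set"  anyelement
)
  RETURNS json
  LANGUAGE sql
  IMMUTABLE
  STRICT
AS $function$
SELECT json_object_agg("key", "value")
  FROM (SELECT *
          FROM json_each("json")
         WHERE "key" <> "key_to_set"
         UNION ALL
        SELECT "key_to_set", to_json("value_to_set")) AS "fields"
$function$;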

Original answer: It is possible (without plpython or plv8) in pure SQL too (but it needs 9.3+; it will not work with 9.2):

CREATE OR REPLACE FUNCTION "json_object_set_key"(
"json" json,
"key_to_set" TEXT,
"value_to_set" anyelement
)
RETURNS json
LANGUAGE sql
IMMUTABLE
STRICT
AS $function$
SELECT concat('{', string_agg(to_json("key") || ':' || "value", ','), '}')::json
FROM (SELECT *
FROM json_each("json")
WHERE "key" <> "key_to_set"
UNION ALL
SELECT "key_to_set", to_json("value_to_set")) AS "fields"
$function$;
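A quick usage example (the key and value are illustrative; exact key order and whitespace of the output may vary):

SELECT json_object_set_key(json '{"name":"example","count":0}', 'count', 1);
-- yields json '{"name":"example","count":1}'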


Edit:

A version, which sets multiple keys & values:

CREATE OR REPLACE FUNCTION "json_object_set_keys"(
"json" json,
"keys_to_set" TEXT[],
"values_to_set" anyarray
)
RETURNS json
LANGUAGE sql
IMMUTABLE
STRICT
AS $function$
SELECT concat('{', string_agg(to_json("key") || ':' || "value", ','), '}')::json
FROM (SELECT *
FROM json_each("json")
WHERE "key" <> ALL ("keys_to_set")
UNION ALL
SELECT DISTINCT ON ("keys_to_set"["index"])
"keys_to_set"["index"],
CASE
WHEN "values_to_set"["index"] IS NULL THEN 'null'::json
ELSE to_json("values_to_set"["index"])
END
FROM generate_subscripts("keys_to_set", 1) AS "keys"("index")
JOIN generate_subscripts("values_to_set", 1) AS "values"("index")
USING ("index")) AS "fields"
$function$;
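A quick usage example (illustrative keys and values; key order in the output may vary):

SELECT json_object_set_keys(json '{"a":1}', ARRAY['b','c'], ARRAY[2,3]);
-- yields json '{"a":1,"b":2,"c":3}'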

Edit 2: As @ErwinBrandstetter noted, the functions above work like a so-called UPSERT (they update a field if it exists, and insert it if it does not). Here is a variant which only UPDATEs:

CREATE OR REPLACE FUNCTION "json_object_update_key"(
"json" json,
"key_to_set" TEXT,
"value_to_set" anyelement
)
RETURNS json
LANGUAGE sql
IMMUTABLE
STRICT
AS $function$
SELECT CASE
WHEN ("json" -> "key_to_set") IS NULL THEN "json"
ELSE (SELECT concat('{', string_agg(to_json("key") || ':' || "value", ','), '}')
FROM (SELECT *
FROM json_each("json")
WHERE "key" <> "key_to_set"
UNION ALL
SELECT "key_to_set", to_json("value_to_set")) AS "fields")::json
END
$function$;
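For example, a key that does not exist is left alone (illustrative values):

SELECT json_object_update_key(json '{"a":1}', 'b', 2);
-- yields json '{"a":1}' unchanged, since "b" does not exist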

Edit 3: Here is a recursive variant, which can set (UPSERT) a leaf value (and uses the first function from this answer), located at a key-path (where keys can only refer to inner objects; inner arrays are not supported):

CREATE OR REPLACE FUNCTION "json_object_set_path"(
"json" json,
"key_path" TEXT[],
"value_to_set" anyelement
)
RETURNS json
LANGUAGE sql
IMMUTABLE
STRICT
AS $function$
SELECT CASE COALESCE(array_length("key_path", 1), 0)
WHEN 0 THEN to_json("value_to_set")
WHEN 1 THEN "json_object_set_key"("json", "key_path"[l], "value_to_set")
ELSE "json_object_set_key"(
"json",
"key_path"[l],
"json_object_set_path"(
COALESCE(NULLIF(("json" -> "key_path"[l])::text, 'null'), '{}')::json,
"key_path"[l+1:u],
"value_to_set"
)
)
END
FROM array_lower("key_path", 1) l,
array_upper("key_path", 1) u
$function$;
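A small usage example (the path and value are illustrative; exact whitespace of the output may vary):

SELECT json_object_set_path(json '{"a":{"b":1}}', '{a,c}', 2);
-- yields json '{"a":{"b":1,"c":2}}'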

Updated: Added a function for replacing an existing json field's key with another given key. It can come in handy for updating data types in migrations or other scenarios, like amending a data structure.

CREATE OR REPLACE FUNCTION json_object_replace_key(
  json_value   json,
  existing_key text,
  desired_key  text)
RETURNS json AS
$BODY$
SELECT COALESCE(
  (
    SELECT ('{' || string_agg(to_json(key) || ':' || value, ',') || '}')
      FROM (
        SELECT *
          FROM json_each(json_value)
         WHERE key <> existing_key
         UNION ALL
        SELECT desired_key, json_value -> existing_key
      ) AS "fields"
    -- WHERE value IS NOT NULL (actually not required, as string_agg "discards" entries whose value is null)
  ),
  '{}'
)::json
$BODY$
LANGUAGE sql IMMUTABLE STRICT
COST 100;
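A quick usage example (illustrative keys; exact key order of the output may vary):

SELECT json_object_replace_key(json '{"count":1,"name":"example"}', 'name', 'title');
-- yields json '{"count":1,"title":"example"}'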

Update: functions are compacted now.

Writing the value of certain keys of a json file to a postgresql table with sqlalchemy

First of all, your Person model has its score field set to db.Integer, yet in the data you're getting, score is a float. This will cause a data mismatch error.

Now, for the json part.

You import the load function from the json module so you can convert JSON objects to Python dicts.

from json import load

You open your file, preferably inside a context manager, so the file gets automatically closed once you're done with it.

with open('result.json') as my_file:
    dicts = load(my_file)

Now dicts is a list of dict objects you can operate on.
For each one, you create a new instance of your Person db.Model and assign each value to the appropriate field.

for dictionary in dicts:  # Remember, you have a list of dicts.
    person = Person(app_id=dictionary['appId'],
                    title=dictionary['title'], score=dictionary['score'])
    # Remember to set the score field to the appropriate datatype,
    # i.e. db.Float
    db.session.add(person)   # Add your new person instance to the session.
    db.session.commit()      # Persists the person to your database.

Update:
Based on what I understood from your comment, you can set up your db.Model as follows:

class Person(db.Model):
    __tablename__ = 'test'
    id = db.Column(db.Integer, autoincrement=True, primary_key=True)
    app_id = db.Column(db.String(10), index=True)
    title = db.Column(db.String(255))
    score = db.Column(db.Float)  # You're getting float values.

Now, you have the id set by your database and the app_id from the json file. And you always have the option to search by the former or the latter.

Update/Increment a single column on multiple rows at once

What you are telling the DB to do is update EVERY record in the table by incrementing its order field by one, so EVERY record will always end up with the same value. I believe you are trying to set the latest record to 1 and the oldest record to (number of records + 1).

So maybe you can try this:

set @count = (SELECT COUNT(ID) from favorits);
UPDATE favorits SET `order` = @count - ID + 1;

Now this is assuming that no records are deleted. In that case you would have to adjust for this and set the latest record to 1 and then increment the order value for each previous record sorted by ID.

So this would be the approach:

set @i=0;
set @Count=(SELECT COUNT(*) from favorits);

UPDATE favorits SET `order` = @Count-(@i:=@i+1)+1;

atomically increment in PostgreSQL hstore like $inc in MongoDB

MongoDB needs a $inc operator because:

  1. Atomic operations in MongoDB are difficult without specific low-level support.
  2. The interface isn't rich enough to express c = c + 1 without a special operator.

You just need to express c = c + 1 with hstores. This task is complicated a little because hstore uses strings for both keys and values, which leaves you with a mess of casting. I think you're stuck with something nasty like this:

update t
set h = h || hstore('my_counter', ((h -> 'my_counter')::integer + 1)::text)
where mykey = myval

The (h -> 'my_counter')::integer + 1 does the increment by extracting the value (h -> 'my_counter'), casting it to an integer, and adding one to it. Then you build a single element hstore with hstore('my_counter', ...) and an explicit ::text cast on the value to make sure PostgreSQL knows which function you want. Finally, you concatenate the new key-value onto the original hstore with h || hstore(...) to replace the old value.

If you don't want to use that somewhat nasty mess all the time, then you could wrap it in a simple function and say:

update t
set h = hstore_inc(h, 'my_counter', 1)
where ...

to hide the nastiness.
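A minimal sketch of such a wrapper (the name hstore_inc comes from the example above; the body below is an assumption based on the update statement shown earlier):

create or replace function hstore_inc(h hstore, k text, delta integer)
returns hstore
language sql
immutable
as $$
    -- merge a single-element hstore holding the incremented value over the original
    select h || hstore(k, ((h -> k)::integer + delta)::text)
$$;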

I'm sure there are other ways to do it (using the various to/from array functions perhaps) but the above should work.

JSONB Data Type Modification in Postgresql

The key insight is that with jsonb_each and jsonb_object_agg you can round-trip a JSON object in a subquery:

SELECT id, (
  SELECT jsonb_object_agg(key, value)
  FROM jsonb_each(cards)
) AS result
FROM test;


Now you can JOIN these key-value pairs against the jsonb_array_elements of your array input. Your colleague was close, but not quite right: it requires a full outer join, not just a left (or right) join, to get all the desired object keys into your output, unless one of your inputs is a subset of the other.

SELECT id, (
  SELECT jsonb_object_agg(COALESCE(obj_key, arr_value), …)
  FROM jsonb_array_elements_text('["1", "2", "3"]') AS arr(arr_value)
  FULL OUTER JOIN jsonb_each(cards) AS obj(obj_key, obj_value) ON obj_key = arr_value
) AS result
FROM test;


Now what's left is only the actual calculation and the conversion to an UPDATE statement:

UPDATE test
SET cards = (
  SELECT jsonb_object_agg(
    COALESCE(key, arr_value),
    COALESCE(obj_value::int, 0) + (arr_value IS NOT NULL)::int
  )
  FROM jsonb_array_elements_text('["1", "2", "3"]') AS arr(arr_value)
  FULL OUTER JOIN jsonb_each_text(cards) AS obj(key, obj_value) ON key = arr_value
);



