Update Table Column With Different Random Numbers

How can I fill a column with random numbers in SQL? I get the same value in every row.

Instead of rand(), which is evaluated only once per query, use newid(), which is recalculated for each row in the result. The usual approach is to take the modulo of checksum(newid()). Note that checksum(newid()) can return -2,147,483,648, which causes an integer overflow in abs(), so apply the modulo to the checksum first and then take the absolute value of the result.

UPDATE CattleProds
SET SheepTherapy = abs(checksum(NewId()) % 10000)
WHERE SheepTherapy IS NULL

This generates a random number between 0 and 9999.
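The same pattern extends to an arbitrary inclusive range. A sketch, using hypothetical bounds @lo and @hi against the same table:

DECLARE @lo int = 300, @hi int = 3600;  -- hypothetical bounds

UPDATE CattleProds
-- modulo first (avoids the abs() overflow), then shift into [@lo, @hi]
SET SheepTherapy = @lo + abs(checksum(newid()) % (@hi - @lo + 1))
WHERE SheepTherapy IS NULL;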

Update table column with different random numbers

It's using the same random number because the subquery only needs to run once for the UPDATE. In other words, the SQL engine knows that the inner SELECT only needs to be run once for the query; it does so, and uses the resultant value for each row.

You actually don't need a subquery. This will do what you want:

UPDATE f1
SET col = ABS(300 + RANDOM() % 3600);

but if for some reason you really do want a subquery, you just need to make sure that it's dependent upon the rows in the table being updated. For example:

UPDATE f1
SET col = (SELECT (col*0) + ABS(300 + RANDOM() % 3600));
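If you want to convince yourself that each row really gets its own value, a quick self-contained check (SQLite, against a throwaway copy of f1) might look like this:

CREATE TABLE f1 (col INTEGER);
INSERT INTO f1 (col) VALUES (0), (0), (0);

UPDATE f1
SET col = ABS(300 + RANDOM() % 3600);

-- should almost certainly report 3 distinct values
SELECT COUNT(DISTINCT col) FROM f1;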

Update table with random numbers

Use RANDOM():

UPDATE yourTable
SET id_disease = FLOOR(RANDOM() * 30000079) + 1

Explanation

Postgres' RANDOM() function returns a number in the range 0.0 <= x < 1.0. In the query above, this means that the smallest value would occur when RANDOM() returns 0, giving 1 as the value. The highest value would occur when RANDOM() returns something like 0.999999, which would give a value slightly below 30000079. The FLOOR function would take it down to 30000078, but then we add 1 to it to bring it back to 30000079, your highest value.
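The same construction generalizes to any inclusive range [lo, hi]: FLOOR(RANDOM() * (hi - lo + 1)) + lo. For example, with hypothetical bounds of 100 and 200 on the same table:

UPDATE yourTable
SET id_disease = FLOOR(RANDOM() * (200 - 100 + 1)) + 100;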

How to update all rows in a column with random values?

demo: db<>fiddle

The simple solution for your JSON is:

UPDATE data_records dr
SET c2 = jsonb_set(dr.c2, '{variation}', to_jsonb(random()));

If you want the second variant with generate_series (for whatever reason), you will need something to join against in the original table. generate_series could give you rows from 1 to 5, so to join against data_records you would need a 1-to-5 column there too. If that is what is stored in c1, there is no problem: simply join against c1.

If not, you have to generate it, for example with a row_number window function, which adds the row count as a column. Then you can join that row count against the generate_series column, which gives you a row with a random value for each c1 and c2. One of them should be unique; that unique column (c1 in my case) works as the WHERE filter of the UPDATE. Of course it could also be c2, but if neither is unique you would end up with the same random value for identical c1/c2 values:

UPDATE data_records dr
SET c2 = jsonb_set(dr.c2, '{variation}', to_jsonb(rand.r))
FROM
    (SELECT *, row_number() OVER () rn FROM data_records) dr_rn
LEFT JOIN
    (SELECT generate_series(1, 5) gs, random() r) rand
    ON dr_rn.rn = rand.gs
WHERE dr.c1 = dr_rn.c1;

It would be much simpler if you had a unique id column. But in any case, I don't see any reason to make this so complicated.
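For completeness, if c1 really is an integer column holding the 1-to-5 values that generate_series produces (the simple case mentioned above), the row_number step can be dropped and you can join against c1 directly. A sketch, reusing the same subquery:

UPDATE data_records dr
SET c2 = jsonb_set(dr.c2, '{variation}', to_jsonb(rand.r))
FROM (SELECT generate_series(1, 5) gs, random() r) rand
WHERE dr.c1 = rand.gs;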

Update table using different random values for each update row

I have seen this problem in other databases, where a subquery gets "optimized away" even though it has a volatile function in it. That may be happening here. One possibility is to remove the subquery:

update tab_ex
set val = val + random() * 2000
where id in (select id
             from tab_ex
             order by random()
             limit (select count(*) * 0.2 from tab_ex)
            );

This should re-run random() for every row being updated.
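A quick way to confirm that (hedged; assumes tab_ex and val as in the question): after the update, the number of distinct val values should be close to the total row count.

SELECT count(*) AS n_rows, count(DISTINCT val) AS n_distinct
FROM tab_ex;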

SQL update column with random numbers from set list of values

One way to do it is with a CTE. Join your @MyRandomVal1 to MyTable2 ON 1 = 1, add a row number ordered by NEWID(), then keep all the rownumber 1's. You'll want to check the logic in the PARTITION BY: I didn't know whether there was a unique column, and if not you may have to partition by all columns, since we are joining each row to every row in the random-value table.

DECLARE @MyRandomVal1 Table (
id int identity (1,1),
val int not null)

INSERT INTO @MyRandomVal1 (val)
SELECT 15
union
SELECT 30
union
SELECT 45
union
SELECT 60
union
SELECT 90

;WITH cte AS (
SELECT
dbo.getautokey() AS AUTO_KEY
, dbo.GetAutoKey() AS E3_KEY
, [EMPID]
, [ENAME]
, ABS(checksum(NewId()) % 256) AS COLOR
, a.val
, ROW_NUMBER() OVER (PARTITION BY empid ORDER BY NEWID()) AS rn
FROM MyTable2
JOIN @MyRandomVal1 a ON 1 = 1
WHERE [JOBLEVEL]='SVP')

INSERT INTO MyTable (AUTO_KEY, E3_KEY, EMPID, ENAME, COLOR, ANGLE)
SELECT AUTO_KEY, E3_KEY, EMPID, ENAME, COLOR, val
FROM cte
WHERE rn = 1

Here's a simple DEMO since we don't have example data.
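Since the question title asks about an UPDATE, here is a stripped-down sketch of the same pattern against a hypothetical table MyTable3(id, angle), where id is assumed unique and @MyRandomVal1 from above is still in scope: number the cross join in random order per row, then keep rn = 1.

;WITH cte AS (
    SELECT
        t.id,
        v.val,
        ROW_NUMBER() OVER (PARTITION BY t.id ORDER BY NEWID()) AS rn
    FROM MyTable3 t
    JOIN @MyRandomVal1 v ON 1 = 1
)
UPDATE t
SET angle = cte.val
FROM MyTable3 t
JOIN cte ON cte.id = t.id AND cte.rn = 1;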

SQL query to update a column from another column randomly without any condition

First of all, the cardinality of TableB must be greater than or equal to the cardinality of TableA.

After the EDIT we assume that the cardinality of TableB can also be smaller than that of TableA, so we first need to multiply TableB's rows until there are at least as many as TableA's (CTEs Cnt and numbers), and then use the numbers CTE inside TblB to do the multiplication.

In this mode, ELEMENT can be repeated across rows, but the random part will differ.

We need to add random information to TableB that will be used both to obfuscate ELEMENT and to shuffle the order. For this purpose we add a column of random values from the NEWID() function, converted to VARBINARY(16) so that pieces of it can be sliced out later.

To shuffle the order we number TableA by ID and TableB by the new random column, adding a sort of interface between them.

To obfuscate ELEMENT we strip portions of that random value and turn them into the digits and letters of the generated postcode.

;with
Cnt as (
    select cast(ceiling(1.0 * CntA.cnt / CntB.cnt) as int) Btimes
    from
        (select count(*) cnt from TableA) CntA,
        (select count(*) cnt from TableB) CntB
),
numbers as (
    select top((select Btimes from Cnt)) ROW_NUMBER() over (order by object_id) n
    from sys.objects
),
TblB as (
    SELECT *, convert(varbinary(16), NEWID(), 1) rndord
    FROM TableB, numbers
),
TableAx as (
    SELECT *, ROW_NUMBER() over (order by id) idx
    FROM TableA
),
TableBx as (
    SELECT *, ROW_NUMBER() over (order by rndord) idx
    FROM TblB
)
select a.id,
       ELEMENT + ' '
       + cast((abs(convert(bigint, rndord)) % 9) as char(1))
       + char(65 + abs(convert(bigint, substring(rndord, 9, 4))) % (90 - 65))
       + char(65 + abs(convert(bigint, substring(rndord, 13, 4))) % (90 - 65)) postcode
from TableAx a
left join TableBx b on a.idx = b.idx

This should do the trick.
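For reference, when TableB already has at least as many rows as TableA (the original pre-EDIT assumption), the Cnt/numbers multiplication can be dropped and the shuffle-and-join core reduces to a sketch like this:

;with
TableAx as (
    SELECT *, ROW_NUMBER() over (order by id) idx
    FROM TableA
),
TableBx as (
    SELECT *, ROW_NUMBER() over (order by NEWID()) idx
    FROM TableB
)
select a.id, b.ELEMENT
from TableAx a
left join TableBx b on a.idx = b.idx

The postcode obfuscation from the full query can then be layered on top of ELEMENT in the same way.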


