Generating Rows Based on Column Value
Non-recursive way:
SELECT *
FROM tab t
CROSS APPLY (SELECT n
             FROM (SELECT ROW_NUMBER() OVER(ORDER BY 1/0) AS n -- 1/0 is never evaluated; it is just a constant "order by nothing" trick
                   FROM master..spt_values s1) AS sub
             WHERE sub.n <= t.Quantity) AS s2(Series);
How to generate n rows based on a value in a column in BigQuery?
Use Index.repeat to repeat each row via DataFrame.loc, drop the no column, add a flag column, and finally reset to a default RangeIndex:
df1 = (df.loc[df.index.repeat(df['no'])]
.drop('no', axis=1)
.assign(flag=1)
.reset_index(drop=True))
print (df1)
id_1 id_2 flag
0 A 100 1
1 A 100 1
2 A 100 1
3 A 200 1
4 A 301 1
5 A 301 1
6 B 122 1
7 B 122 1
8 B 122 1
9 B 122 1
10 B 100 1
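For reference, here is a self-contained version of the snippet above. The input frame is reconstructed from the printed output (an assumption, since the question's data isn't shown); the no column holds the repeat count.

```python
import pandas as pd

# Hypothetical input, reconstructed from the printed output above;
# 'no' holds how many times each row should be repeated.
df = pd.DataFrame({
    'id_1': ['A', 'A', 'A', 'B', 'B'],
    'id_2': [100, 200, 301, 122, 100],
    'no':   [3, 1, 2, 4, 1],
})

df1 = (df.loc[df.index.repeat(df['no'])]  # duplicate each row 'no' times
         .drop('no', axis=1)              # the count column is no longer needed
         .assign(flag=1)                  # constant flag column
         .reset_index(drop=True))         # fresh default RangeIndex

print(df1)
```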
Need to generate n rows based on a value in a column
I'll assume:
- MyRef etc. is a column in TableA
- You have a numbers table (generated below with a recursive CTE)
Something like:
SELECT * INTO #TableA
FROM
(
SELECT 1 AS ID, 3 AS QUANTITY, 'MyRef' AS refColumn
UNION ALL
SELECT 2, 2, 'AnotherRef'
) T
;WITH Nbrs ( Number ) AS (
SELECT 1 UNION ALL
SELECT 1 + Number FROM Nbrs WHERE Number < 99
)
SELECT
A.ID, A.refColumn + CAST(N.Number AS varchar(10))
FROM
#TableA A
JOIN
Nbrs N ON N.Number <= A.QUANTITY
Create new rows based on values in a column and assign these new rows a new value?
You can pivot_longer the columns that contain a MeSH ID, then use if_else to set LinScore to 1 for the reference rows:
library(dplyr)
#>
#> Attaching package: 'dplyr'
#> The following objects are masked from 'package:stats':
#>
#> filter, lag
#> The following objects are masked from 'package:base':
#>
#> intersect, setdiff, setequal, union
library(tidyr)
df <- structure(list(Disease = "Acute Myeloid Leukemia", Joining.Mesh.ID = "D015470",
Company = "GSK", MoA = "VGF-B agonist", Reference.MeSH.ID = "D007951",
LinScore = 0.9625), row.names = c(NA, -1L), class = "data.frame")
df %>%
pivot_longer(cols = ends_with('.ID'),
values_to = 'mesh_id') %>%
relocate(mesh_id, .after = Disease) %>%
mutate(LinScore = if_else(grepl('Reference', name), 1, LinScore)) %>%
select(-name)
#> # A tibble: 2 x 5
#> Disease mesh_id Company MoA LinScore
#> <chr> <chr> <chr> <chr> <dbl>
#> 1 Acute Myeloid Leukemia D015470 GSK VGF-B agonist 0.962
#> 2 Acute Myeloid Leukemia D007951 GSK VGF-B agonist 1
Created on 2021-07-13 by the reprex package (v2.0.0)
Make Number of Rows Based on Column Values - Pandas/Python
You could use .explode()
df['index'] = df.apply(lambda row: list(range(row['index_start'], row['index_end']+1)), axis=1)
df.explode('index')
item index_start index_end index
0 A 1 3 1
0 A 1 3 2
0 A 1 3 3
1 B 4 7 4
1 B 4 7 5
1 B 4 7 6
1 B 4 7 7
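A runnable version of that .explode() approach, with an input frame reconstructed from the output shown (an assumption, since the original data isn't given):

```python
import pandas as pd

# Hypothetical input matching the printed result above.
df = pd.DataFrame({'item': ['A', 'B'],
                   'index_start': [1, 4],
                   'index_end': [3, 7]})

# Build the full inclusive range per row, then explode one list element per row.
df['index'] = df.apply(
    lambda row: list(range(row['index_start'], row['index_end'] + 1)), axis=1)
out = df.explode('index')

print(out)
```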
SQL to transpose and create rows based on column values
SELECT
t.key,
phonetype,
phonenumber,
startdate,
enddate
FROM phone t
JOIN LATERAL (
VALUES
(
'Personal',t.Mobile1,t.M1_St_Date,t.M1_Exp_Date
),
(
'Office',t.Mobile2,t.M2_St_Date,t.M2_Exp_Date
)
) s(phonetype, phonenumber,startdate,enddate) ON TRUE
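If the same transpose is needed on the pandas side, the VALUES list can be mirrored with two column selections and a concat. This is only a sketch: the column names (Mobile1, M1_St_Date, ...) come from the SQL above, and the sample data is made up.

```python
import pandas as pd

# Made-up wide table using the column names from the SQL answer.
phone = pd.DataFrame({
    'key': [1],
    'Mobile1': ['555-0100'], 'M1_St_Date': ['2020-01-01'], 'M1_Exp_Date': ['2021-01-01'],
    'Mobile2': ['555-0200'], 'M2_St_Date': ['2020-06-01'], 'M2_Exp_Date': ['2021-06-01'],
})

cols = ['phonenumber', 'startdate', 'enddate']

# One block per phone type, mirroring the two rows of the VALUES list.
long = pd.concat([
    phone[['key', 'Mobile1', 'M1_St_Date', 'M1_Exp_Date']]
        .set_axis(['key'] + cols, axis=1).assign(phonetype='Personal'),
    phone[['key', 'Mobile2', 'M2_St_Date', 'M2_Exp_Date']]
        .set_axis(['key'] + cols, axis=1).assign(phonetype='Office'),
], ignore_index=True)

print(long)
```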