How to Concatenate/Append Multiple Spark DataFrames Column-Wise in PySpark

Pyspark : How to concat two dataframes in Pyspark

Since the two dataframes share no join key, you can perform a crossJoin between them.
See below for details:

from pyspark.sql import Row

df1 = spark.createDataFrame([Row(NBB1 = 776)])
df1.show()
#Output
+----+
|NBB1|
+----+
| 776|
+----+

df2 = spark.createDataFrame([Row(NBB2 = 4867)])
df2.show()
#Output
+----+
|NBB2|
+----+
|4867|
+----+


df1.crossJoin(df2).show()
#Output
+----+----+
|NBB1|NBB2|
+----+----+
| 776|4867|
+----+----+

How to merge several dataframes column-wise in pyspark?

df_1 = spark.createDataFrame([[1, '2018-10-10', 3]], ['id', 'date', 'value'])
df_2 = spark.createDataFrame([[1, '2018-10-10', 3], [2, '2018-10-10', 4]], ['id', 'date', 'value'])
df_3 = spark.createDataFrame([[1, '2018-10-10', 3], [2, '2018-10-10', 4]], ['id', 'date', 'value'])

from functools import reduce

# list of data frames / tables
dfs = [df_1, df_2, df_3]

# rename value column
dfs_renamed = [df.selectExpr('id', 'date', f'value as value_{i}') for i, df in enumerate(dfs)]

# reduce the list of data frames with inner join
reduce(lambda x, y: x.join(y, ['id', 'date'], how='inner'), dfs_renamed).show()
#Output
+---+----------+-------+-------+-------+
| id| date|value_0|value_1|value_2|
+---+----------+-------+-------+-------+
| 1|2018-10-10| 3| 3| 3|
+---+----------+-------+-------+-------+

Pyspark -- How to left merge dataframes

You can apply a left join in PySpark as

df = df1.join(df2, df1.lkey == df2.rkey, 'left_outer')

('left_outer' and 'left' are equivalent join types.)

