Find maximum value of a column and return the corresponding row values using Pandas
Assuming df has a unique index, this gives the row with the maximum value:
In [34]: df.loc[df['Value'].idxmax()]
Out[34]:
Country US
Place Kansas
Value 894
Name: 7
Note that idxmax returns index labels. So if the DataFrame has duplicates in the index, the label may not uniquely identify the row, and df.loc may return more than one row.
Therefore, if df does not have a unique index, you must make the index unique before proceeding as above. Depending on the DataFrame, sometimes you can use stack or set_index to make the index unique. Or, you can simply reset the index (so the rows become renumbered, starting at 0):
df = df.reset_index()
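A minimal sketch of the duplicate-label pitfall and the reset_index fix (the column name and labels here are illustrative, not from the question):

```python
import pandas as pd

# DataFrame with a duplicated index label 'x' (values are made up)
df = pd.DataFrame({'Value': [10, 30, 20]}, index=['x', 'y', 'x'])

# Here the max sits on the unique label 'y', so loc returns one row;
# had the max sat on 'x', loc would have returned two rows.
# Resetting the index makes every label unique (0, 1, 2, ...):
df = df.reset_index()
row = df.loc[df['Value'].idxmax()]   # now guaranteed to be a single row
```

After reset_index, the old labels survive in the new 'index' column, so no information is lost.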
Find row where values for column is maximal in a pandas DataFrame
Use the pandas idxmax function. It's straightforward:
>>> import pandas
>>> import numpy as np
>>> df = pandas.DataFrame(np.random.randn(5,3),columns=['A','B','C'])
>>> df
A B C
0 1.232853 -1.979459 -0.573626
1 0.140767 0.394940 1.068890
2 0.742023 1.343977 -0.579745
3 2.125299 -0.649328 -0.211692
4 -0.187253 1.908618 -1.862934
>>> df['A'].idxmax()
3
>>> df['B'].idxmax()
4
>>> df['C'].idxmax()
1
Alternatively, you could also use numpy.argmax, such as numpy.argmax(df['A']) -- it provides the same thing and appears at least as fast as idxmax in cursory observations.
idxmax() returns index labels, not integers. For example: if you have string values as your index labels, like rows 'a' through 'e', you might want to know that the max occurs in row 4 (not row 'd'). If you want the integer position of that label within the Index, you have to get it manually (which can be tricky now that duplicate row labels are allowed).
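Two ways to get that integer position manually, sketched on a made-up frame (Index.get_loc only returns a plain integer when the index is unique):

```python
import pandas as pd

df = pd.DataFrame({'A': [1.2, 0.1, 2.1]}, index=['a', 'b', 'c'])

# Positional argmax on the underlying NumPy array, bypassing labels entirely:
pos = df['A'].to_numpy().argmax()

# Or translate the label back into a position (safe only with a unique index):
pos2 = df.index.get_loc(df['A'].idxmax())
```

With duplicate labels, get_loc returns a slice or boolean mask instead of an int, which is exactly the trickiness mentioned above; the to_numpy().argmax() route sidesteps labels altogether.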
HISTORICAL NOTES:
- idxmax() used to be called argmax() prior to 0.11.
- argmax was deprecated prior to 1.0.0 and removed entirely in 1.0.0.
- As of Pandas 0.16, argmax still existed and performed the same function (though it appeared to run more slowly than idxmax). The old argmax function returned the integer position within the index of the row location of the maximum element.
- pandas moved to using row labels instead of integer indices. Positional integer indices used to be very common, more common than labels, especially in applications where duplicate row labels are common.
For example, consider this toy DataFrame with a duplicate row label:
In [19]: dfrm
Out[19]:
A B C
a 0.143693 0.653810 0.586007
b 0.623582 0.312903 0.919076
c 0.165438 0.889809 0.000967
d 0.308245 0.787776 0.571195
e 0.870068 0.935626 0.606911
f 0.037602 0.855193 0.728495
g 0.605366 0.338105 0.696460
h 0.000000 0.090814 0.963927
i 0.688343 0.188468 0.352213
i 0.879000 0.105039 0.900260
In [20]: dfrm['A'].idxmax()
Out[20]: 'i'
In [21]: dfrm.loc[dfrm['A'].idxmax()] # .ix instead of .loc in older versions of pandas
Out[21]:
A B C
i 0.688343 0.188468 0.352213
i 0.879000 0.105039 0.900260
So here a naive use of idxmax is not sufficient, whereas the old form of argmax would correctly provide the positional location of the max row (in this case, position 9).
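In modern pandas the positional behaviour the old argmax provided can be recovered by dropping to NumPy, which is safe even with duplicate labels (this sketch mirrors the dfrm example above with made-up values):

```python
import pandas as pd

# Duplicate label 'i', as in dfrm above (values are illustrative)
dfrm = pd.DataFrame({'A': [0.1, 0.9, 0.5]}, index=['h', 'i', 'i'])

pos = dfrm['A'].to_numpy().argmax()   # integer position of the max, here 1
row = dfrm.iloc[pos]                  # exactly one row, duplicates or not
```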
This is exactly one of those nasty kinds of bug-prone behaviors in dynamically typed languages that makes this sort of thing so unfortunate, and worth beating a dead horse over. If you are writing systems code and your system suddenly gets used on some data sets that are not cleaned properly before being joined, it's very easy to end up with duplicate row labels, especially string labels like a CUSIP or SEDOL identifier for financial assets. You can't easily use the type system to help you out, and you may not be able to enforce uniqueness on the index without running into unexpectedly missing data.
So you're left hoping that your unit tests covered everything (they didn't, or more likely no one wrote any tests). Otherwise, most likely, you're just left waiting to smack into this error at runtime. Then you have to drop many hours' worth of work from the database you were outputting results to, bang your head against the wall in IPython trying to reproduce the problem manually, and finally figure out that it's because idxmax can only report the label of the max row. Disappointed that no standard function automatically gets the position of the max row for you, you write a buggy implementation yourself, edit the code, and pray you don't run into the problem again.
Find the column name which has the maximum value for each row
You can use idxmax with axis=1 to find the column with the greatest value on each row:
>>> df.idxmax(axis=1)
0 Communications
1 Business
2 Communications
3 Communications
4 Business
dtype: object
To create the new column 'Max', use df['Max'] = df.idxmax(axis=1).
To find the row index at which the maximum value occurs in each column, use df.idxmax() (or equivalently df.idxmax(axis=0)).
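A self-contained sketch of both directions; the column names mirror the output above, but the scores are made up:

```python
import pandas as pd

# Hypothetical per-row scores in two subject columns
df = pd.DataFrame({'Communications': [83, 62, 90],
                   'Business':       [73, 97, 30]})

per_row = df.idxmax(axis=1)     # column name holding each row's maximum
df['Max'] = per_row

# axis=0 (the default): the row index of each column's maximum
per_col = df[['Communications', 'Business']].idxmax()
```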
Pandas: find maximum value of column for specific id and date only
My understanding of what you want is that you want to create a new column with a mark 'x' side by side with the column 'Difference' for those rows with the max value of 'Difference' within their corresponding groups. For this, you can use np.where() to create the new column:
df_1['max_entry'] = np.where(df_1['Difference'] == df_1.groupby(['Contract','Ref_Date'])['Difference'].transform('max'), 'x', ' ')
#Assuming df_1 has the following data before new codes:
print(df_1)
Contract Ref_Date Last_update flag Difference
0 1 2020-12-31 2020-12-27 0 -4
1 1 2021-01-31 2021-02-02 0 2
10 1 2021-02-28 2021-02-26 0 -2
3 1 2021-02-28 2021-03-03 0 3
# Run new codes:
df_1['max_entry'] = np.where(df_1['Difference'] == df_1.groupby(['Contract','Ref_Date'])['Difference'].transform('max'), 'x', ' ')
print(df_1)
Contract Ref_Date Last_update flag Difference max_entry
0 1 2020-12-31 2020-12-27 0 -4 x
1 1 2021-01-31 2021-02-02 0 2 x
10 1 2021-02-28 2021-02-26 0 -2
3 1 2021-02-28 2021-03-03 0 3 x
Here, np.where() acts like an if-then-else statement, testing whether the condition in its first parameter is true. If so, it assigns the value in the second parameter (i.e. 'x') to those rows of the new column. Otherwise, it assigns the value in the third parameter (i.e. ' ').
The condition we test in the first parameter is whether a value in the column 'Difference' is equal to the max in its corresponding group. The max value is found by df_1.groupby(['Contract','Ref_Date'])['Difference'].transform('max'), which uses .transform() instead of .agg() so that the result has the same size as your original 'Difference' column, without cutting out the non-max values. This way, as you pointed out, you can still keep all entries.
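A tiny sketch of the transform-versus-agg size difference (group labels and values are made up):

```python
import pandas as pd

df = pd.DataFrame({'grp': ['a', 'a', 'b'], 'val': [1, 5, 3]})

# transform broadcasts each group's max back to every row: same length as df
per_row_max = df.groupby('grp')['val'].transform('max')

# agg collapses to one value per group: one row per group, shorter than df
per_group_max = df.groupby('grp')['val'].agg('max')
```

Because transform preserves the original length and index, its result can be compared element-wise against the source column, which is exactly what the np.where() condition above relies on.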
Edit
A more concise way of coding is as follows:
df_1['max_entry'] = df_1.groupby(['Contract','Ref_Date'])['Difference'].transform(lambda x: np.where(x == x.max(), 'x', ' '))
Here, we put the np.where() call inside the .transform() call. The gut feeling is that this version might be more efficient and execute faster, since it does not need to recalculate the group max for each row; instead, it calculates the group max only once per group.
However, time profiling both versions with %%timeit gives the contrary result:
%%timeit
df_1['max_entry'] = df_1.groupby(['Contract','Ref_Date'])['Difference'].transform(lambda x: np.where(x == x.max(), 'x', ' '))
2.65 ms ± 46.1 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
%%timeit
df_1['max_entry'] = np.where(df_1['Difference'] == df_1.groupby(['Contract','Ref_Date'])['Difference'].transform('max'), 'x', ' ')
1.92 ms ± 55.3 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
The initial version is about 38% faster. The reason for this unexpected result is that the initial version uses the built-in Pandas max function, which has been optimized for fast ndarray operations, whereas the concise version uses a custom lambda function that has not been optimized for performance.
Hence, the concise version comes at the cost of slower execution. You can use it if your dataset is small. For a big dataset, the initial version, though a bit clumsier, is the recommended one.
Pandas: Find maximum value in row and retrieve its column position
You can create a temporary column which shows the max column for each ID, using idxmax column-wise (axis=1) on only the Col_ columns.
Then impute the missing Age with a grouped average on the new column, using fillna and groupby.transform:
df['max_col'] = df.filter(like='Col_').idxmax(axis=1)
df['Age_filled'] = round(df['Age'].fillna(df.groupby('max_col')['Age'].transform('mean')))
Prints:
ID Age Col_A Col_B Col_C max_col
0 1 20.0 1 5 3 Col_B
1 2 28.0 6 8 9 Col_C
2 3 25.0 5 6 7 Col_C
3 4 30.0 3 4 6 Col_C
4 5 NaN 6 2 1 Col_A
5 6 27.0 1 8 4 Col_B
For ID = 5, no other ID has its maximum value in Col_A, so in such cases the Age is still left as np.nan.
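One possible fallback for those leftover NaNs (an assumption, not part of the original answer) is to fill them with the overall mean; a toy sketch mirroring the ID = 5 situation:

```python
import pandas as pd

# Toy frame: the first row's group ('Col_A') has no observed Age at all
df = pd.DataFrame({'max_col': ['Col_A', 'Col_B', 'Col_B'],
                   'Age_filled': [float('nan'), 20.0, 30.0]})

# Fall back to the column-wide mean for groups with no data
df['Age_filled'] = df['Age_filled'].fillna(df['Age_filled'].mean())
```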
Get the name of the category corresponding to the maximum value of a column
You can use idxmax in transform and map the resulting index to the Player column.
df['count_max'] = df.groupby('Team')['Minutes played'].transform('idxmax').map(df['Player'])
print(df)
Team Player Minutes played count_max
0 1 a 2 b
1 1 b 10 b
2 1 c 0 b
3 2 a 28 b
4 2 b 50 b
5 2 e 7 b
6 3 c 200 c
7 3 p 10 c
Selecting the row with the maximum value in a column in geopandas
Check your type with print(df['columnName'].dtype) and make sure it is numeric (i.e. integer, float, ...). If it returns just object, then use df['columnName'].astype(float) instead.
Try city_join.loc[city_join['pop'].astype(float).idxmax()] if the pop column is of object type.
Or you can convert the column to numeric first,
city_join['pop'] = pd.to_numeric(city_join['pop'])
and then run your code: city_join.loc[city_join['pop'].idxmax()]
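A runnable sketch of the to_numeric route, with a made-up 'pop' column stored as strings (as is common after reading shapefiles or CSVs):

```python
import pandas as pd

# Hypothetical frame: 'pop' holds strings, i.e. object dtype
city_join = pd.DataFrame({'city': ['A', 'B'], 'pop': ['1200', '3400']})

city_join['pop'] = pd.to_numeric(city_join['pop'])     # raises on unparseable values
# pd.to_numeric(city_join['pop'], errors='coerce')     # alternative: bad values -> NaN

top = city_join.loc[city_join['pop'].idxmax()]
```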
How to find single largest value from all rows and column array in Python and also show its row and column index
Use numpy.unravel_index for the indices and create the DataFrame by constructor with indexing:
df = pd.DataFrame({'Exat0': [10, -20, 3, 2],
'Exat10': [20, -36, 4, 4],
'Exat20': [-30, -33, 8, 7],
'Exat30': [23, -38, 8, 6],
'Exat40': [28, 2, 34, 22],
'Exat50': [18, -10, 4, 20]}, index=[1000, 2536, 3562, 2561])
df.index.name='EleNo.'
print (df)
Exat0 Exat10 Exat20 Exat30 Exat40 Exat50
EleNo.
1000 10 20 -30 23 28 18
2536 -20 -36 -33 -38 2 -10
3562 3 4 8 8 34 4
2561 2 4 7 6 22 20
a = df.abs().values
r,c = np.unravel_index(a.argmax(), a.shape)
print (r, c)
1 3
df1 = pd.DataFrame(df.values[r, c],
columns=[df.columns.values[c]],
index=[df.index.values[r]])
df1.index.name='EleNo.'
print (df1)
Exat30
EleNo.
2536 -38
Another pandas-only solution, with DataFrame.abs, DataFrame.stack, and the index of the max value by Series.idxmax:
r1, c1 = df.abs().stack().idxmax()
Last, select by DataFrame.loc:
df1 = df.loc[[r1], [c1]]
print (df1)
Exat30
EleNo.
2536 -38
EDIT:
df = pd.DataFrame({'Exat0': [10, -20, 3, 2],
'Exat10': [20, -36, 4, 4],
'Exat20': [-30, -33, 8, 7],
'Exat30': [23, -38, 8, 6],
'Exat40': [28, 2, 34, -38],
'Exat50': [18, -10, 4, 20]}, index=[1000, 2536, 3562, 2561])
df.index.name='EleNo.'
print (df)
Exat0 Exat10 Exat20 Exat30 Exat40 Exat50
EleNo.
1000 10 20 -30 23 28 18
2536 -20 -36 -33 -38 2 -10
3562 3 4 8 8 34 4
2561 2 4 7 6 -38 20
s = df.abs().stack()
mask = s == s.max()
df1 = df.stack()[mask].unstack()
print (df1)
Exat30 Exat40
EleNo.
2536 -38.0 NaN
2561 NaN -38.0
df2 = df.stack()[mask].reset_index()
df2.columns = ['EleNo.','cols','values']
print (df2)
EleNo. cols values
0 2536 Exat30 -38
1 2561 Exat40 -38
Pandas retrieve value in one column(s) corresponding to the maximum value in another
np.random.seed(0)
df = pd.DataFrame(np.random.randn(5, 3), columns=list('ABC'))
df
A B C
0 1.764052 0.400157 0.978738
1 2.240893 1.867558 -0.977278
2 0.950088 -0.151357 -0.103219
3 0.410599 0.144044 1.454274
4 0.761038 0.121675 0.443863
df.A.idxmax()
1
What you claim fails actually seems to work for me:
df.at[df.A.idxmax(), 'B']
1.8675579901499675
Although, based on your explanation, you may instead want loc, not at:
df.loc[df.A.idxmax(), ['B', 'C']]
B 1.867558
C -0.977278
Name: 1, dtype: float64
Note: You may want to check that your index does not contain duplicate entries. This is one possible reason for failure.
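A quick way to run that check (both attributes are standard pandas Index members):

```python
import pandas as pd

# Frame with a deliberately duplicated index label
df = pd.DataFrame({'A': [1, 2]}, index=[0, 0])

unique = df.index.is_unique       # False: label lookups may return several rows
dupes = df.index.duplicated()     # boolean mask marking the repeated labels
```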