How to Remove the Decimal Point in a Pandas Dataframe

How to remove decimal points in pandas

You have a few options...

1) Convert everything to integers:

>>> df.astype(int)
          <=35  >35
Cut-off
Calcium      0    1
Copper       1    0
Helium       0    8
Hydrogen     0    1
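The frame itself isn't constructed in the answer; a minimal sketch with assumed float values matching the printed table:

```python
import pandas as pd

# hypothetical reconstruction of the example frame
df = pd.DataFrame(
    {'<=35': [0.0, 1.0, 0.0, 0.0], '>35': [1.0, 0.0, 8.0, 1.0]},
    index=pd.Index(['Calcium', 'Copper', 'Helium', 'Hydrogen'], name='Cut-off'),
)

out = df.astype(int)   # truncates toward zero and returns a new frame
```

Note that astype(int) truncates rather than rounds, so a value like 1.9 would become 1.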

2) Use round:

>>> df.round()
          <=35  >35
Cut-off
Calcium      0    1
Copper       1    0
Helium       0    8
Hydrogen     0    1

but that's not always great:

>>> (df - .2).round()
          <=35  >35
Cut-off
Calcium     -0    1
Copper       1   -0
Helium      -0    8
Hydrogen    -0    1
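Those -0 entries are negative-zero floats. Chaining .astype(int) after .round() normalizes them; a small sketch (not from the answer above, data assumed):

```python
import pandas as pd

df = pd.DataFrame({'x': [0.0, 1.0, 0.0]})

res = (df - .2).round().astype(int)   # -0.0 becomes a plain 0
print(res['x'].tolist())              # [0, 1, 0]
```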

3) Change your display precision option in Pandas.

pd.set_option('display.precision', 0)  # just 'precision' in older pandas versions

>>> df
          <=35  >35
Cut-off
Calcium      0    1
Copper       1    0
Helium       0    8
Hydrogen     0    1
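Keep in mind option 3 only changes how values are displayed; the underlying floats keep their full precision. A quick check (using the display.precision option name, data assumed):

```python
import pandas as pd

pd.set_option('display.precision', 0)
df = pd.DataFrame({'x': [1.6]})

print(df)               # displayed without the decimal part
print(df['x'].iloc[0])  # the stored value is still 1.6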

How to remove the decimal point in a Pandas DataFrame

You can try df['col'] = (df['col']*100).astype(int), as below:

df = pd.DataFrame({'col': [1.10, 2.20, 3.30, 4.40]})
df['col'] = (df['col']*100).astype(int)
print(df)

Output:

   col
0  110
1  220
2  330
3  440
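One caveat with this approach: multiplying floats can leave values like 434.99999... (e.g. for 4.35), and astype(int) truncates, so rounding first is safer. A sketch with an assumed value:

```python
import pandas as pd

s = pd.Series([4.35])   # hypothetical value prone to float error

print((s * 100).astype(int).iloc[0])           # 434 - truncation after float error
print((s * 100).round().astype(int).iloc[0])   # 435
```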

Remove decimal of columns in pandas data frame

Try the below:

m = (df.dtypes == 'float')
df.loc[:, m] = df.loc[:, m].astype(int)
print(df)

  query  qstart  qend  name  number strand
0     A       2  1064  None       0      +
1     B       2  1076  None       0      +
2     C       2  1064  None       0      +
3     D       0   741  None       0      +
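Depending on the pandas version, assigning through .loc writes into the existing float columns and can leave their dtype as float; reassigning each column directly always replaces the dtype. A sketch with assumed data:

```python
import pandas as pd

# hypothetical frame with one float and one object column
df = pd.DataFrame({'qstart': [2.0, 2.0, 0.0], 'name': ['A', 'B', 'D']})

float_cols = df.columns[df.dtypes == 'float']
for c in float_cols:
    df[c] = df[c].astype(int)   # per-column assignment replaces the dtype
```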

Remove decimals from pandas column (string type)

If the values are strings, first convert to floats and then to integers:

df['Net Sales'] = df['Net Sales'].astype(float).astype(int)

If the values are floats, use:

df['Net Sales'] = df['Net Sales'].astype(int)

Your solution should use \d+ to match the digits after the dot:

df['Net Sales'] = df['Net Sales'].astype(str).replace(r'\.\d+', '', regex=True).astype(int)
print (df)

   Net Sales
0        123
1         34
2         65

Or you can split on the dot and select the first part by indexing:

df['Net Sales'] = df['Net Sales'].astype(str).str.split('.').str[0].astype(int)

Removing Decimal from a column extracted from a dataframe using pandas

Since df_A is a dataframe, you can fillna and then convert the column to int.

df_A['col1'] = df_A['col1'].fillna(0).astype(int) 

If you are getting the error invalid literal for int() with base 10 with the above code, it means that there are some non-numeric values in your data which cannot be converted to int. Use pd.to_numeric to coerce those values to NaN, and then use the above code.

df_A['col1'] = pd.to_numeric(df_A['col1'], errors = 'coerce')
df_A['col1'] = df_A['col1'].fillna(0).astype(int)
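A small end-to-end sketch of that two-step fix, with made-up data containing one bad value:

```python
import pandas as pd

df_A = pd.DataFrame({'col1': ['1.0', 'oops', '3.5']})   # hypothetical data

df_A['col1'] = pd.to_numeric(df_A['col1'], errors='coerce')  # 'oops' -> NaN
df_A['col1'] = df_A['col1'].fillna(0).astype(int)            # NaN -> 0, floats truncated
print(df_A['col1'].tolist())   # [1, 0, 3]
```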

Remove Decimal Point in a Dataframe with both Numbers and String Using Python

Use a function and apply it to the whole column:

In [94]:

df = pd.DataFrame({'Movies':['Save the last dance', '2012.0']})
df
Out[94]:
                Movies
0  Save the last dance
1               2012.0

[2 rows x 1 columns]

In [95]:

def trim_fraction(text):
    if '.0' in text:
        return text[:text.rfind('.0')]
    return text

df.Movies = df.Movies.apply(trim_fraction)

In [96]:

df
Out[96]:
                Movies
0  Save the last dance
1                 2012

[2 rows x 1 columns]
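One caution: trim_fraction matches '.0' anywhere, so a string like '10.05' would be trimmed to '10'. An end-anchored regex is stricter (an alternative sketch, not from the answer above):

```python
import pandas as pd

df = pd.DataFrame({'Movies': ['Save the last dance', '2012.0', '10.05']})

# only strip a literal ".0" at the very end of the string
df['Movies'] = df['Movies'].str.replace(r'\.0$', '', regex=True)
print(df['Movies'].tolist())   # ['Save the last dance', '2012', '10.05']
```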

How to remove decimal point from string using pandas

Your question has nothing to do with Spark or PySpark. It's related to Pandas.

This is because Pandas interprets and infers column data types automatically. Since all the values in your column are numeric, Pandas will consider it to be the float data type.

To avoid this, the pandas.ExcelFile.parse method accepts an argument called converters; you can use it to tell Pandas the data type of specific columns:

# if you want one specific column as string
df = pd.concat([filepath_pd.parse(name, converters={'column_name': str}) for name in names])

OR

# if you want all columns as string
# and you have multiple sheets that do not share the same columns
# this merges all sheets into one dataframe
def get_converters(excel_file, sheet_name, dt_cols):
    cols = excel_file.parse(sheet_name).columns
    converters = {col: str for col in cols if col not in dt_cols}
    for col in dt_cols:
        converters[col] = pd.to_datetime
    return converters

df = pd.concat([filepath_pd.parse(name, converters=get_converters(filepath_pd, name, ['date_column'])) for name in names]).reset_index(drop=True)

OR

# if you want all columns as string
# and all your sheets have same columns
cols = filepath_pd.parse().columns
dt_cols = ['date_column']
converters = {col: str for col in cols if col not in dt_cols}
for col in dt_cols:
    converters[col] = pd.to_datetime
df = pd.concat([filepath_pd.parse(name, converters=converters) for name in names]).reset_index(drop=True)

Remove decimals in Pandas column names

You can convert the type with .astype:

In [312]: df.columns = df.columns.astype(int)

In [313]: df
Out[313]:
  2006 2007 2008 2009
0  foo  foo  bar  bar
1  foo  foo  bar  bar

Or use .map and convert to string type.

In [338]: df.columns.map('{:g}'.format)
Out[338]: Index(['2006', '2007', '2008', '2009'], dtype='object')

In [319]: df.columns.map(int)
Out[319]: Int64Index([2006, 2007, 2008, 2009], dtype='int64')
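For instance, starting from hypothetical float column labels:

```python
import pandas as pd

df = pd.DataFrame([['foo', 'foo'], ['bar', 'bar']], columns=[2006.0, 2007.0])

df.columns = df.columns.map('{:g}'.format)   # string labels without the trailing .0
print(list(df.columns))   # ['2006', '2007']
```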

