Pandas read_csv: low_memory and dtype options

The deprecated low_memory option

The low_memory option is not properly deprecated, but it should be, since it does not actually do anything differently [source].

The reason you get this low_memory warning is that guessing dtypes for each column is very memory-demanding. Pandas tries to determine what dtype to set by analyzing the data in each column.

Dtype Guessing (very bad)

Pandas can only determine what dtype a column should have once the whole file is read. This means nothing can really be parsed before the whole file is read unless you risk having to change the dtype of that column when you read the last value.

Consider the example of a file which has a column called user_id.
It contains 10 million rows where the user_id is always numbers.
Since pandas cannot know it contains only numbers, it will probably keep the values as the original strings until it has read the whole file.
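To see the warning in action, here is a minimal sketch with hypothetical inline data, where a column switches type partway through the file (whether the warning actually fires can depend on the pandas version and its internal chunk size):

import io
import pandas as pd

# With low_memory=True (the default), pandas infers dtypes chunk by chunk.
# Early chunks see only integers, later chunks see only strings, so the
# column ends up with mixed types and pandas emits a DtypeWarning.
rows = ["user_id"] + ["1"] * 500000 + ["foobar"] * 500000
df = pd.read_csv(io.StringIO("\n".join(rows)))
print(df["user_id"].dtype)  # object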

Specifying dtypes (should always be done)

adding

dtype={'user_id': int}

to the pd.read_csv() call will let pandas know, as soon as it starts reading the file, that this column contains only integers.
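As a minimal sketch with hypothetical inline data:

import pandas as pd
from io import StringIO

csvdata = """user_id,username
1,Alice
3,Bob"""
df = pd.read_csv(StringIO(csvdata), dtype={"user_id": int})
print(df["user_id"].dtype)  # int64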

Also worth noting: if the last line in the file had "foobar" written in the user_id column, the loading would crash if the above dtype were specified.

Example of data that breaks when dtypes are defined

import pandas as pd
try:
    from StringIO import StringIO  # Python 2
except ImportError:
    from io import StringIO  # Python 3

csvdata = """user_id,username
1,Alice
3,Bob
foobar,Caesar"""
sio = StringIO(csvdata)
pd.read_csv(sio, dtype={"user_id": int, "username": "string"})

ValueError: invalid literal for long() with base 10: 'foobar'

dtypes are typically a numpy thing; read more about them here:
http://docs.scipy.org/doc/numpy/reference/generated/numpy.dtype.html

What dtypes exist?

We have access to numpy dtypes: float, int, bool, timedelta64[ns] and datetime64[ns]. Note that the numpy date/time dtypes are not time zone aware.

Pandas extends this set of dtypes with its own:

'datetime64[ns, <tz>]', which is a time-zone-aware timestamp.

'category', which is essentially an enum (strings represented by integer keys, to save space).

'period[<freq>]', not to be confused with a timedelta; these objects are anchored to specific time periods.

'Sparse', 'Sparse[int]' and 'Sparse[float]' are for sparse data, or 'data that has a lot of holes in it'. Instead of saving the NaN or None values in the dataframe, it omits them, saving space.

'Interval' is a topic of its own, but its main use is for indexing. See more here.

'Int8', 'Int16', 'Int32', 'Int64', 'UInt8', 'UInt16', 'UInt32' and 'UInt64' are all pandas-specific integers that are nullable, unlike the numpy variants.

'string' is a specific dtype for working with string data and gives access to the .str attribute on the series.

'boolean' is like the numpy 'bool' but it also supports missing data.
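A short sketch exercising a few of the dtypes listed above (the values are purely illustrative):

import pandas as pd

s = pd.Series([1, None, 3], dtype="Int64")           # nullable integer; the None becomes <NA>
cats = pd.Series(["a", "b", "a"], dtype="category")  # enum-like; stored as integer codes
names = pd.Series(["Alice", None], dtype="string")   # string dtype with the .str accessor

print(s.isna().tolist())        # [False, True, False]
print(cats.cat.codes.tolist())  # [0, 1, 0]
print(names.str.upper())        # missing value stays <NA>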

Read the complete reference here:

Pandas dtype reference

Gotchas, caveats, notes

Setting dtype=object will silence the above warning, but it will not make the read more memory-efficient, only more process-efficient, if anything.

Setting dtype=unicode will not do anything, since to numpy, a unicode is represented as object.
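For example (assuming a hypothetical big.csv that would otherwise trigger the warning):

import pandas as pd

# Every value is kept as a Python object, so no dtype guessing happens
# and no DtypeWarning is emitted, but memory use does not improve.
df = pd.read_csv("big.csv", dtype=object)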

Usage of converters

@sparrow correctly points out the usage of converters to avoid pandas blowing up when encountering 'foobar' in a column specified as int. I would like to add that converters are really heavy and inefficient to use in pandas and should be used as a last resort, because the read_csv process is a single process.
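A minimal sketch of the converter approach, reusing the broken csvdata from above (clean_user_id is a hypothetical helper, not a pandas API):

import pandas as pd
from io import StringIO

csvdata = """user_id,username
1,Alice
3,Bob
foobar,Caesar"""

def clean_user_id(value):
    # Return an int when possible, otherwise NaN, so a bad row
    # like 'foobar' does not crash the whole read.
    try:
        return int(value)
    except ValueError:
        return float("nan")

df = pd.read_csv(StringIO(csvdata), converters={"user_id": clean_user_id})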

CSV files can be processed line by line, so they could in principle be handled more efficiently by cutting the file into segments and running multiple converter processes in parallel, something that pandas does not support. But this is a different story.

Specify dtype option on import or set low_memory=False

This (from here) solved my problem:

# Note: error_bad_lines was deprecated in pandas 1.3 and removed in 2.0; newer versions use on_bad_lines='skip'.
dashboard_df = pd.read_csv(p_file, sep=',', error_bad_lines=False, index_col=False, dtype='unicode')

Could anyone explain this answer to me, though?

"Columns (0,1,3) have mixed types. Specify dtype option on import or set low_memory=False" when importing a CSV file

An Excel ".xlsx" file has all sorts of formatting / XML code that pandas has to "mince" through to get the data (consider all of the features available to transform and visualize data in Excel that cannot be saved as .csv; they are all dropped automatically upon saving). A ".csv" file, on the other hand, is extremely raw (like a .txt file), so pandas doesn't have to mince through all of this extra machinery to get the data.

From this helpful link, see what the code for an XML file looks like (which is what the ".xlsx" format is based on):

[Image: the XML markup inside an .xlsx file]

Look at what pandas has to go through just to get the data in "A1", "B1", etc. As such, you should always strive to pull data from a .csv file if it meets all of your requirements. Any data-type formatting, calculations, etc. should be handled in pandas. (I am specifically talking about reading in data here.)


In terms of why you were having issues, it is not possible to tell from your screenshots. A couple of things that can help, in addition to specifying dtypes, low_memory, or parse_dates when reading:

df['numcol'] = pd.to_numeric(df['numcol'], errors='coerce')    # unparseable values become NaN
df['datecol'] = pd.to_datetime(df['datecol'], errors='coerce')  # unparseable dates become NaT
df['datecol'] = pd.to_datetime(df['datecol'], dayfirst=True, errors='coerce')  # UK / European dates

Pandas read_csv() gives DtypeWarning

Please give the below a shot; it might work well:

new_df = pd.read_csv('partial.csv', low_memory=False)
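With low_memory=False, pandas reads the whole file before inferring each column's dtype instead of inferring chunk by chunk, so the mixed-type detection that produces the warning never happens; the trade-off is higher peak memory use.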

