Database vs. Flat Text File: What Are Some Technical Reasons for Choosing One Over Another When Performance Isn't an Issue

Database vs Flat Text File: What are some technical reasons for choosing one over another when performance isn't an issue?

If you want to run reports on the data, or ask it questions later, a database is a logical choice, especially if you are storing multiple runs in the same database file to look for trends.

If you are only writing the logs from individual runs, and don't care about the data after you review it, then a database probably doesn't make sense.

What arguments can I use to explain why SQL Server is far better than a flat file?

  1. Data integrity. First, you can enforce it in a database but cannot in a flat file. Second, you can ensure referential integrity between different entities to prevent orphaned rows (see the sketch after this list).

  2. Efficiency of storage, depending on the nature of the data. If the data naturally breaks into entities, a database will be more efficient than lots of flat files, because with flat files you have to write additional code just to join the data.

  3. Native query capabilities. You can query a database natively, whereas with a flat file you have to load the file into some other environment (e.g. a C# application) and use that environment's capabilities to query it.

  4. Format integrity. The database format is more rigid, which means more consistent. A flat file can easily change in a way that breaks the code that reads it. The difference is related to #3: if the database schema changes, you can still query it using native tools; if the flat file format changes, you effectively have to hunt through and fix every piece of code that reads it.

  5. "Universal" language. SQL is somewhat ubiquitous where as the structure of the flat file is far more malleable.

database vs. flat files

  1. Databases can handle querying tasks, so you don't have to walk over files manually. Databases can handle very complicated queries.
  2. Databases can handle indexing tasks, so lookups like "get record with id = x" can be VERY fast (see the sketch after this list).
  3. Databases can handle multiprocess/multithreaded access.
  4. Databases can handle access over the network.
  5. Databases can watch for data integrity.
  6. Databases can update data easily (see 1).
  7. Databases are reliable.
  8. Databases can handle transactions and concurrent access.
  9. Databases + ORMs let you manipulate data in a very programmer-friendly way.
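
A rough sketch of points 1 and 2, again with Python's sqlite3 module; the table, the row count and the query pattern are arbitrary, so treat it as an illustration rather than a benchmark.

    import sqlite3, time

    # The database walks the data for us, and an index makes
    # "get record with id = x"-style lookups fast.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE log (id INTEGER, payload TEXT)")
    conn.executemany("INSERT INTO log VALUES (?, ?)",
                     ((i, f"row {i}") for i in range(200_000)))

    def lookup(n=200):
        start = time.perf_counter()
        for i in range(n):
            conn.execute("SELECT payload FROM log WHERE id = ?", (i * 997,)).fetchone()
        return time.perf_counter() - start

    no_index = lookup()               # full table scan per query
    conn.execute("CREATE INDEX idx_log_id ON log (id)")
    with_index = lookup()             # B-tree lookup per query
    print(f"without index: {no_index:.3f}s  with index: {with_index:.3f}s")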

What are the advantages and disadvantages of using a database over plain files?

A few database advantages:

  • Highly optimized (indexing, query optimization)
  • Stores many different types of data, generally with type-safety
  • Prebuilt abstractions (SQL, database access layers)
  • Relational integrity (foreign key constraints, etc)
  • ACID guarantees (atomicity, consistency, isolation, durability -- mostly having to do with data integrity; see the transaction sketch after this list)
  • Interactive queries (for debugging, running ad-hoc reporting, etc)
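
As a hedged illustration of the ACID bullet, the sqlite3 sketch below attempts a transfer that violates a constraint; the whole transaction rolls back instead of leaving a half-applied change, which is exactly what a hand-rolled flat-file rewrite struggles to guarantee. Account names and amounts are made up.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE account (name TEXT PRIMARY KEY, "
                 "balance REAL NOT NULL CHECK (balance >= 0))")
    conn.executemany("INSERT INTO account VALUES (?, ?)",
                     [("alice", 100.0), ("bob", 20.0)])
    conn.commit()

    try:
        with conn:  # the connection acts as a transaction context manager
            conn.execute("UPDATE account SET balance = balance - 150 WHERE name = 'alice'")
            conn.execute("UPDATE account SET balance = balance + 150 WHERE name = 'bob'")
    except sqlite3.IntegrityError:
        pass  # CHECK constraint failed -> the whole transfer is rolled back

    print(conn.execute("SELECT name, balance FROM account ORDER BY name").fetchall())
    # [('alice', 100.0), ('bob', 20.0)] -- no half-applied transfer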

Plain text doesn't offer much except for the most dead-simple applications:

  • Can inspect on-disk format
  • Extremely simple in every way
  • No need for a server or linked library, etc.

Basically, if you are doing anything other than the most simple data manipulation (especially if you ever expect to have concurrent modifications, complex relations, multiple users, or even just a lot of data) it's well worth getting used to using a database. PostgreSQL is my favorite, although I'm sure you'll find conflicting views on that one :)

How can I make MySQL as fast as a flat file in this scenario?

Telling MySQL to ignore the primary (and only) index speeds both queries up.

For InnoDB it saves a second on the queries. On MyISAM it keeps the query time consistently at the minimum time seen.

The change is to add

ignore index(`PRIMARY`)   

after the tablename in the query.
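
For example, assuming the mysql-connector-python driver and a made-up logdata table, the hint sits right after the table name like this; only the IGNORE INDEX placement comes from the answer above, everything else is a placeholder:

    import mysql.connector

    # Connection details and the table/column names (logdata, ts, value)
    # are placeholders for illustration.
    conn = mysql.connector.connect(host="localhost", user="app",
                                   password="secret", database="logs")
    cur = conn.cursor()
    cur.execute("""
        SELECT ts, value
        FROM logdata IGNORE INDEX (`PRIMARY`)   -- hint goes right after the table name
        WHERE ts BETWEEN %s AND %s
    """, ("2010-01-01", "2010-02-01"))
    rows = cur.fetchall()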

EDIT:
I appreciate all the input but much of it was of the form "you shouldn't do this", "do something completely different", etc. None of it addressed the question at hand:

"So what's the best way I can have
MySQL behave like itself most of the
time, yet win over a flat file in the
above scenario?"

So far, the solution I have posted (use MyISAM and ignore the index) seems to be closest to flat file performance for this use case, while still giving me a database when I need one.

Store data series in file or database if I want to do row level math operations?

"I plan to do operations on that data (eg. sum, difference, averages etc.) as well including generation of say another column based on computations on the input."

This is the standard use case for a data warehouse star-schema design. Buy Kimball's The Data Warehouse Toolkit. Read (and understand) the star schema before doing anything else.

"What is the best way to store the data and manipulate?"

A Star Schema.

You can implement this as flat files (CSV is fine) or RDBMS. If you use flat files, you write simple loops to do the math. If you use an RDBMS you write simple SQL and simple loops.
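
As a sketch of the flat-file variant, assuming a fact_sales.csv fact file and a dim_date.csv dimension file (both names and columns invented), the "simple loops" look roughly like this:

    import csv
    from collections import defaultdict

    # Toy star schema on flat files: one fact file plus one dimension file,
    # joined and aggregated with plain loops.
    with open("dim_date.csv", newline="") as f:
        dim_date = {row["date_id"]: row for row in csv.DictReader(f)}

    totals = defaultdict(float)
    with open("fact_sales.csv", newline="") as f:
        for row in csv.DictReader(f):
            date = dim_date[row["date_id"]]            # join fact -> dimension
            if date["month"] in ("2", "3"):            # filter on a dimension attribute
                totals[row["customer_id"]] += float(row["sales"])  # sum a fact measure

    for customer, total in totals.items():
        if total > 500:
            print(customer, total)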

"My main concern is speed/performance as the number of datasets grows"

Nothing is as fast as a flat file. Period. RDBMS is slower.

The RDBMS value proposition stems from SQL being a relatively simple way to specify SELECT SUM(), COUNT() FROM fact JOIN dimension WHERE filter GROUP BY dimension attribute. Python isn't as terse as SQL, but it's just as fast and just as flexible. Python competes against SQL.

"pitfalls/gotchas that I should be aware of?"

DB design. If you don't get the star schema and how to separate facts from dimensions, all approaches are doomed. Once you separate facts from dimensions, all approaches are approximately equal.

"What are the reasons why one should be chosen over another?"

RDBMS slow and flexible. Flat files fast and (sometimes) less flexible. Python levels the playing field.

"Are there any potential speed/performance pitfalls/boosts that I need to be aware of before I start that could influence the design?"

Star Schema: central fact table surrounded by dimension tables. Nothing beats it.

"Is there any project or framework out there to help with this type of task?"

Not really.

Which is more efficient: Multiple MySQL tables or one large table?

Multiple tables help in the following ways / cases:

(a) If different people are going to develop applications involving different tables, it makes sense to split them.

(b) If you want to grant different kinds of authorization to different people for different parts of the data collection, it may be more convenient to split them. (Of course, you can also define views and grant authorization on them appropriately.)

(c) For moving data to different places, especially during development, it may make sense to use split tables, which result in smaller file sizes.

(d) A smaller footprint may give comfort while you develop applications against a specific data collection of a single entity.

(e) It is a possibility: what you thought was single-value data may turn out to be multi-valued in the future. For example, a credit limit is a single-value field as of now, but tomorrow you may decide to store it as (date from, date to, credit value). Split tables would come in handy then (see the sketch after this list).
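
A minimal sqlite3 sketch of point (e), with invented table and column names: the single credit-limit column becomes its own table of dated ranges.

    import sqlite3

    conn = sqlite3.connect(":memory:")

    # Before: one value per customer.
    conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT, credit_limit REAL)")

    # After: the history of limits lives in its own table, keyed back to the customer.
    conn.execute("""
        CREATE TABLE customer_credit_limit (
            customer_id  INTEGER REFERENCES customer(id),
            date_from    TEXT,
            date_to      TEXT,
            credit_value REAL
        )
    """)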

My vote would be for multiple tables - with data appropriately split.

Good luck.

What are the pros and cons of parquet format compared to other formats?

I think the main difference I can describe relates to record-oriented vs. column-oriented formats. Record-oriented formats are what we're all used to -- text files and delimited formats like CSV and TSV. AVRO is slightly cooler than those because it can change schema over time, e.g. adding or removing columns from a record. Another trick of various formats (especially including compression) is whether a format can be split -- that is, can you read a block of records from anywhere in the dataset and still know its schema? But here's more detail on columnar formats like Parquet.

Parquet, and other columnar formats handle a common Hadoop situation very efficiently. It is common to have tables (datasets) having many more columns than you would expect in a well-designed relational database -- a hundred or two hundred columns is not unusual. This is so because we often use Hadoop as a place to denormalize data from relational formats -- yes, you get lots of repeated values and many tables all flattened into a single one. But it becomes much easier to query since all the joins are worked out. There are other advantages such as retaining state-in-time data. So anyway it's common to have a boatload of columns in a table.

Let's say there are 132 columns, and some of them are really long text fields, each column laid out one after the other in the record and using up maybe 10K per record.

While querying these tables is easy from an SQL standpoint, it's common that you'll want to get some range of records based on only a few of those hundred-plus columns. For example, you might want all of the records from February and March for customers with sales > $500.

To do this in a row format the query would need to scan every record of the dataset. Read the first row, parse the record into fields (columns) and get the date and sales columns, include it in your result if it satisfies the condition. Repeat. If you have 10 years (120 months) of history, you're reading every single record just to find 2 of those months. Of course this is a great opportunity to use a partition on year and month, but even so, you're reading and parsing 10K of each record/row for those two months just to find whether the customer's sales are > $500.

In a columnar format, each column (field) of a record is stored with others of its kind, spread over many different blocks on the disk -- columns for year together, columns for month together, columns for the customer employee handbook (or other long text), and all the others that make those records so huge, each in their own separate place on the disk, and of course columns for sales together. Well heck, dates and months are numbers, and so are sales -- they are just a few bytes. Wouldn't it be great if we only had to read a few bytes for each record to determine which records matched our query? Columnar storage to the rescue!

Even without partitions, scanning the small fields needed to satisfy our query is super-fast -- they are all in order by record, and all the same size, so the disk seeks over much less data checking for included records. No need to read through that employee handbook and other long text fields -- just ignore them. So, by grouping columns with each other, instead of rows, you can almost always scan less data. Win!

But wait, it gets better. If your query only needed to know those values and a few more (let's say 10 of the 132 columns) and didn't care about that employee handbook column, once it had picked the right records to return, it would now only have to go back to the 10 columns it needed to render the results, ignoring the other 122 of the 132 in our dataset. Again, we skip a lot of reading.
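
As a hedged illustration with pandas (the file name and column names are placeholders), column pruning looks roughly like this: read only the columns the filter touches, then go back for just the columns you want to render.

    import pandas as pd

    # Read just the few small columns the filter needs, never the long text fields.
    df = pd.read_parquet("sales.parquet", columns=["month", "sales", "customer_id"])
    matches = df[(df["month"].isin([2, 3])) & (df["sales"] > 500)]

    # Go back for the ~10 columns needed to render the result, still skipping the rest.
    detail = pd.read_parquet("sales.parquet",
                             columns=["customer_id", "region", "product", "order_date"])
    result = detail.merge(matches[["customer_id"]].drop_duplicates(), on="customer_id")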

(Note: for this reason, columnar formats are a lousy choice when doing straight transformations. For example, if you're joining all of two tables into one big(ger) result set that you're saving as a new table, the sources are going to get scanned completely anyway, so there's not a lot of benefit in read performance, and because columnar formats need to remember more about where stuff is, they use more memory than a similar row format.)

One more benefit of columnar: data is spread around. To get a single record, you can have 132 workers each read (and write) data from/to 132 different places on 132 blocks of data. Yay for parallelization!

And now for the clincher: compression algorithms work much better when they can find repeating patterns. You could compress AABBBBBBCCCCCCCCCCCCCCCC as 2A6B16C, but ABCABCBCBCBCCCCCCCCCCCCCC wouldn't get as small (well, actually, in this case it would, but trust me :-) ). So once again, less reading. And writing too.
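
A rough pyarrow sketch of that compression point, with invented data; exact sizes depend on the codec and pyarrow version, but the repetitive column should come out dramatically smaller.

    import os
    import random
    import pyarrow as pa
    import pyarrow.parquet as pq

    # A column full of repeated values compresses far better than a noisy one.
    n = 1_000_000
    repetitive = pa.table({"v": ["status_ok"] * n})
    noisy = pa.table({"v": [str(random.random()) for _ in range(n)]})

    pq.write_table(repetitive, "repetitive.parquet", compression="snappy")
    pq.write_table(noisy, "noisy.parquet", compression="snappy")

    print(os.path.getsize("repetitive.parquet"), "bytes vs",
          os.path.getsize("noisy.parquet"), "bytes")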

So we read a lot less data to answer common queries, it's potentially faster to read and write in parallel, and compression tends to work much better.

Columnar is great when your input side is large, and your output is a filtered subset: from big to little is great. Not as beneficial when the input and outputs are about the same.

But in our case, Impala took our old Hive queries that ran in 5, 10, 20 or 30 minutes, and finished most in a few seconds or a minute.

Hope this helps answer at least part of your question!


