How to Get Iostream to Perform Better

Here is what I have gathered so far:

Buffering:

If the default buffer is very small, increasing its size can definitely improve performance:

  • it reduces the number of HDD hits
  • it reduces the number of system calls

The buffer can be set by accessing the underlying streambuf implementation:

char Buffer[N];

std::ifstream file("file.txt");

file.rdbuf()->pubsetbuf(Buffer, N);
// the pointer returned by rdbuf() is guaranteed
// to be non-null after successful construction

Warning courtesy of @iavr: according to cppreference it is best to call pubsetbuf before opening the file. Various standard library implementations otherwise have different behaviors.

Locale Handling:

Locales can perform character conversion, filtering, and more clever tricks where numbers or dates are involved. They go through a complex system of dynamic dispatch and virtual calls, so removing them can help trim down the penalty.

The default C locale is meant to perform no conversion and to be uniform across machines. It's a good default to use.

Synchronization:

One can detach the C++ standard streams from the C stdio streams via a global setting (a static member of std::ios_base), using the sync_with_stdio static function.

In my measurements, I could not see any performance improvement from using this facility.

Measurements:

To play with this, I wrote a simple program, compiled using gcc 3.4.2 on SUSE 10p3 with -O2.

C : 7.76532e+06

C++: 1.0874e+07

This represents a slowdown of about 40%... for the default code. Indeed, tampering with the buffer (in either C or C++) or the synchronization parameters (C++) did not yield any improvement.

Results by others:

@Irfy on g++ 4.7.2-2ubuntu1, -O3, virtualized Ubuntu 11.10, 3.5.0-25-generic, x86_64, enough ram/cpu, 196MB of several "find / >> largefile.txt" runs

C : 634572
C++: 473222

C++ 25% faster

@Matteo Italia on g++ 4.4.5, -O3, Ubuntu Linux 10.10 x86_64 with a random 180 MB file

C : 910390

C++: 776016

C++ 17% faster

@Bogatyr on g++ i686-apple-darwin10-g++-4.2.1 (GCC) 4.2.1 (Apple Inc. build 5664), mac mini, 4GB ram, idle except for this test with a 168MB datafile

C : 4.34151e+06

C++: 9.14476e+06

C++ 111% slower

@Asu on clang++ 3.8.0-2ubuntu4, Kubuntu 16.04 Linux 4.8-rc3, 8GB ram, i5 Haswell, Crucial SSD, 88MB datafile (tar.xz archive)

C : 270895
C++: 162799

C++ 66% faster

So the answer is: it's a quality of implementation issue, and really depends on the platform :/

Here is the code in full, for those interested in benchmarking:

#include <fstream>
#include <iostream>
#include <iomanip>
#include <string>

#include <clocale>
#include <cmath>
#include <cstdio>
#include <cstdlib>
#include <cstring>

#include <sys/time.h>

template <typename Func>
double benchmark(Func f, size_t iterations)
{
    f(); // warm-up run, not timed

    timeval a, b;
    gettimeofday(&a, 0);
    for (; iterations --> 0;)
    {
        f();
    }
    gettimeofday(&b, 0);
    return (b.tv_sec * (unsigned int)1e6 + b.tv_usec) -
           (a.tv_sec * (unsigned int)1e6 + a.tv_usec);
}


struct CRead
{
    CRead(char const* filename): _filename(filename) {}

    void operator()() {
        FILE* file = fopen(_filename, "r");

        int count = 0;
        while ( fscanf(file, "%s", _buffer) == 1 ) { ++count; }

        fclose(file);
    }

    char const* _filename;
    char _buffer[1024];
};

struct CppRead
{
    CppRead(char const* filename): _filename(filename), _buffer() {}

    enum { BufferSize = 16184 };

    void operator()() {
        std::ifstream file(_filename, std::ifstream::in);

        // comment out to remove the extended buffer
        file.rdbuf()->pubsetbuf(_buffer, BufferSize);

        int count = 0;
        std::string s;
        while ( file >> s ) { ++count; }
    }

    char const* _filename;
    char _buffer[BufferSize];
};


int main(int argc, char* argv[])
{
    size_t iterations = 1;
    if (argc > 1) { iterations = atoi(argv[1]); }

    char const* oldLocale = setlocale(LC_ALL, "C");
    if (strcmp(oldLocale, "C") != 0) {
        std::cout << "Replaced old locale '" << oldLocale << "' by 'C'\n";
    }

    char const* filename = "largefile.txt";

    CRead cread(filename);
    CppRead cppread(filename);

    // comment out to use the default setting
    bool oldSyncSetting = std::ios_base::sync_with_stdio(false);

    double ctime = benchmark(cread, iterations);
    double cpptime = benchmark(cppread, iterations);

    // comment out if oldSyncSetting's declaration is commented out
    std::ios_base::sync_with_stdio(oldSyncSetting);

    std::cout << "C : " << ctime << "\n"
                 "C++: " << cpptime << "\n";

    return 0;
}

C++ iostream vs. C stdio performance/overhead

What's causing a significant difference in performance is a significant difference in the overall functionality.

I will do my best to compare both of your seemingly equivalent approaches in detail.

In C:

Looping

  • Read characters until a newline or end-of-file is detected or max length (1024) is reached
  • Tokenize looking for the hardcoded white-space delimiter
  • Parse into double without any questions

In C++:

Looping

  • Read characters until one of the default delimiters (any whitespace) is detected. The detection isn't limited to your actual data pattern: it checks for several delimiters just in case, which adds overhead everywhere.
  • Once it finds a delimiter, it tries to parse the accumulated string gracefully, without assuming any pattern in your data. For example, if there are 800 consecutive numeric characters and the string is no longer a good candidate for the target type, it must be able to detect that by itself, which adds further overhead.

One way to improve performance, very close to what Peter suggested in the comments above, is to use getline inside operator>> so you can exploit what you know about your data. Something like this should give some of your speed back, though it's somewhat like C-ing a part of your code back:

// assumes a type like: struct point { double x, y; };
// needs <istream> and <cstdlib> (for atof)
std::istream &operator>>(std::istream &in, point &p) {
    char bufX[10], bufY[10];
    in.getline(bufX, sizeof(bufX), ' ');   // field up to the space
    in.getline(bufY, sizeof(bufY), '\n');  // field up to end of line
    p.x = atof(bufX);
    p.y = atof(bufY);
    return in;
}

Hope it's helpful.

Edit: applied nneonneo's comment

Why just including iostream.h makes executable weigh 1mb more?

So, where is the trouble and how do I fix it?

Including <iostream> pulls in the global stream objects std::cout, std::cin, std::cerr and std::clog, whose initialization links in the whole C++ I/O library.

The only way to fix this is to not include <iostream>, if you don't need anything from there.

How can I make sure `iostream` is available to the linker?

Based on the comments and posted answer I realized that the blog from which I was copying those commands makes things more complicated than they really need to be for my purposes. It's definitely possible to isolate every step of the compilation process using solely the g++ command. Here's a Makefile I came up with:

all: preprocess compile assemble link

# helloworld.i contains preprocessed source code
preprocess:
	@echo "\nPREPROCESSING\n"; g++ -E -o helloworld.i helloworld.cpp

# compile preprocessed source code to assembly language.
# helloworld.s will contain assembly code
compile:
	@echo "\nCOMPILATION\n"; g++ -S helloworld.i

# convert assembly to machine code
assemble:
	@echo "\nASSEMBLY\n"; g++ -c helloworld.s

# links object code with the library code to produce an executable
# libraries need to be specified here
link:
	@echo "\nLINKING\n"; g++ helloworld.o -o test

clean:
	@find -type f ! -name "*.cpp" ! -name "*.h" ! -name "Makefile" -delete

Now I can compile my C++ programs in such a way that I can track whether the preprocessor, compiler, assembler or linker is generating the error.

Why do I get an error including iostream.h?

The iostream header is <iostream>, not <iostream.h>. The error you're getting suggests that the compiler is looking for iostream.h, which suggests that you might be including the wrong header.

Try changing the header to <iostream> and see if that fixes the problem. More generally, make sure you aren't including any C++ standard library headers suffixed with .h unless they also come from C (and even then, you should probably use the C++ versions of those headers, e.g. <cstdio> instead of <stdio.h>).

Hope this helps!


