Is Python Slower Than Java/C#

Is Python slower than Java/C#?

Don't conflate Language and Run-Time.

Python (the language) has many run-time implementations.

  • CPython is usually interpreted, and will be slower than native-code C#. It might be slower than Java, depending on the Java JIT compiler.

  • Jython is interpreted on the JVM and has the same performance profile as Java.

  • IronPython relies on the same .NET libraries and IL as C#, so the performance difference will be relatively small.

  • Python can be translated to native code via Pyrex, PyToC, and others. In this case, it will generally perform as well as C++. You can -- to an extent -- further optimize C++ and perhaps squeeze out a little better performance than unoptimized output from Pyrex (see the sketch below).

    For more information, see http://arcriley.blogspot.com/2009/03/so-long-pyrex.html

Note that Python (the language) is not slow. Some Python run-times (CPython, for example) will be slower than native-code C++.
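To give a flavor of the native-translation route: Pyrex itself is defunct (see the link above), but its successor Cython works the same way. A sketch of my own, in Cython syntax rather than plain Python; the added C type declarations are what let the translator emit a plain C loop instead of bytecode operating on Python objects:

# fast.pyx -- Cython (Pyrex's successor); the typed parameter and the
# cdef declaration below are Cython syntax, not valid plain Python.
def csum(long n):
    # i and total are raw C longs, so this loop compiles down to plain C
    cdef long i, total = 0
    for i in range(n):
        total += i
    return total

Compiled with cython and a C compiler, a loop like this typically runs at C speed; the same function left untyped runs as ordinary interpreted Python.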

Why is my computation so much faster in C# than Python?

The answer is simply that Python deals with objects for everything and that it doesn't have a JIT by default. So rather than efficiently modifying a few bytes on the stack and optimizing the hot parts of the code (i.e., the iteration), Python chugs along with rich heap objects representing numbers and performs no on-the-fly optimizations.
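To make the "rich objects" point concrete, here is a small illustration of my own (exact sizes vary by CPython build and version):

import sys

# In CPython every integer is a full heap object carrying a reference
# count and a type pointer, not a bare 4- or 8-byte machine word.
print(sys.getsizeof(1))        # typically 28 bytes on 64-bit CPython 3.x
print(sys.getsizeof(10**100))  # arbitrary precision: the object just grows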

If you tried this in a variant of Python that has a JIT (PyPy, for example), I guarantee you'll see a massive difference.
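If you want to see it yourself, a toy loop like the following (my own, not from the original question), run unchanged under CPython and then under PyPy, usually shows an order-of-magnitude gap; the absolute numbers are machine-dependent:

import time

def count_multiples(limit, divisor):
    # a deliberately naive hot loop for the interpreter/JIT to chew on
    total = 0
    for n in range(limit):
        if n % divisor == 0:
            total += 1
    return total

start = time.perf_counter()
count_multiples(10_000_000, 7)
print('elapsed: %.2f s' % (time.perf_counter() - start))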

A general tip is to avoid standard Python for very computationally expensive operations (especially if this is for a backend serving requests from multiple clients). Java, C#, JavaScript, etc. with JIT are incomparably more efficient.

By the way, if you want to write your example in a more Pythonic manner, you could do it like this:

from datetime import datetime
start_time = datetime.now()

max_number = 20
x = max_number
while True:
    i = 2
    while i <= max_number:
        if x % i: break
        i += 1
    else:
        # x was not divisible by 2...20
        break
    x += 1

print('number: %d' % x)
print('time elapsed: %d seconds' % (datetime.now() - start_time).seconds)

The above executed in 90 seconds for me. The reason it's faster comes down to seemingly stupid things like x being shorter than start_time, that I'm not assigning variables as often, and that I'm relying on Python's own control structures (the while/else and break) rather than flag checking to jump in and out of the loops.
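Incidentally, if you are timing snippets like this yourself, the standard library's timeit module is usually less noisy than hand-rolled datetime arithmetic. A minimal sketch, where the statement under test is just a stand-in:

import timeit

# repeat() runs the measurement several times; the minimum is the
# least-disturbed figure.
best = min(timeit.repeat('sum(range(1000))', number=10_000, repeat=5))
print('best of 5: %.3f s per 10,000 runs' % best)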

Why are Python Programs often slower than the Equivalent Program Written in C or C++?

Python is a higher-level language than C, which means it abstracts the details of the computer away from you (memory management, pointers, etc.) and allows you to write programs in a way that is closer to how humans think.

It is true that C code usually runs 10 to 100 times faster than Python code if you measure only the execution time. However, if you also include the development time, Python often beats C. For many projects the development time is far more critical than the run-time performance. Longer development time converts directly into extra costs, fewer features, and slower time to market.

Internally, the reason Python code executes more slowly is that it is compiled to bytecode and then interpreted at runtime, rather than being compiled to native machine code ahead of time.

Other bytecode-based platforms, such as Java bytecode and .NET bytecode, run faster than Python because their standard distributions include a JIT compiler that compiles bytecode to native code at runtime. The reason CPython doesn't have a JIT compiler already is that the dynamic nature of Python makes one difficult to write. There is work in progress on faster Python runtimes, so you should expect the performance gap to narrow in the future, but it will probably be a while before the standard Python distribution includes a powerful JIT compiler.

Why is this prime sieve so slow in Python compared with the same algorithm in Java or C#?

In various benchmarks, for what they are worth, Python is anywhere from the same speed to 50 times slower than Java, depending on the problem. This is largely due to Python being interpreted and Java being compiled (even if not to native code). Ruby posts similar scores to Python.

Language design also gives Java and C# some advantages; static typing, in particular, gives their compilers much more to work with.

There are two good ways to speed things up, aside from more efficient Python code: use PyPy, which JIT-compiles your Python at runtime, much as the JVM does for Java; or write the critical sections in a faster language such as C and call those routines from Python, a task which is actually pretty easy, assuming you are comfortable in the faster language.
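For the second route, the standard library's ctypes module is the lowest-friction way in. A minimal sketch; the library name and C function here are hypothetical:

# Assumes a fast.c compiled with: cc -O2 -shared -fPIC fast.c -o libfast.so
#     long sum_to(long n) {
#         long total = 0;
#         for (long i = 0; i < n; i++) total += i;
#         return total;
#     }
import ctypes

lib = ctypes.CDLL('./libfast.so')          # hypothetical shared library
lib.sum_to.argtypes = [ctypes.c_long]      # declare the C signature
lib.sum_to.restype = ctypes.c_long

print(lib.sum_to(10_000_000))              # the loop itself runs at C speed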

Why would you want to use C# if it's slower than C++?

Who exactly is this "bunch of people"? What are they comparing it against?

For the vast majority of things, C++ is not "much faster" than C#. It certainly has benefits in various situations, particularly where you want more deterministic memory handling, but in my experience the bottleneck in most applications isn't in places where C++ would help. As spoulson says, a lot of performance is in the design instead of the exact implementation - and there, it helps to be able to try different designs easily.

Why would we use C# when it's a bit slower than C++? Because it's generally reckoned (i.e. some disagree :) to be a lot easier to develop in without shooting yourself in the foot.

As for what C# can be used for... what do you want to use it for? Unless you want to develop drivers and kernels, it may well be fine for you. (Even OS development has some folks using C#...)

Job opportunities? Loads.

Downsides? Well, .NET itself is only available on Microsoft platforms. There's Mono, but it doesn't have quite the same degree of portability as Java (no doubt another "slow" language according to the same bunch of people).

Speed of different constructs in programming languages (Java/C#/C++/Python/…)

Is there a difference between if (a()) b(); and a() && b();?

Yes, readability. The first is far more clear about the intent.

Is a = a * 4 maybe compiled to the same code as a <<= 2?

Most likely yes. But even if they ended up as different CPU instructions the difference in time would be very small, and dependent on the instructions before and after.
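For the Python case you don't have to guess: CPython's dis module shows what each form compiles to. A quick sketch of my own (opcode names vary across versions):

import dis

def mul(a):
    return a * 4

def shift(a):
    return a << 2

# CPython emits each operation as written (e.g. BINARY_OP * vs BINARY_OP <<);
# any strength reduction is left to lower layers, if it happens at all.
dis.dis(mul)
dis.dis(shift)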

Micro-optimizing for modern CPUs is

  • very difficult
  • mostly futile
  • often contrary to what used to be an 'optimization' 10+ years ago.

In conclusion, write readable code first. When you do have a performance problem, profile and measure first.
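In Python, the "profile and measure" step is one import away; a minimal sketch, with a stand-in workload:

import cProfile

def hot():
    # stand-in for the real workload
    return sum(i * i for i in range(1_000_000))

cProfile.run('hot()')   # prints per-function call counts and times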

As an application developer, you should worry about using the right algorithms, like not traversing a collection more often than needed. But at the instruction/statement level, there are too many layers (the C# compiler, the IL compiler, optimizers, the pipelined CPU) between you and what actually gets executed.

Is C# really slower than say C++?

Warning: The question you've asked is really pretty complex -- probably much more so than you realize. As a result, this is a really long answer.

From a purely theoretical viewpoint, there's probably a simple answer to this: there's (probably) nothing about C# that truly prevents it from being as fast as C++. Despite the theory, however, there are some practical reasons that it is slower at some things under some circumstances.

I'll consider three basic areas of differences: language features, virtual machine execution, and garbage collection. The latter two often go together, but can be independent, so I'll look at them separately.

Language Features

C++ places a great deal of emphasis on templates, and features in the template system that are largely intended to allow as much as possible to be done at compile time, so from the viewpoint of the program, they're "static." Template metaprogramming allows completely arbitrary computations to be carried out at compile time (i.e., the template system is Turing-complete). As such, essentially anything that doesn't depend on input from the user can be computed at compile time, so at runtime it's simply a constant. Input to this can, however, include things like type information, so a great deal of what you'd do via reflection at runtime in C# is normally done at compile time via template metaprogramming in C++. There is definitely a trade-off between runtime speed and versatility though -- what templates can do, they do statically, but they simply can't do everything reflection can.
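To pin down what "reflection at runtime" means, here is the idea in this document's own language, as a Python sketch of mine; C#'s System.Reflection plays the same role, and it is exactly this kind of question that C++ templates answer at compile time instead:

import inspect

class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

p = Point(3, 4)
print(getattr(p, 'x'))                     # field looked up by name at runtime
print(list(vars(p)))                       # fields enumerated dynamically
print(inspect.signature(Point.__init__))   # (self, x, y), discovered at runtime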

The differences in language features mean that almost any attempt at comparing the two languages simply by transliterating some C# into C++ (or vice versa) is likely to produce results somewhere between meaningless and misleading (and the same would be true for most other pairs of languages as well). The simple fact is that for anything larger than a couple lines of code or so, almost nobody is at all likely to use the languages the same way (or close enough to the same way) that such a comparison tells you anything about how those languages work in real life.

Virtual Machine

Like almost any reasonably modern VM, Microsoft's for .NET can and will do JIT (aka "dynamic") compilation. This represents a number of trade-offs though.

Primarily, optimizing code (like most other optimization problems) is largely an NP-complete problem. For anything but a truly trivial/toy program, you're pretty nearly guaranteed you won't truly "optimize" the result (i.e., you won't find the true optimum) -- the optimizer will simply make the code better than it was previously. Quite a few optimizations that are well known, however, take a substantial amount of time (and, often, memory) to execute. With a JIT compiler, the user is waiting while the compiler runs. Most of the more expensive optimization techniques are ruled out. Static compilation has two advantages: first of all, if it's slow (e.g., building a large system) it's typically carried out on a server, and nobody spends time waiting for it. Second, an executable can be generated once, and used many times by many people. The first minimizes the cost of optimization; the second amortizes the much smaller cost over a much larger number of executions.

As mentioned in the original question (and many other web sites) JIT compilation does have the possibility of greater awareness of the target environment, which should (at least theoretically) offset this advantage. There's no question that this factor can offset at least part of the disadvantage of static compilation. For a few rather specific types of code and target environments, it can even outweigh the advantages of static compilation, sometimes fairly dramatically. At least in my testing and experience, however, this is fairly unusual. Target dependent optimizations mostly seem to either make fairly small differences, or can only be applied (automatically, anyway) to fairly specific types of problems. Obvious times this would happen would be if you were running a relatively old program on a modern machine. An old program written in C++ would probably have been compiled to 32-bit code, and would continue to use 32-bit code even on a modern 64-bit processor. A program written in C# would have been compiled to byte code, which the VM would then compile to 64-bit machine code. If this program derived a substantial benefit from running as 64-bit code, that could give a substantial advantage. For a short time when 64-bit processors were fairly new, this happened a fair amount. Recent code that's likely to benefit from a 64-bit processor will usually be available compiled statically into 64-bit code though.

Using a VM also has a possibility of improving cache usage. Instructions for a VM are often more compact than native machine instructions. More of them can fit into a given amount of cache memory, so you stand a better chance of any given code being in cache when needed. This can help keep interpreted execution of VM code more competitive (in terms of speed) than most people would initially expect -- you can execute a lot of instructions on a modern CPU in the time taken by one cache miss.

It's also worth mentioning that this factor isn't necessarily different between the two at all. There's nothing preventing (for example) a C++ compiler from producing output intended to run on a virtual machine (with or without JIT). In fact, Microsoft's C++/CLI is nearly that -- an (almost) conforming C++ compiler (albeit, with a lot of extensions) that produces output intended to run on a virtual machine.

The reverse is also true: Microsoft now has .NET Native, which compiles C# (or VB.NET) code to a native executable. This gives performance that's generally much more like C++, but retains the features of C#/VB (e.g., C# compiled to native code still supports reflection). If you have performance intensive C# code, this may be helpful.

Garbage Collection

From what I've seen, I'd say garbage collection is the poorest-understood of these three factors. Just for an obvious example, the question here mentions: "GC doesn't add a lot of overhead either, unless you create and destroy thousands of objects [...]". In reality, if you create and destroy thousands of objects, the overhead from garbage collection will generally be fairly low. .NET uses a generational scavenger, which is a variety of copying collector. The garbage collector works by starting from "places" (e.g., registers and execution stack) that pointers/references are known to be accessible. It then "chases" those pointers to objects that have been allocated on the heap. It examines those objects for further pointers/references, until it has followed all of them to the ends of any chains, and found all the objects that are (at least potentially) accessible. In the next step, it takes all of the objects that are (or at least might be) in use, and compacts the heap by copying all of them into a contiguous chunk at one end of the memory being managed in the heap. The rest of the memory is then free (modulo finalizers having to be run, but at least in well-written code, they're rare enough that I'll ignore them for the moment).

What this means is that if you create and destroy lots of objects, garbage collection adds very little overhead. The time taken by a garbage collection cycle depends almost entirely on the number of objects that have been created but not destroyed. The primary consequence of creating and destroying objects in a hurry is simply that the GC has to run more often, but each cycle will still be fast. If you create objects and don't destroy them, the GC will run more often and each cycle will be substantially slower as it spends more time chasing pointers to potentially-live objects, and it spends more time copying objects that are still in use.

To combat this, generational scavenging works on the assumption that objects that have remained "alive" for quite a while are likely to continue remaining alive for quite a while longer. Based on this, it has a system where objects that survive some number of garbage collection cycles get "tenured", and the garbage collector starts to simply assume they're still in use, so instead of copying them at every cycle, it simply leaves them alone. This is a valid assumption often enough that generational scavenging typically has considerably lower overhead than most other forms of GC.
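As an aside, CPython's own cyclic collector is generational in just this way, and you can watch the generations from Python itself (thresholds and counts vary by version and workload):

import gc

# Three generations; objects surviving a young-generation sweep are
# promoted -- the "tenuring" idea described above.
print(gc.get_threshold())   # e.g. (700, 10, 10)
print(gc.get_count())       # objects currently tracked in each generation

gc.collect()                # force a full, oldest-generation collection
print(gc.get_count())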

"Manual" memory management is often just as poorly understood. Just for one example, many attempts at comparison assume that all manual memory management follows one specific model as well (e.g., best-fit allocation). This is often little (if any) closer to reality than many peoples' beliefs about garbage collection (e.g., the widespread assumption that it's normally done using reference counting).

Given the variety of strategies for both garbage collection and manual memory management, it's quite difficult to compare the two in terms of overall speed. Attempting to compare the speed of allocating and/or freeing memory (by itself) is pretty nearly guaranteed to produce results that are meaningless at best, and outright misleading at worst.

Bonus Topic: Benchmarks

Since quite a few blogs, web sites, magazine articles, etc., claim to provide "objective" evidence in one direction or another, I'll put in my two-cents worth on that subject as well.

Most of these benchmarks are a bit like teenagers deciding to race their cars, and whoever wins gets to keep both cars. The web sites differ in one crucial way though: the guy who's publishing the benchmark gets to drive both cars. By some strange chance, his car always wins, and everybody else has to settle for "trust me, I was really driving your car as fast as it would go."

It's easy to write a poor benchmark that produces results that mean next to nothing. Almost anybody with anywhere close to the skill necessary to design a benchmark that produces anything meaningful, also has the skill to produce one that will give the results he's decided he wants. In fact it's probably easier to write code to produce a specific result than code that will really produce meaningful results.

As my friend James Kanze put it, "never trust a benchmark you didn't falsify yourself."

Conclusion

There is no simple answer. I'm reasonably certain that I could flip a coin to choose the winner, then pick a number between (say) 1 and 20 for the percentage it would win by, and write some code that would look like a reasonable and fair benchmark, and produced that foregone conclusion (at least on some target processor--a different processor might change the percentage a bit).

As others have pointed out, for most code, speed is almost irrelevant. The corollary to that (which is much more often ignored) is that in the little code where speed does matter, it usually matters a lot. At least in my experience, for the code where it really does matter, C++ is almost always the winner. There are definitely factors that favor C#, but in practice they seem to be outweighed by factors that favor C++. You can certainly find benchmarks that will indicate the outcome of your choice, but when you write real code, you can almost always make it faster in C++ than in C#. It might (or might not) take more skill and/or effort to write, but it's virtually always possible.


