Is Micro-Optimization Worth the Time

Is micro-optimization worth the time?

Micro-optimisation is worth it when you have evidence that you're optimising a bottleneck.

Usually it's not worth it - write the most readable code you can, and use realistic benchmarks to check the performance. If and when you find you've got a bottleneck, micro-optimise just that bit of code (measuring as you go). Sometimes a small amount of micro-optimisation can make a huge difference.
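As a concrete illustration of "measuring as you go", here is a minimal benchmarking sketch in TypeScript; the distance function is an invented stand-in for whatever profiling identified as your hot path:

    // Invented hot path: pretend profiling showed this dominates a frame.
    function distance(x1: number, y1: number, x2: number, y2: number): number {
      return Math.sqrt((x2 - x1) ** 2 + (y2 - y1) ** 2);
    }

    // Time a realistic workload before and after each micro-change.
    function bench(label: string, fn: () => void, iterations = 1_000_000): void {
      const start = performance.now(); // high-resolution timer in browsers and Node
      for (let i = 0; i < iterations; i++) fn();
      const elapsed = performance.now() - start;
      console.log(`${label}: ${elapsed.toFixed(1)} ms for ${iterations} calls`);
    }

    bench("distance", () => distance(0, 0, 3, 4));

Keep in mind that micro-benchmarks like this are easily distorted by JIT warm-up and dead-code elimination, which is exactly why realistic whole-application benchmarks should come first.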

But don't micro-optimise all your code... it will end up being far harder to maintain, and you'll quite possibly find you've either missed the real bottleneck, or that your micro-optimisations are harming performance instead of helping.

Micro-optimization: do modern browsers do it for you anyway?

Don't optimize prematurely. Unless profiling shows that these things in the code actually cause a bottleneck or disproportionate resource use, don't bother optimizing them based on theories about performance.

As for the actual performance: object attribute lookups (such as Math.floor or this.currentX) are O(1) operations, as they are effectively hashmap lookups. Caching them in a local variable is therefore more of a readability enhancement than anything else.
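For illustration, the pattern being discussed looks something like this (the class and its fields are invented for the example):

    // Invented example: caching property lookups in locals.
    class Sprite {
      currentX = 0;

      step(dx: number): number {
        const floor = Math.floor;  // O(1) lookup either way
        const x = this.currentX;   // ditto
        // The locals mainly buy shorter, clearer code, not speed.
        return floor(x + dx);
      }
    }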

When an optimization is no longer a micro-optimization

Whether an optimization is micro or not is usually not important. What matters is whether it gives you any bang for the buck.

You wrote that you spent two whole working days for a 5% performance increase. Did you spend those days wisely? Were the things you fixed the slowest parts of your application, or at least the performance issues that were easiest to fix? Did your changes make you reach a performance target that you weren't meeting before? Does 5% matter at all in your case? Usually you want something like a 100% or 1000% increase if you've figured out that you need to improve performance.

Could you perform your optimizations without disturbing readability and/or maintainability of the code?

Besides, what other costs did those optimizations incur? How much regression testing were you required to perform? How many new bugs did you create?

I know this looks more like a set of questions than an answer, but these are the kinds of questions that should drive your decision to make an optimization or not.

When is the optimization really worth the time spent on it?

There are (at least) two categories of "efficiency" to mention here:

  • UI applications (and their dependencies), where the most important measure is the response time to the user.

  • Batch processing, where the main indicator is total running time.


In the first case, there are well-documented rules about response times. If you care about product quality, you need to keep response times short. The shorter the better, of course, but the breaking points are about:

  • 100 ms for an "immediate" response; animation and other "real-time" activities need to happen at least this fast;

  • 1 second for an "uninterrupted" response. Any more than this and users will be frustrated; you also need to start thinking about showing a progress screen past this point.

  • 10 seconds for retaining user focus. Any worse than this and your users will be pissed off.

If you're finding that several operations are taking more than 10 seconds, and you can fix the performance problems with a sane amount of effort (I don't think there's a hard limit but personally I'd say definitely anything under 1 man-month and probably anything under 3-4 months), then you should definitely put the effort into fixing it.

Similarly, if you find yourself creeping past that 1-second threshold, you should be trying very hard to make it faster. At a minimum, compare the time it would take to improve the performance of your app with the time it would take to redo every slow screen with progress dialogs and background threads that the user can cancel - because it is your responsibility as a designer to provide that if the app is too slow.

But don't make a decision purely on that basis - the user experience matters too. If it'll take you 1 week to stick in some async progress dialogs and 3 weeks to get the running times under 1 second, I would still go with the latter. IMO, anything under a man-month is justifiable if the problem is application-wide; if it's just one report that's run relatively infrequently, I'd probably let it go.

If your application is real-time - graphics-related for example - then I would classify it the same way as the 10-second mark for non-realtime apps. That is, you need to make every effort possible to speed it up. Flickering is unacceptable in a game or in an image editor. Stutters and glitches are unacceptable in audio processing. Even for something as basic as text input, a 500 ms delay between the key being pressed and the character appearing is completely unacceptable unless you're connected via remote desktop or something. No amount of effort is too much for fixing these kinds of problems.


Now for the second case, which I think is mostly self-evident. If you're doing batch processing then you generally have a scalability concern. As long as the batch is able to run in the time allotted, you don't need to improve it. But if your data is growing, if the batch is supposed to run overnight and you start to see it creeping into the wee hours of the morning and interrupting people's work at 9:15 AM, then clearly you need to work on performance.

Actually, you really can't wait that long; once it fails to complete in the required time, you may already be in big trouble. You have to actively monitor the situation and maintain some sort of safety margin - say a maximum running time of 5 hours out of the available 6 before you start to worry.
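A sketch of what that monitoring might look like, using the 6-hour window and 5-hour worry threshold from the example above (the alerting function is a placeholder you would wire to your own infrastructure):

    const WINDOW_MS = 6 * 60 * 60 * 1000;           // time allotted to the batch
    const SAFETY_THRESHOLD_MS = 5 * 60 * 60 * 1000; // start worrying past this

    // Placeholder: connect to whatever alerting you actually use.
    function notifyOps(message: string): void {
      console.warn(message);
    }

    async function runMonitoredBatch(batch: () => Promise<void>): Promise<void> {
      const start = Date.now();
      await batch();
      const elapsedMs = Date.now() - start;
      if (elapsedMs > WINDOW_MS) {
        notifyOps(`Batch overran its window: ${elapsedMs} ms`);
      } else if (elapsedMs > SAFETY_THRESHOLD_MS) {
        notifyOps(`Batch is eating into its safety margin: ${elapsedMs} ms`);
      }
    }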

So the answer for batch processes is obvious. You have a hard requirement that the batch must finish within a certain time. Therefore, if you are getting close to the edge, performance must be improved, regardless of how difficult or costly it is. The question then becomes: what is the most economical means of improving the process?

If it costs significantly less to just throw some more hardware at the problem (and you know for a fact that the problem really does scale with hardware), then don't spend any time optimizing, just buy new hardware. Otherwise, figure out what combination of design optimization and hardware upgrades is going to get you the best ROI. It's almost purely a cost decision at this point.
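As a toy illustration of that calculation (every number here is invented), you can reduce each option to a cost per unit of improvement and pick the cheapest:

    // Invented numbers: compare the cost per percentage point of speedup.
    const options = [
      { label: "add two app servers", cost: 8_000, speedupPct: 40 },
      { label: "one month of tuning", cost: 15_000, speedupPct: 30 },
    ];

    for (const o of options) {
      console.log(`${o.label}: $${(o.cost / o.speedupPct).toFixed(0)} per % faster`);
    }

In practice the comparison also has to account for recurring hardware costs and whether the workload really does scale horizontally, but the shape of the decision is the same.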


That's about all I have to say on the subject. Shame on the people who respond to this with "YAGNI". It's your professional responsibility to know or at least find out whether or not you "need it." Assuming that anything is acceptable until customers complain is an abdication of this responsibility.

Simply because your customers don't demand it doesn't mean you don't need to consider it. Your customers don't demand unit tests, either, or even reasonably good/maintainable code, but you provide those things anyway because it is part of your profession. And at the end of the day, your customers will be a lot happier with a smooth, fast product than with any of those other developer-centric things.

Is it worth writing part of code in C instead of C++ as micro-optimization?

I'm going to agree with a lot of the comments. C syntax is intentionally supported in C++ (the two languages only really diverge with C99), so every C++ compiler has to support it. In fact, it's hard to find a dedicated C compiler anymore; in GCC, for example, you end up using the same optimization/compilation engine regardless of whether the code is C or C++.

The real question, then, is whether writing plain C code and compiling it as C++ incurs a performance penalty. The answer is, for all intents and purposes, no. There are a few tricky points around exceptions and RTTI, but those are mainly size changes, not speed changes. You'd be so hard pressed to find an example that actually takes a performance hit that it doesn't seem worth writing a dedicated C module.

What others have said about which features you use is important. It is very easy in C++ to get sloppy with copy semantics and suffer huge overheads from copying memory. In my experience this is the biggest cost; you can suffer it in C as well, but not as easily.

Virtual function calls are ever so slightly more expensive than normal function calls, while forced-inline functions are cheaper than normal function calls. In both cases it is likely the cost of pushing and popping parameters on the stack that dominates. Worrying about function call overhead should come quite late in the optimization process, though, as it is rarely a significant problem.

Exceptions are costly at throw time (in GCC at least), but setting up catch blocks and using RAII carries no significant cost. This was by design in the GCC compiler (and others), so that only the truly exceptional cases are costly.

But to summarize: a good C++ programmer would not be able to make their code run faster simply by writing it in C.

Is it worth micro-optimizing with Redux stores in React?

This question comes up pretty frequently in the Redux community, and there are benefits to the second approach if you are interested in optimizing your rendering. It definitely helps when you begin to render more than a few hundred items and need to update only one or a few of them at a time, which would be the case for an input (see the sketch after the links below). Take a look at this list of links on Redux performance:

https://github.com/markerikson/react-redux-links/blob/master/react-performance.md#redux-performance

I find that this slideshow is useful in demonstrating your two scenarios (using a checkbox rather than a text input), along with one more solution.

http://somebody32.github.io/high-performance-redux/
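To make the second approach concrete (taking it to mean that each list item subscribes to the store individually, rather than the parent passing values down), here is a minimal react-redux sketch; the state shape and action type are invented:

    import React from "react";
    import { useSelector, useDispatch } from "react-redux";

    // Invented state shape: { items: Record<string, string> }
    type State = { items: Record<string, string> };

    // Each row subscribes only to its own value, so typing in one input
    // re-renders that row instead of the whole list.
    const Row = React.memo(function Row({ id }: { id: string }) {
      const value = useSelector((s: State) => s.items[id]);
      const dispatch = useDispatch();
      return (
        <input
          value={value}
          onChange={(e) =>
            dispatch({ type: "items/edit", id, value: e.target.value })
          }
        />
      );
    });

    function List({ ids }: { ids: string[] }) {
      // The parent only knows the ids; per-item value changes never touch it.
      return (
        <>
          {ids.map((id) => (
            <Row key={id} id={id} />
          ))}
        </>
      );
    }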


