
Bozhidar Bozhanov is a senior Java developer and one of the top Stack Overflow users, fluent with Java and Java technology stacks - Spring, JPA, Java EE. He is the founder and creator of http://computoser.com and http://welshare.com, and has worked on Ericsson projects, Bulgarian e-government projects and large-scale recruitment platforms. He is a member of the jury of the International Olympiad in Linguistics and of the program committee of the North American Computational Linguistics Olympiad.

Not All Optimization Is Premature

12.06.2012

The other day the reddit community dismissed my advice to switch from a text-based to a binary serialization format, labeling it “premature optimization”. I’ll zoom out of that particular case and discuss why not all optimization is premature.

Everyone has heard of Donald Knuth’s phrase “[..] premature optimization is the root of all evil”. And as with every well-known phrase, this one is usually misinterpreted, to such an extent that people think optimizing anything that is not a bottleneck is bad. As a result, many systems are unnecessarily heavy and consume a lot of resources… because there is no bottleneck.

What did Knuth mean? That it is wrong to optimize when doing so costs other important things: readability, maintainability, time. Optimizing an algorithm can make it harder to read. Optimizing a big system can make it harder to maintain. Optimizing anything takes time that could probably be spent implementing functionality or fixing bugs. In practice, this means that you should not add sneaky if-clauses and memory workarounds to your code, that you shouldn’t introduce new tools or layers in your system for the sake of some potential gain in processing speed, and that you shouldn’t spend a week on gaining 5% in performance. However, most interpretations boil down to “you shouldn’t optimize for performance until it hits you”. And that’s where my opinion differs.

If you wait for something to “hit” you, then you are potentially in trouble. You must make your system optimal before it goes into production, otherwise it may be too late (meaning a lot of downtime, lost customers, huge bills for hardware or hosting). Furthermore, bottlenecks are not that obvious in big systems. If you have 20 servers, will you notice that one piece of code takes up 70% more resources than it should? What if there are 10 such pieces? There is no obvious bottleneck, but optimizing the code may save you 2-3 servers. That’s why writing optimal code is not optional and is certainly not “premature optimization”. Let me give a few examples:

  • you notice that an algorithm that is supposed to be invoked thousands of times uses a linked list where random access is required. Is it premature optimization to change it to an array or an ArrayList? No – it takes very little time and does not make the code harder to read, yet it may increase the speed of the application a lot (how much is ‘a lot’ doesn’t even matter in this case)
  • you realize that a piece of code (including db access) is executed many times, but the data doesn’t change. Each individual execution rarely accounts for a big percentage of the time needed to process a request. Is it premature optimization to cache the results (provided you have a caching framework that can handle cache invalidation, cache lifetime, etc.)? No – caching the results would save tons of database requests without making your code harder to read (with declarative caching it will be just an annotation; the sketch after this list illustrates both this point and the previous one).
  • you measure that if you switch from a text to a binary format for transmitting messages between internal components, you can do it 50%+ faster with half the memory. The system does not have huge memory needs, and the CPU stays below 50%. Is replacing the text format with a binary one a premature optimization? No, because it costs one day, you don’t lose code readability (the change is only one line of configuration) and you don’t lose the means to debug your messages (you can dump them before/after serialization, or you can switch back to the text-based format in development mode). (Yes, that’s my case from the previous blogpost.)
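
To make the first two bullets concrete, here is a minimal sketch. The class and method names are hypothetical, and the declarative caching part assumes Spring’s @Cacheable is available (as in the stacks mentioned above); it is an illustration of the idea, not code from the system discussed here:

```java
import java.util.ArrayList;
import java.util.List;

import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

@Service
public class ScoreService {

    // Random access by index is O(1) on ArrayList but O(n) on LinkedList,
    // so an algorithm invoked thousands of times benefits from the swap.
    private final List<Integer> scores = new ArrayList<>(); // was: new LinkedList<>()

    public int scoreAt(int index) {
        return scores.get(index);
    }

    // Declarative caching: the result of an expensive, rarely-changing
    // database lookup is reused instead of being recomputed on every request.
    @Cacheable("topScores")
    public List<Integer> topScores() {
        return expensiveDatabaseQuery();
    }

    private List<Integer> expensiveDatabaseQuery() {
        // placeholder for the real DB access
        return new ArrayList<>(scores);
    }
}
```

Neither change touches the business logic, which is exactly the point: the readability cost is close to zero.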

So, with these kinds of things, you save a lot of processing power and memory even though you didn’t have any visible problems. But you didn’t have problems only because you had enough hardware to mask them, or because you didn’t have enough traffic/utilization to actually see them. And performance tests/profiling didn’t show a clear bottleneck. So you optimize “in advance”, but not prematurely.

An important note here is that I mean mainly web applications. For desktop applications the deficiencies do not multiply. If you have a piece of desktop code that makes the system consume 20% more memory (ask Adobe), then whatever – people have a lot of memory nowadays. But if your web application consumes 20% more memory for each user on the system, and you have 1 million users, then the absolute value is huge (for example, an extra megabyte per user across a million users is roughly a terabyte), although it’s still “just” 20%.

The question is – is there a fine line between premature and proper optimization? Anything that makes the code “ugly” and does not solve a big problem is premature. Anything that takes two weeks to improve performance by 5% is premature. Anything that is justified with “but what if some day trillions of aliens use our software” is premature. But anything that improves performance without affecting readability is a must. Anything that improves performance through nothing more than better configuration is a must. And anything that makes the system consume 30% fewer resources and takes a day to implement is a must. To summarize – if neither readability nor maintainability is damaged and the time taken is negligible, go for it.

If every optimization is labeled as “premature”, a system may fail without any visible performance bottleneck. So assess each optimization rather than automatically concluding it’s premature.

Published at DZone with permission of Bozhidar Bozhanov, author and DZone MVB. (source)

(Note: Opinions expressed in this article and its replies are the opinions of their respective authors and not those of DZone, Inc.)


Comments

Riccardo Cossu replied on Thu, 2012/12/06 - 8:42am

Great post!

I think that dogmatic labels like "premature optimization" are always bad in software development; as long as you know what you're doing (and thus know the pros and cons of the different possible solutions) and you make an informed choice, there's nothing wrong with optimizing the way you describe.

Silvio Bierman replied on Thu, 2012/12/06 - 11:32am

Whether optimization is premature is by definition a judgement call. It is stupid to ignore performance aspects during development. It is also stupid to put too much focus on them.

A good developer will do things right the first time. Using a linked list if you need to index it is plain stupid and a good developer will never do that. No need to optimize if you did the right thing the first time.

Once it becomes unclear what the right thing is one should pick the simplest solution. It is always wise to maintain a set of side notes where developers can annotate potential room for optimization.

Your caching example is something that might very well qualify as premature optimization. It is not a transparent optimization, since it introduces:
-memory overhead
-staleness issues
-an extra dependency with an external library or system
-general complexity

Only if it can be proven that these are outweighed by the advantages can caching be considered a viable option.

Depending on the situation the same thing might go for the binary format. It has some disadvantages as well:
-potential for hardware dependencies (bit-ness, endian-ness)
-reduced transparency at the transport level (logging/debugging of communication)
-ordering-, representation- and in general version-sensitive and thus less flexible

You said yourself that it was easy to convert the plaintext communication to binary. Why not do that afterwards, when the system is functioning and drastic changes in the protocol have become less likely AND you can demonstrate that changing it adds value?

As I said: these are all judgement calls. Knuth did not mean to say we should write inefficient programs. He meant to say that it is more important for code to be correct than efficient. If the former can be guaranteed in the broadest sense feel free to focus on the latter.

Lund Wolfe replied on Sat, 2012/12/08 - 4:08pm

It is a judgement call that shows your aptitude and experience.

I agree with the general XP principle of "You Aren't Gonna Need It".  Get the basic design right and get the system functional first.  If it is modular or OO you should be able to change implementation and optimize later as needed.  Don't add complexity and risk or hamper readability/maintainability up front.

At the same time, don't make it look like you were born yesterday by not following best practices or not using correct data structures and euphemistically call it "technical debt".  Get it right the first time.

In general I would prefer a readable/maintainable format like text, but I think best practices indicate a binary format for network communication efficiency over xml/text, unless platform independence requires text.  "Don't Repeat Yourself" and don't waste time, space, or code.  Use a cache or data structure to do one thing in one place.

Philippe Lhoste replied on Wed, 2012/12/19 - 6:17am

Of course, I agree with the article.

Sometimes the line is thin between "optimizing an algorithm" and "fixing a bug that makes performance poor". The simple linked list example is quite illustrative: a good coder wouldn't have used the wrong structure, so it is more on the bug side. Another example is using string concatenation in a loop to build a string: not using a StringBuilder is on the verge of a bug (unless you know there will always be only a few dozen iterations, perhaps).
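
A minimal illustration of that point (hypothetical code, not from any application discussed here): the two methods produce the same result, but the first copies the whole intermediate string on every iteration.

```java
import java.util.List;

public class ConcatDemo {

    // O(n^2): each += allocates a new String and copies everything built so far.
    static String concatInLoop(List<String> parts) {
        String s = "";
        for (String part : parts) {
            s += part;
        }
        return s;
    }

    // O(n): StringBuilder appends into a single growable buffer.
    static String concatWithBuilder(List<String> parts) {
        StringBuilder sb = new StringBuilder();
        for (String part : parts) {
            sb.append(part);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        List<String> parts = List.of("not ", "all ", "optimization ", "is ", "premature");
        System.out.println(concatWithBuilder(parts).equals(concatInLoop(parts))); // true
    }
}
```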

Cases can be subtler. We have a JTable with a lot of elements, taking quite a chunk of memory. Sorting is done via an indirection array; well, an array list, actually, as the number of elements can vary. So we stored Integer objects in it, instead of simple ints. The memory impact was sizable. Yes, it is a desktop application, and users are supposed to have plenty of memory, but you still don't necessarily want to saturate their system!

Some reasons: in enterprises, people don't necessarily have recent computers, so memory may not be as plentiful as you say; in Java applications we have to set a maximum memory usage, and you should remain within those constraints; and users just don't like to see their system go swapping because a rogue application is eating up memory. I know that the Firefox devs worked a lot to improve the browser with regard to memory consumption, so this is still relevant to the desktop, too. And don't forget that 32-bit Windows is still limited in maximum usable memory.

Back to my JTable: a fellow developer made a specialized ArrayList for integers. It was fine and dandy, but we needed to sort the list using a comparator (since the integers are just indirection indexes of the data to really compare). Probably for lack of time, my colleague took the easy route: dump all the data into an ArrayList<Integer> and use Collections.sort() to sort it with a comparator.

As a result, we had a peak in memory consumption and a lot of garbage generated (all those Integer objects), so the GC was eating a lot of CPU to take care of it: it could take several minutes to display the JTable!

I bit the bullet and implemented a sort myself (using a modified version of introsort; I am not fool enough to create my own algorithm...), replacing the hard-coded '<' signs with the provided comparator. And while still slow (a lot of data to transmit between client and server), the memory usage and speed were significantly improved.
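
To show the shape of that idea (a plain quicksort rather than introsort, and not the actual application code): sort an int[] of indirection indexes through a comparator, so no Integer boxing is needed.

```java
import java.util.Arrays;

public final class IndexSort {

    /** Comparator over primitive indexes, so no Integer boxing is needed. */
    @FunctionalInterface
    public interface IntComparator {
        int compare(int a, int b);
    }

    /** In-place quicksort over an int[] of indexes, ordered by the comparator. */
    public static void sort(int[] indexes, IntComparator cmp) {
        quicksort(indexes, 0, indexes.length - 1, cmp);
    }

    private static void quicksort(int[] a, int lo, int hi, IntComparator cmp) {
        if (lo >= hi) {
            return;
        }
        int pivot = a[lo + (hi - lo) / 2];
        int i = lo, j = hi;
        while (i <= j) {
            while (cmp.compare(a[i], pivot) < 0) i++;
            while (cmp.compare(a[j], pivot) > 0) j--;
            if (i <= j) {
                int tmp = a[i]; a[i] = a[j]; a[j] = tmp;
                i++; j--;
            }
        }
        quicksort(a, lo, j, cmp);
        quicksort(a, i, hi, cmp);
    }

    public static void main(String[] args) {
        double[] data = {3.5, 1.2, 9.9, 0.4};  // the values to really compare
        int[] view = {0, 1, 2, 3};             // indirection indexes into data
        sort(view, (x, y) -> Double.compare(data[x], data[y]));
        System.out.println(Arrays.toString(view)); // [3, 1, 0, 2]
    }
}
```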

This lengthy anecdote (sorry...) is meant to show that desktop optimizations are still often needed, and that raw CPU power and memory galore are not an excuse for sloppy (or hasty) solutions. :-)

Robert Brown replied on Wed, 2012/12/19 - 9:37am in response to: Philippe Lhoste

Hi Philippe,  

I would call what you did with the JTable Just In Time optimization.  You implemented an algorithm with the most readable and maintainable code in the shortest amount of time by using an ArrayList of Integer.  After noticing the impact on memory and speed, you made adjustments because those impacts were not acceptable.  This is an example of good optimization technique.

If instead you had decided to rewrite the JTable to perform the sort, and had optimized the internal data structures to sort quickly and with low memory BEFORE noticing the impact, that would be premature optimization.

Note I would say that the binary implementation in the article borders on the premature as well.  If the code was already written with the string implementation, then there was no need to re-implement it.  If, on the other hand, this was a design discussion of what would be best to do, then, because it takes a small amount of time and has a low impact on readability and maintainability, it would probably be a good use of the time to implement it.

As was said above, I believe the fundamentals (readability and maintainability) should be correct before optimization takes place.  Optimizing during design is OK if you know of a possible bottleneck going in, e.g. if you have a complex UI, don't use nested divs in HTML when a single div with the appropriate CSS will work (and in my opinion it will make things more readable and more maintainable as well :)).

Thanks
Bob
