Mike Torrettinni

How do you identify bottlenecks in Delphi code?


I'm interested in how you guys identify bottlenecks in your source code.

When I profile my code, I usually try to optimize anything that could reduce execution time by at least 10%.

It also depends on what I see that can be optimized, or what is very clearly badly designed. Sometimes I think I can improve something and it ends up even worse 😉

 

When do you decide you will try to optimize certain parts of the code?

How do you define a bottleneck? Is it by the expected percentage of improvement (5, 10, 50%...)?

Do you ever optimize something that is not a bottleneck, but a customer/user is requesting performance improvements?


 


Guest

In fact, you will only really see it in critical operations or big processing jobs!

The delay can be perceptible when comparing the execution of the same process on different systems (hardware or OS), taking the characteristics of each system into account.

 

Minimizing code doesn't always mean better performance!


This is very relative: each professional will give you the answer that suits them best, and to them it will make more sense than any other.

 

For me, it only makes sense to give values in percentages if, and I mean only if, you know exactly what the total value is for the statistic in question.

 

So, if I know that 100% is the maximum and desired value for a "Select * from Employee", any result other than that can be calculated against it.

 

Now, if I have no idea what the maximum value for a process is, how could I estimate a percentage?

 

Okay, you could say: add up all the values you found, take the maximum, and then you will have your "100%". But will you, really?

 

And what if I arbitrarily want to use the average value, or the average deviation of the values found, as my "100%"?

 

Is that value correct as a reference point for the values to be found? That is, is it really "100%"?

Understood?

 

I prefer to stick with the following: for this code, this performance is acceptable! For this other one, it is not acceptable! So, let's see what we can fix here...

 

 

hug


Are you using a Profiler to measure the performance of your application, or simply timing specific code snippets? I'd suggest a Profiler if you aren't using one now... you'll typically see the big bottlenecks pretty easily.
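For quick ad-hoc timing of a specific snippet (as opposed to full profiling), Delphi's `System.Diagnostics.TStopwatch` works well; a minimal sketch, where `DoWork` is just a placeholder for the code being measured:

```pascal
uses
  System.SysUtils, System.Diagnostics;

procedure TimeIt;
var
  SW: TStopwatch;
begin
  SW := TStopwatch.StartNew;  // uses the high-resolution performance counter
  DoWork;                     // placeholder: the code under measurement
  SW.Stop;
  Writeln(Format('DoWork took %d ms', [SW.ElapsedMilliseconds]));
end;
```

Snippet timing like this is useful for confirming a suspected hotspot, but only a profiler shows you where the time goes across the whole application.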

1 minute ago, Darian Miller said:

Are you using a Profiler to measure the performance of your application, or simply timing specific code snippets? I'd suggest a Profiler if you aren't using one now... you'll typically see the big bottlenecks pretty easily.

Yes, I use a profiler to see which methods get called most and what percentage of total runtime each method takes. Then I decide which ones to optimize, sometimes based on percentage of total runtime, sometimes based on number of calls.

 

8 minutes ago, emailx45 said:

I prefer to stick with the following: for this code, this performance is acceptable! For this other, it is not acceptable! So, let's see what we can fix here ...

You don't measure the runtime of methods?

 

Sometimes it's very obvious what the bottleneck is, sometimes it's not... if you have 5 methods that each take 15% of total runtime, how do you decide which one to work on? Some methods could use simple optimization tricks, some need a different data structure to work with, or different sorting algorithms... it can take hours to implement a new approach, and then you measure it and find you didn't gain anything.


I've been reading a lot about optimization, and it's interesting to see the claim that the phrase 'premature optimization' is usually just BS, except in extremely basic cases. I agree with this, and by attempting some minor optimizations I learned how to optimize other, bigger parts of the code.


As a developer improves, (hopefully) they can write more optimized code in about the same amount of time as they used to write less optimized code: knowing when to use a TStringBuilder instead of concatenating strings, for example, and knowing when doing so will make a difference; recognizing the use cases and intuitively selecting the data structure most suited to them; and so on. Often it does not take much longer to write faster code than slower code, if you know what you are doing.
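The TStringBuilder point can be sketched like this: with many appends in a loop, the builder grows an internal buffer instead of reallocating and copying the whole string on every `+` (the CSV-building function here is just an illustrative example):

```pascal
uses
  System.SysUtils;

function BuildCsv(const Items: TArray<string>): string;
var
  SB: TStringBuilder;
  I: Integer;
begin
  SB := TStringBuilder.Create;
  try
    for I := 0 to High(Items) do
    begin
      if I > 0 then
        SB.Append(',');     // cheap append into the builder's buffer
      SB.Append(Items[I]);  // no full-string copy on each iteration
    end;
    Result := SB.ToString;
  finally
    SB.Free;
  end;
end;
```

For a handful of short strings, plain concatenation is fine and clearer; the builder pays off when the number of appends is large or unbounded.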

 

Beyond that, I would not do much optimization until you have specific use cases that are performing badly. Then you need to profile those use cases. Once they have been profiled, sometimes it is obvious what to tackle; sometimes, sadly, it's death by 1000 cuts. But I do not think premature optimization is BS. It happens all the time and is generally a waste. Until you have specific use cases, it does not make much sense to spend much effort optimizing, beyond what you are naturally able to do based on your skill level. You can spend hours optimizing some data structure that was never going to be a problem in the first place.


 

5 hours ago, Dave Novo said:

You can spend hours optimizing some data structure that was never going to be a problem in the first place.

Isn't that how you learn and gain experience? You tackle a problem and try to fix or improve it. Sometimes it works; sometimes the experience isn't there yet, and you try another approach or come back at another time.

12 hours ago, Mike Torrettinni said:

I've been reading a lot about optimization, and it's interesting to see the claim that the phrase 'premature optimization' is usually just BS, except in extremely basic cases. I agree with this, and by attempting some minor optimizations I learned how to optimize other, bigger parts of the code.

Sorry, can't agree. There are many good reasons to rework code, and performance is just one of them. When I was learning to code, many years ago, a colleague advised "first make it work, then make it fast."

 

Performance optimization should be done because of performance issues, not simply to see whether you can buy another 10% improvement in a routine which doesn't affect user perception at all.

 

Good reasons to rework code include clarity, reduced coupling, better organization of modules, improving naming, adding documentation of the need for the actions in routines. 

 

Seeking to optimize every routine in a program is a waste of energy, and a needless cost. Justifiable for a hobby, or in open-source, perhaps, but not in commercial work.


@Mike Torrettinni when you talk about a 10% improvement, do you mean in the execution time for one function, or do you mean the overall execution time for the program, or perhaps the user visible task within the program?

12 hours ago, Mike Torrettinni said:

I've been reading a lot about optimization, and it's interesting to see the claim that the phrase 'premature optimization' is usually just BS, except in extremely basic cases. I agree with this, and by attempting some minor optimizations I learned how to optimize other, bigger parts of the code.

The problem with quoting is that it is usually either cut down to the point of being a misquote, or it requires more context to be properly understood. Knuth is spot on, and if you read more than just the "Premature optimization is the root of all evil" phrase, it will become clearer what it really means.

 

https://softwareengineering.stackexchange.com/questions/80084/is-premature-optimization-really-the-root-of-all-evil

 

Guest

Maybe, and I said "maybe", the "KISS" principle has some way to moderate this topic.

https://en.wikipedia.org/wiki/KISS_principle

 

Of course, many know it, but others do not! So let's revive it a little, with Wikipedia's help for so many words! :classic_rolleyes:

Quote

KISS, an acronym for keep it simple, stupid, is a design principle noted by the U.S. Navy in 1960.[1][2] The KISS principle states that most systems work best if they are kept simple rather than made complicated; therefore, simplicity should be a key goal in design, and unnecessary complexity should be avoided. The phrase has been associated with aircraft engineer Kelly Johnson.[3] The term "KISS principle" was in popular use by 1970.[4] Variations on the phrase include: "Keep it simple, silly", "keep it short and simple", "keep it simple and straightforward",[5] "keep it small and simple", "keep it simple, soldier",[6] or "keep it simple, sailor".

Quote

Origin

The acronym was reportedly coined by Kelly Johnson, lead engineer at the Lockheed Skunk Works (creators of the Lockheed U-2 and SR-71 Blackbird spy planes, among many others).[3]

While popular usage has transcribed it for decades as "Keep it simple, stupid", Johnson transcribed it as "Keep it simple stupid" (no comma), and this reading is still used by many authors.[7]

The principle is best exemplified by the story of Johnson handing a team of design engineers a handful of tools, with the challenge that the jet aircraft they were designing must be repairable by an average mechanic in the field under combat conditions with only these tools. Hence, the "stupid" refers to the relationship between the way things break and the sophistication available to repair them.

The acronym has been used by many in the U.S. military, especially the U.S. Navy and United States Air Force, and in the field of software development.

Quote

Variants

The principle most likely finds its origins in similar minimalist concepts, such as Occam's razor, Leonardo da Vinci's "Simplicity is the ultimate sophistication", Shakespeare's "Brevity is the soul of wit", Mies Van Der Rohe's "Less is more", Bjarne Stroustrup's "Make Simple Tasks Simple!", or Antoine de Saint Exupéry's "It seems that perfection is reached not when there is nothing left to add, but when there is nothing left to take away". Colin Chapman, the founder of Lotus Cars, urged his designers to "Simplify, then add lightness". Heath Robinson machines and Rube Goldberg's machines, intentionally overly-complex solutions to simple tasks or problems, are humorous examples of "non-KISS" solutions.

A variant – "Make everything as simple as possible, but not simpler" – is attributed to Albert Einstein, although this may be an editor's paraphrase of a lecture he gave.[8]

 

OK, OK... anyone can use another "principle" to refute me!!! Don't worry! Life is full of them!!

 

But look at the names involved in this simple, yet effective, principle!!!
The name of the principle and its context may seem innovative... but it is not!
It goes back many centuries, and it has been used by many notable minds (those we call "geniuses") in several areas of knowledge!

So, yes, in many cases it is necessary to revisit the code; however, in its general context, "Keep It Simple, Stupid".

hug

 

 

53 minutes ago, Bill Meyer said:

Sorry, can't agree. There are many good reasons to rework code, and performance is just one of them. When I was learning to code, many years ago, a colleague advised "first make it work, then make it fast."

 

Performance optimization should be done because of performance issues, not simply to see whether you can buy another 10% improvement in a routine which doesn't affect user perception at all.

 

Good reasons to rework code include clarity, reduced coupling, better organization of modules, improving naming, adding documentation of the need for the actions in routines. 

 

Seeking to optimize every routine in a program is a waste of energy, and a needless cost. Justifiable for a hobby, or in open-source, perhaps, but not in commercial work.

I do understand that not all applications need to be really fast and optimized. My project is kind of a parser project, so it will never be fast enough; the best outcome would be for the parsing process to be done within the timeframe of the user pressing the mouse button down and releasing it.

 

So, I'm trying to get better at assessing bottlenecks. If a 10% reduction in process runtime is not worth improving, what is your criterion? How did you decide your last bottleneck needed fixing?

 

4 minutes ago, Mike Torrettinni said:

10% reduced process runtime

This is for sure worth doing.

 

But is that what you mean when you talk about a 10% improvement above? Or are you measuring 10% in just a single function?

 

And sorry to hark back to it, but in your StringReplace topic recently you were concentrating on utterly the wrong thing. There was a massive opportunity to optimise that code, and you have chosen not to take it. For reasons that escape me.

2 hours ago, Dalija Prasnikar said:

The problem with quoting is that it is usually either cut down to the point of being a misquote, or it requires more context to be properly understood. Knuth is spot on, and if you read more than just the "Premature optimization is the root of all evil" phrase, it will become clearer what it really means.

 

https://softwareengineering.stackexchange.com/questions/80084/is-premature-optimization-really-the-root-of-all-evil

 

Thank you, I read the same link a while ago, and the top answer states very clearly:

 

" What this means is that, in the absence of measured performance issues you shouldn't optimize because you think you will get a performance gain. There are obvious optimizations (like not doing string concatenation inside a tight loop) but anything that isn't a trivially clear optimization should be avoided until it can be measured. "

 

I understand this as: as long as it's a measured bottleneck, it is worth looking into. 

 

If you want to get away from the theory, I would love to hear about your last bottleneck and how you identified it.

4 minutes ago, Mike Torrettinni said:

I understand this as: as long as it's a measured bottleneck, it is worth looking into. 

You began with the question of how to identify a bottleneck. The first criterion should be whether it is observable by your users. If you have a button click event which executes in 200 ms, and you can cut that in half, you may get satisfaction from doing it, but the user will not see the difference.

Ordinarily, unless a) an action takes at least dozens of seconds to complete, b) it is frequently used, and c) it can be significantly sped up (by which I mean 2x or more), the time invested is unlikely to be repaid in user satisfaction.

If you are analyzing or converting some kind of data, and the amount to process is large, then you are likely looking in the wrong place. Some years ago, I had a spreadsheet which the app took a few minutes to produce. In the end, it was not code rework, but replacement of some memory datasets which made a difference. Profiling showed that I might improve code execution by 10% or so, but changing to a more suitable component brought a speedup of over 20 times.

Quote

I understand this as: as long as it's a measured bottleneck, it is worth looking into. 

In a situation where you have infinite time and infinite resources, sure, the above statement is true. Most of us are not in that situation.

 

If you simply want to answer "what is a bottleneck", that is easy: some single (or few) part(s) of your program that throttle the entire execution path.

 

The more difficult question is which bottleneck is worth investigating and fixing. For example, let's say you have some truly awful code in one part of the program that is causing something that should take 5 seconds to take 5 minutes. Really bad. But because of how your end users use the program, only 1% of people encounter this bottleneck. And those people all run that part of the program overnight anyhow, as part of a larger batch run. Is this even worth "looking into"? Maybe, if the "looking into" takes 10 minutes. Maybe not, if the "looking into" takes a day. That is a day you are taking away from adding features, or fixing bugs that affect the other 99% of your customers.

 

How does your answer change if the bottleneck is such that something that should take 5 seconds takes 12 seconds?

 

IMO, software development is not hard, and doesn't usually fail, because of programming challenges, i.e. how to code the algorithm or how to add the function. Software development is hard because it requires choices like the one described above, to which there is no clear-cut answer that anyone can give you that applies to all cases. Most coders can figure out how to write whatever SQL query you need and design a database that works. Successful projects / developers / teams consistently come up with the right answers to the more difficult questions, like: which refactorings are worth the time they take? What level of "engineering" (when are you over-engineering, when are you under-engineering?) is appropriate for the task at hand? Which bugs are worth fixing? How should they be fixed (quick hack or deep rearchitecting)? All of those questions are easy when you have nothing else to do, but usually that is not the case.

40 minutes ago, Bill Meyer said:

 Some years ago, I had a spreadsheet which the app took a few minutes to produce. In the end, it was not code rework, but replacement of some memory datasets which made a difference. Profiling showed that I might improve code execution by 10% or so, but changing to a more suitable component brought a speedup of over 20 times.

Thanks. Oh, don't get me wrong, I've had my share of mistakenly going down the wrong path, and I surely will again 😉

Unfortunately, in most of my cases I can't just replace parts of the code with a ready-made, faster alternative. Either it doesn't exist, or I don't know about it, or it would take a big sum of money to hire an expert.

 

Not sure if you noticed some of my topics about optimization and micro-optimization... as the comments show, some people are really triggered that I post such basic things as big 'revelations', but for me it is a day-by-day discovery of new things. A lot of them happen because I try to optimize the wrong things; some of them proved to be real improvements.

1 hour ago, Mike Torrettinni said:

If you want to get away from the theory, I would love to hear about your last bottleneck and how you identified it.

You may have noticed my post about "unwanted optimization". I knew about the bottleneck, but I didn't know how to deal with it. I thought it had to be that way.

4 minutes ago, Stano said:

You may have noticed my post about "unwanted optimization". I knew about the bottleneck, but I didn't know how to deal with it. I thought it had to be that way.

I searched for it but couldn't find it. Can you point me to the comment or topic?


I'd rather describe it again.
I was dealing with a bulk write to a JSON file. Previously, I wrote each item separately, which was unbearably slow. Switching to a bulk write gave an enormous speedup: from a few seconds to practically nothing.
All my forms had also been taking a very long time to close, 5 to 10 seconds. That's when I realized it was caused by incorrect writing to the JSON: I was going through the whole form and writing to JSON for each affected component. Now there is only a small write in one place, and the forms close immediately.
This was unplanned optimization. Maybe a profiler could have found it.

I only mentioned it for variety. I'm not a programmer; I'm just playing at it :classic_wink:

3 hours ago, Dave Novo said:

The more difficult question is which bottleneck is worth investigating and fixing. For example, let's say you have some truly awful code in one part of the program that is causing something that should take 5 seconds to take 5 minutes. Really bad. But because of how your end users use the program, only 1% of people encounter this bottleneck. And those people all run that part of the program overnight anyhow, as part of a larger batch run. Is this even worth "looking into"? Maybe, if the "looking into" takes 10 minutes. Maybe not, if the "looking into" takes a day. That is a day you are taking away from adding features, or fixing bugs that affect the other 99% of your customers.

I actually have similar situations quite often, and in the topic about using Pos before StringReplace I describe the benefit in terms of how many developers are affected. So, as you describe, not all optimizations affect all users. But if one does affect some, why shouldn't I improve it?
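For readers who missed that topic, the idea was roughly this: guard the relatively expensive StringReplace call with a cheap Pos check, so strings containing no match skip the replace entirely. A sketch of the pattern, not the exact code from that thread:

```pascal
uses
  System.SysUtils;

function ReplaceIfPresent(const S, OldPattern, NewPattern: string): string;
begin
  // Pos is a cheap scan; StringReplace allocates and copies a result
  // string even when nothing matches, so skipping it is a win when
  // most of the input contains no occurrences of OldPattern.
  if Pos(OldPattern, S) > 0 then
    Result := StringReplace(S, OldPattern, NewPattern, [rfReplaceAll])
  else
    Result := S;
end;
```

Note that both Pos and the call above are case-sensitive; a case-insensitive variant would need a matching guard (e.g. on lowercased copies), or the guard's savings disappear.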

 

I have a feature that exports/imports data, and every feature in the project adds some execution time, so every year I optimize parts of it so that it stays around 20s. If I didn't optimize, it would probably be 60s by now. The thing is that every time I profile it, I see something I can improve that I didn't see the last time, because I have more experience now. So it's not just a problem of deciding what to optimize; it's also a matter of what I know how to optimize.

 

 

2 minutes ago, Mike Torrettinni said:

Pos before StringReplace

Your conclusion in that topic is wrong. You can make huge algorithmic gains but you choose not to. I've no idea why not. 

4 minutes ago, Mike Torrettinni said:

But if it does some, why shouldn't I improve it?

If you're bored, do it :classic_smile:. Learning is a strong argument.
You still don't listen to David :classic_angry: I personally don't know what he's talking about; the issue is beyond my comprehension :classic_blush:

