Mike Torrettinni

Micro optimization - effect of defined and not used local variables

Recommended Posts

On 11/28/2020 at 10:15 PM, Stefan Glienke said:

And on Win32 those try/finally blocks have a significant effect, at times even worse than a heap allocation, because they completely trash a part of the CPU's branch prediction mechanism - see RSP-27375

Especially tricky to optimize are hidden managed variables. Delphi creates those when it needs to store intermediate results of managed types [as in TStringList.Add(Format('Test %d', [123]))].

 

 

It would be great if there were developer tools that would point out the creation of such hidden variables.
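For illustration, here is roughly what the compiler does behind the scenes for that one-liner (a sketch only; the routine and variable names are invented, and the real generated code uses the RTL's finalization call rather than a visible assignment):

procedure AddFormatted(List: TStringList);
var
  Tmp: string; // stands in for the compiler-generated hidden temporary
begin
  // The hidden temporary forces an implicit try/finally around the body
  // so the string is released even if Add raises.
  Tmp := Format('Test %d', [123]);
  try
    List.Add(Tmp);
  finally
    Tmp := ''; // the finalization the RTL performs for the temporary
  end;
end;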

1 hour ago, A.M. Hoornweg said:

[as in TStringList.Add(Format('Test %d', [123]))].

It would be great if there were developer tools that would point out the creation of such hidden variables.

It's not that hard to spot. Why would we need a tool for that?

59 minutes ago, Anders Melander said:

It's not that hard to spot. Why would we need a tool for that?

I mean a tool to analyze large existing projects with many units. Maybe something like TMS Fixinsight Pro.

Just now, A.M. Hoornweg said:

I mean a tool to analyze large existing projects with many units. Maybe something like TMS Fixinsight Pro.

Yes, I understood that.

My point is that searching for this pattern, which is a perfectly normal and valid pattern, would be premature optimization. If there is a performance problem in an application then you analyze the application (for example with a profiler) and locate the hot spots. It is then easy to identify this pattern in those hot spots just by reading the code. You don't need a tool for that.

 

It's like having a tool for identifying loops because loops are slower than no loops.

On 11/30/2020 at 11:30 AM, Anders Melander said:

My point is that searching for this pattern, which is a perfectly normal and valid pattern, would be premature optimization. If there is a performance problem in an application then you analyze the application (for example with a profiler) and locate the hot spots. It is then easy to identify this pattern in those hot spots just by reading the code. You don't need a tool for that.

I disagree - this is exactly the mindset that we were trained into all these years because we did not know any better - the world moved on - heck, there are people working on programming tools based on ML so they can suggest refactorings based on refactorings you have done in the past! And yet here we are, mostly doing YOLO-driven development - "if it ain't broke, it might be OK" (OK, I am exaggerating here).

 

If the tooling can point out possible optimizations because it understands what you are doing, that can only be good, regardless of how much of a measurable improvement it makes.

And even if it's just for some junior coders at Embarcadero slapping together some ... ahem ... non-ideal code that never gets properly reviewed for lack of time.

It took them years and an actual change to the FreeAndNil function to find bugs in their code that static code analysis could have found ages ago.

9 minutes ago, Stefan Glienke said:

- the world moved on -

Sadly we don't move.

Quote

"if it aint break it might be ok" (ok, I am exaggerating here).

The fear of compatibility issues and breaking changes is itself an evil! An example: the Exit routine in Delphi was implemented as a function-like construct instead of a true keyword just to make some lazy developers happy! I know a ton of tools/compilers that broke compatibility for good reasons (LLVM did this many, many times; JavaScript did as well and completely changed some of its core logic, Perl, ...). Sometimes a breaking change is a must-have and is much better in the long run.

23 minutes ago, Stefan Glienke said:

I disagree

I'm not sure what it is you disagree with.

Optimizing code when there is no need to optimize it is by definition premature optimization.

 

2 minutes ago, Stefan Glienke said:

If the tooling can point out possible optimizations because it understands what you are doing, that can only be good, regardless of how much of a measurable improvement it makes.

The tool might be able to spot where code can be optimized but it will not be able to spot where that optimization is relevant.

 

For the vast majority of the code I write I make an effort to make the code as readable and verbose as possible, at the cost of performance, because the possible gain of optimization is simply irrelevant. However, when I know the code I'm writing is performance critical then I pay attention to what the compiler will do, alignment, loops, implicit finalization, etc., but that's the exception. Of course there are patterns that I have learned to use regardless of the code being performance sensitive or not (e.g. pass managed types by const).
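As an illustration of that last point (a minimal sketch, with made-up routine names): with a plain string value parameter the callee typically takes its own reference, so the compiler adds a reference-count bump plus an implicit try/finally to release it; with const neither is needed.

// Value parameter: extra refcount traffic and an implicit try/finally
// in the callee to release its local reference.
procedure LogLine(Text: string);
begin
  Writeln(Text);
end;

// Const parameter: the callee borrows the caller's reference, so no
// hidden finalization code is generated for the parameter.
procedure LogLineConst(const Text: string);
begin
  Writeln(Text);
end;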

In the cases where I get it wrong, or circumstances change, a profiler will tell me exactly where to focus my efforts and, as I said, it's then trivial to correct.

 

I'm not against tools that can spot generic problems, but a tool that identifies every case of a function returning a managed type, or a managed type passed as a parameter, would be worthless to me. There are better ways of solving that "problem".

6 minutes ago, Mahdi Safsafi said:

The fear of compatibility issues and breaking changes is itself an evil!

Easy to say when you have already enjoyed the benefit of that backward compatibility.

4 minutes ago, Anders Melander said:

Easy to say when you have already enjoyed the benefit of that backward compatibility.

Who didn't? Hey, why can't we have both? An example: Python 2 and Python 3, or setting a deadline for an old feature, ...

My point is that if a breaking change is good in the long run ... then we should adopt it rather than keep the ugly one.

18 minutes ago, Anders Melander said:

Optimizing code when there is no need to optimize it is by definition premature optimization.

What if we could write optimized code right from the start and did not have to deal with all that shit because the compiler has the intelligence of a rock?

What if good coding practices could be taught by the editor via suggestions (look at quick actions in Visual Studio, which help you with many different things - from fixing formatting to suggesting refactorings)?

17 minutes ago, Anders Melander said:

Easy to say when you have already enjoyed the benefit of that backward compatibility.

Much backwards compatibility is eyewash and simply means: "we did not change the signature but sacrificed some firstborn to make it still work".

If you provide - there it is again - tooling to detect and guide you with moving forward (yes, often backwards compatibility is nice because I don't have to ifdef my code for a dozen different versions), then breaking changes are not bad.

1 minute ago, Mahdi Safsafi said:

My point is that if a breaking change is good in the long run ... then we should adopt it rather than keep the ugly one.

Agree. It's a balance, as I'm sure Embarcadero knows and takes into account.

 

As in all evolution, too much change leads to extinction; too much stability leads to stagnation and obsolescence.

2 minutes ago, Stefan Glienke said:

What if we could write optimized code right from the start and did not have to deal with all that shit because the compiler has the intelligence of a rock?

What if good coding practices could be taught by the editor via suggestions (look at quick actions in Visual Studio, which help you with many different things - from fixing formatting to suggesting refactorings)?

You'll get no complaint from me on that.

On 11/25/2020 at 6:37 PM, Kas Ob. said:

I forgot to mention a case where it is not ugly but appreciated for speed, with minimal cosmetic effect.

When your function is a method of a class/record, move these local managed-type vars to private fields, even when each one of them is not used outside one method; here you can recycle them. Just remember they are initialized with some value from the previous usage, so be careful - for example, don't assume a string used like that is empty.

This is the kind of hint that must be prefixed with a "Use with care and only if you absolutely need this!" warning.

While this could add some speed, it also reduces encapsulation and makes testing harder. When local variables really kill performance, it's better to use var parameters.
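A minimal sketch of that idea, with invented names: instead of promoting the locals to private fields, the hot routine declares no managed locals at all and works on a buffer the caller owns, so the implicit try/finally is paid once at the call site rather than on every call.

// With a managed local, every call sets up and tears down an implicit
// try/finally just for Line.
procedure AppendItem(List: TStringList; Index: Integer);
var
  Line: string;
begin
  Line := 'Item ' + IntToStr(Index);
  List.Add(Line);
end;

// With a caller-supplied buffer, this routine has no managed locals;
// the caller declares the string once for the whole loop.
procedure AppendItemInto(List: TStringList; Index: Integer; var Line: string);
begin
  Line := 'Item ' + IntToStr(Index);
  List.Add(Line);
end;

// Caller side:
//   var Line: string;
//   for i := 0 to Count - 1 do
//     AppendItemInto(List, i, Line);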

 

On 11/30/2020 at 11:02 AM, A.M. Hoornweg said:

Especially tricky to optimize are hidden managed variables. Delphi creates those when it needs to store intermediate results of managed types [as in TStringList.Add(Format('Test %d', [123]))].

 

 

It would be great if there were developer tools that would point out the creation of such hidden variables.

Would be nice indeed. It took me plenty of time to discover that any string concatenation, or a routine returning a temporary string, causes the compiler to generate a hidden try-finally block.

 

 

On the subject: my personal rule is to just keep in mind that local managed variables add overhead. If a routine contains a string operation just for a rare error report, don't use concatenation or Format; use Exception.CreateFmt instead, or a nested routine. All arguments of record or managed types should be passed by reference (const/out/var). But beyond that, don't bother if a routine requires several strings or does something non-optimal, etc., until it causes a noticeable slowdown. I really do string concats in a loop and other performance-wise awful things to keep code short and simple when a function is executed relatively rarely.
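To illustrate the error-report advice (a sketch; the routine names are made up): formatting the message yourself drags a managed local, or a hidden temporary, into the routine even on the happy path, while Exception.CreateFmt keeps the formatting inside the exception constructor.

// The Msg local (or a hidden temporary, if Format is written inline in
// the Create call) forces an implicit try/finally on every call, even
// though the error is rare.
procedure CheckValueSlow(Value: Integer);
var
  Msg: string;
begin
  if Value < 0 then
  begin
    Msg := Format('Invalid value: %d', [Value]);
    raise Exception.Create(Msg);
  end;
end;

// CreateFmt leaves this routine free of managed locals on the
// non-error path.
procedure CheckValue(Value: Integer);
begin
  if Value < 0 then
    raise Exception.CreateFmt('Invalid value: %d', [Value]);
end;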

On 12/17/2020 at 9:23 AM, Fr0sT.Brutal said:

On the subject: my personal rule is to just keep in mind that local managed variables add overhead. If a routine contains a string operation just for a rare error report, don't use concatenation or Format; use Exception.CreateFmt instead, or a nested routine. All arguments of record or managed types should be passed by reference (const/out/var). But beyond that, don't bother if a routine requires several strings or does something non-optimal, etc., until it causes a noticeable slowdown. I really do string concats in a loop and other performance-wise awful things to keep code short and simple when a function is executed relatively rarely.

A bad scenario in my code reminded me of this thread, so I re-read it to be sure I wasn't missing anything important. Very useful conclusion!

 

I had an example of an old string manipulation method which I replaced with a better one, and I set it up like this (not sure why I used this wrong approach):
 

procedure Work(var aStr: string);
var
  vTmp1, vTmp2: string; // only used by the dead code below, yet they still
                        // force an implicit try/finally for this routine
begin
  WorkBetter(aStr);
  Exit;

  // here was old stuff that handled the string slower than in WorkBetter
  ...
end;

Looking at the profiling results and the code, I was sure this method couldn't be the cause of any performance bottleneck, because it doesn't even touch the slow code! Well, of course I was wrong, because the compiler still sets up the implicit try/finally for the two local string vars.
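For reference, a sketch of the cleaned-up version (assuming the dead branch can simply be removed, or kept in a routine of its own so its locals go with it):

procedure Work(var aStr: string);
begin
  // No managed locals left here, so no implicit try/finally is emitted.
  WorkBetter(aStr);
end;

// If the old implementation must be kept around for reference, moving it
// into a separate routine confines its locals and their finalization
// overhead to that routine:
procedure WorkOld(var aStr: string);
var
  vTmp1, vTmp2: string;
begin
  // ... old, slower string handling ...
end;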

 

Thanks again!

