Code optimisation

Member
192 posts
Joined: April 2015
I have found myself thinking about code optimisation lately while working in Houdini.
This is not a topic I am particularly familiar with, but I am starting to use the Performance Monitor a lot more and I want to learn more about it.

To the question "is it bad to unnecessarily assign variables to as many things as possible?", I seem to get the answer: no; if anything, it is bad not to do that!

I did not expect this.

Is it useful to learn why it is this way, and about these things in general? Are there language-specific differences? What is a good way to learn about this? Is there a Houdini-specific way to learn about this?

Thanks for any advice. :-)
Edited by OdFotan - Dec. 21, 2022 06:25:56

Attachments:
performancetest_variableUse.hiplc (164.9 KB)
Screenshot 2022-12-21 at 12.22.43.png (458.0 KB)

Member
48 posts
Joined: August 2017
Your question only really makes sense with context about the kind of goal you're trying to achieve.

Every single operation you perform will cost something, but you do it because it has a purpose in your code.
What would be the point in testing whether assigning 5, 15, or 30 variables makes the code slower when those variables have no purpose for the algorithm? It's evident they will have some cost in performance, however insignificant it might be.
But when creating any algorithm, you create these variables because you need them: they might be used for the core of the algorithm, for cache optimisation, or for checking something. You use them because they are necessary, and therefore their 'cost' in performance is meaningless.

The real performance gains will always come from the algorithm itself.
For example, avoid using division when you can use a multiplication; division is noticeably more expensive for a computer than multiplication. If you need to halve a value, simply multiply it by 0.5 rather than dividing by 2.
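A minimal VEX sketch of the idea (f@value and f@halved are made-up attributes, just for illustration):

    // Halving by multiplication instead of division.
    float v = f@value;
    float halved = v * 0.5;   // cheaper than v / 2.0
    f@halved = halved;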
Sometimes you might create an algorithm which by itself is very slow, but find that adding a few extra variables acting as a temporary cache makes that same algorithm 1000x faster, simply by avoiding recomputing data you have already computed and stored in that cache.
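A minimal VEX sketch of that caching idea, where the 'expensive' term is a made-up stand-in:

    // Hoist the expensive term out of the loop and compute it once,
    // instead of recomputing the identical value on every iteration.
    float cached = noise(v@P * 4.0);   // stand-in for an expensive computation
    float accum = 0;
    for (int i = 0; i < 100; i++)
        accum += cached * i;           // reuses the cached value
    f@accum = accum;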

Think about whether your task can run in parallel or only serially: a Point wrangle will distribute the computation across multiple threads, whereas a Detail wrangle can only run on a single core.
On that matter, here is an example I ran into recently:

I had an operation that could run in a Point wrangle; however, a good chunk of the VEX code was rebuilding a very expensive array for every point. When you have 10,000 points that all have to run the same operation and produce identical data, you're wasting an enormous amount of performance. The trick?
I ran a Detail wrangle before the Point wrangle that calculated the array only once and stored it in a detail attribute; the Point wrangle then simply referenced it, resulting in about a 100x increase in performance.
I traded a bit of memory usage for performance.
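A minimal sketch of that pattern, with made-up attribute names. The Detail wrangle (running over Detail) builds the array once:

    // Detail wrangle: compute the expensive array a single time.
    float big[];
    for (int i = 0; i < 1000; i++)
        append(big, sin(i * 0.01));   // stand-in for the expensive per-element work
    f[]@cache = big;                  // store it as a detail attribute

The Point wrangle then just reads it back for every point instead of rebuilding it:

    // Point wrangle: reference the precomputed detail attribute.
    float big[] = detail(0, "cache");
    f@result = big[@ptnum % len(big)];   // arbitrary per-point use of the shared data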


Programming is like art: you get better with experience and discover tricks to do things better and faster. It's good to think about optimisation, but what matters first is that what you want to achieve actually works. There isn't really a 'right and wrong', only better or worse ways of doing things.
Programming is very iterative when experimenting: try to make something that works first, then try to make that same thing work faster.