PVK
Recent Forum Posts
Hardware, cores, and frustration: I want your opinion. Aug. 30, 2021, 9:41 a.m.
Hello. I am interested in your opinion on Houdini's hardware requirements.
I have heard that Houdini makes good use of CPU multithreading, and that OpenCL can speed up many simulations on the video card, but I have started to seriously doubt this.
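As a quick sanity check on the OpenCL side, something like the sketch below (assuming the third-party pyopencl package is installed) lists which OpenCL devices the driver exposes at all. It says nothing about what Houdini itself picks, it only confirms the GPU is visible to OpenCL.

# Rough sanity check, run outside Houdini: list the OpenCL platforms and
# devices the system driver exposes. Assumes "pip install pyopencl";
# Houdini ships its own OpenCL runtime, so this only shows what the driver
# reports, not which device Houdini ends up using.
import pyopencl as cl

for platform in cl.get_platforms():
    print("Platform:", platform.name)
    for device in platform.get_devices():
        mem_gb = device.global_mem_size / (1024 ** 3)
        print(f"  {device.name}: {mem_gb:.1f} GB, "
              f"{device.max_compute_units} compute units")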
The thing is, for 8 years I have been dreaming of building a computer that handles simulations well, but every time I upgrade the system I notice no improvement in that regard at all.
I understand that no computer calculates simulations in real time and you can't do without caching, but I would like to at least double the calculation speed.
Let's say I had this computer:
Intel Core i7-8700K (4.7 GHz, 6 cores)
GTX 1060 Ti (6 GB)
16 GB RAM (3200 MHz)
HDD
I created several dozen test scenes using different kinds of simulations and started upgrading components.
To begin with, I bought a video card, since it is needed for rendering and many other tasks: an RTX 3080. I tested it, and the tests showed no gain at all in the simulations, even with OpenCL enabled. The results were no different from the GTX 1060 Ti.
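One thing worth ruling out is whether Houdini's OpenCL context even lands on the RTX 3080 rather than a CPU OpenCL driver. A minimal launcher sketch, assuming the HOUDINI_OCL_* environment variables behave as documented (worth double-checking against the environment variables help page for your build):

# Minimal launcher sketch: ask Houdini to build its OpenCL context on a GPU
# device before starting it. Variable names are the documented HOUDINI_OCL_*
# overrides as I understand them; verify them for your Houdini version.
import os
import subprocess

env = dict(os.environ)
env["HOUDINI_OCL_DEVICETYPE"] = "GPU"    # prefer a GPU OpenCL device
env["HOUDINI_OCL_DEVICENUMBER"] = "0"    # first matching device (the RTX 3080 here)

# "houdini" must be on PATH (e.g. after sourcing houdini_setup); adjust as needed.
subprocess.run(["houdini"], env=env, check=False)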
I started looking for the bottleneck.
Next, I replaced the HDD with an M.2 SSD (about 10 times faster). I expected the simulations to load faster when reading from disk. In fact, nothing changed again.
I added RAM, up to 64 GB. I expected this to help at least in heavy scenes. In the heaviest cases it gave a 10-15% gain at best.
Then I swapped the processor for a Ryzen 9 5950X (4.6 GHz, 16 cores), currently the top of the line.
Here I expected to see the full benefit of 16 cores. But after running the tests, I saw only about a 30% gain in some types of simulations, and in others, to my surprise (like pure VEX code), FPS actually dropped by 53%. So on balance I got almost no improvement.
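For what it's worth, a roughly 1.3x best case going from 6 to 16 cores is consistent with Amdahl's law when around 20-25% of each cook stays single-threaded. A back-of-envelope sketch (the 1.3 figure is just the measured best case above, and clock-speed and memory differences between the two machines are ignored):

# Back-of-envelope Amdahl's law check: if a cook speeds up only ~1.3x when
# going from 6 to 16 cores, what fraction of it was parallel in the first
# place? Simplified: the old 6-core run is the baseline, clocks are ignored.

def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Ideal speedup vs. a single core for a given parallel fraction."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

observed = 1.3            # best-case gain measured going 6 -> 16 cores
old_cores, new_cores = 6, 16

# Solve observed = amdahl(p, new) / amdahl(p, old) with a coarse scan over p.
best_p = min(
    (p / 1000 for p in range(0, 1001)),
    key=lambda p: abs(
        amdahl_speedup(p, new_cores) / amdahl_speedup(p, old_cores) - observed
    ),
)
print(f"Implied parallel fraction: ~{best_p:.0%}")   # roughly 78%
print(f"Even 64 cores would then give only "
      f"{amdahl_speedup(best_p, 64) / amdahl_speedup(best_p, old_cores):.2f}x over 6 cores")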
As a result, I spent $4,083 upgrading the computer, and in the best case I got about a 30% gain, while some calculations now run even slower.
Of course, Redshift now renders beautifully, but damn, what kind of computer do these simulations need? Is it really the case that only supercomputers can cope with them, and creating effects is still the preserve of large companies? Tell me, what is wrong with this build? Which specific component matters most for simulations? Where is the bottleneck?
Ryzen 9 5950X (4.6 GHz, 16 cores)
RTX 3080
64 GB RAM (3200 MHz)
1 TB SSD (Samsung 970 EVO)
Did I draw the right conclusion that everything is computed on one core, so the number of cores hardly matters at all? And how do you deal with that in an era where everyone is chasing core counts while clock speeds have barely grown in the last 10 years?
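To separate "only one core is busy" from "the cores don't scale", a synthetic test outside Houdini can help. This is a toy (pure Python busywork, nothing like a real solver), just a sketch of how to measure the scaling itself:

# Synthetic core-scaling test: time the same CPU-bound work serially and with
# a multiprocessing pool. A real solver behaves differently (memory bandwidth,
# sync points), so this only demonstrates how to measure scaling, not Houdini.
import time
from multiprocessing import Pool, cpu_count

def burn(n: int) -> float:
    """Purely CPU-bound busywork."""
    total = 0.0
    for i in range(1, n):
        total += (i % 7) ** 0.5
    return total

if __name__ == "__main__":
    chunks = [2_000_000] * 32

    t0 = time.perf_counter()
    serial = [burn(n) for n in chunks]
    t_serial = time.perf_counter() - t0

    with Pool(cpu_count()) as pool:
        t0 = time.perf_counter()
        parallel = pool.map(burn, chunks)
        t_parallel = time.perf_counter() - t0

    assert serial == parallel  # same work, same results
    print(f"{cpu_count()} workers: serial {t_serial:.2f}s, "
          f"parallel {t_parallel:.2f}s, speedup {t_serial / t_parallel:.1f}x")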