Daniel Zimmermann
rayfoundry
About Me
Connect
LOCATION
Not Specified
Website
Houdini Engine
Availability
Not Specified
Recent Forum Posts
GLTF 2.0 file format June 15, 2018 3:19
+1
Submitted an RFE. Hope SideFX is not sleeping away this development.
Manjaro Update Ruins Houdini again June 5, 2018 7:17
HongMao
Thanks, I went for re-naming both libfreetype and zlib, since zlib errors came up with only dealing with libfreetype…
Works great!
Houdini 16.5 on Linux (in the Cloud) April 26, 2018 8:00
Hi there,
<TLDR>Performance issues with a headless Houdini Linux install on Amazon EC2</TLDR>
I'm getting strange performance numbers from Houdini on Linux compared to Windows 10 which I cannot explain. I hope there's someone with experience running a similar use-case to mine.
This is the situation:
Houdini 16.5.439 on Amazon Linux (Redhat EL/CentOS family) on a c5.18xlarge instance. The c5.18xlarge has 72 vCPUs (each vCPU = 1 Xeon hyperthread) and 137GB memory. This is (one of) the biggest instance types for compute-intensive work on AWS and it's quite a monster in benchmarks.
I'm not running the Houdini GUI on this machine, but hython, which I'm using to load a HIP file and cook it (headless).
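For context, the headless cook described above amounts to a hython script along these lines (a minimal sketch: the HIP path and ROP node path are placeholders, and it assumes Houdini's `hou` Python module, which is only available inside hython):

```
# Minimal hython sketch. Placeholder paths; `hou` is the Houdini Python
# module exposed by hython, not available in a plain Python interpreter.
import hou

hou.hipFile.load("/path/to/scene.hip")   # load the HIP file
rop = hou.node("/out/geometry1")         # hypothetical geometry ROP path
rop.render()                             # cook the network and write geometry files
```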
The thing is, I'm not rendering with Mantra, but cooking geometry files. That part absolutely works for me and is not part of the issue. The issue is the performance. I read so many good things about Houdini on Linux and expected it to be at least as fast as Windows on the same hardware.
The other machine I'm comparing against is a single-CPU Xeon E5-2620 v4 with 8 cores and 16 hyperthreads and 64GB memory, running Windows 10 Pro. From previous benchmarks, its performance should be about 50% of the c5.18xlarge for real-world use-cases where you also have disk IO etc.
The thing is that my Windows machine blows the Linux machine out of the water. It's up to 100% faster on certain tasks which are purely CPU-bound and involve no IO. An example is UVLayout, which can run on multiple cores at 100% utilization.
There's no dedicated GPU in the cloud server and also no desktop installed except for the stuff hython expects (mesa-libGL, …). There have previously been strange performance issues with certain NVidia drivers for seemingly graphics-unrelated actions such as cooking a node, so that's currently my best guess. But that would also mean that running Houdini on a server without a GPU is not really viable?
Cheers,
Daniel
P.S.: The desktop machine has run the job twice in the time I've been writing this, and the cloud server is still crunching with all 72 cores at 100% and hasn't finished yet.