oldteapot7 IMO it's a waste of time, programmer effort, and human resources at SESI.
I'm not sure SideFX are actually putting that much time into it? I would love it if they made the M-series a higher priority, I don't think ARM even got a mention during the H20 launch keynote.
I prefer using Houdini on Apple Silicon - I really only use a PC for lookdev, specifically RTX rendering performance. ARM chips are plenty powerful enough for 3D content creation nowadays.
Imagine if developers back in the day had said, "Why should I develop for Intel chips? I'm sticking with the 68000, that's good enough."
Chips are getting smaller and faster; that's the natural progression. I don't think it's realistic for a laptop to outperform a dedicated workstation just yet. They will always be behind, but it's quite phenomenal how powerful they've become after only two generations, IMO.
The M2 is as powerful as the 1950X I'm typing on. I can afford to run a sim for 20 minutes or so on the 1950X, so I could afford a couple of minutes extra on a laptop that can be taken anywhere. That's phenomenal, IMO.
Rule of thumb: if you don't need it, don't buy it. But it may just open up more options if you do.
oldteapot7 IMO it's a waste of time, programmer effort, and human resources at SESI.
People fall too hard for Apple marketing and propaganda...
The type of 3D we usually do in Houdini - heavy simulations, instancing, rendering, etc. - isn't what most people (or anyone, really) do on laptops. Yes, it's ridiculous to spend so much manpower on such a tiny group of users.
Sygnum The type of 3D we usually do in Houdini with heavy simulations, instancing, rendering etc. isn't what most or anyone does on laptops - ridiculous.
Theory Accelerated have done a brilliant job of optimising simulations on Mac GPUs using the Metal framework, as a plugin for Houdini. That's the kind of innovation I'd like to see within Houdini itself. Axiom Solver can even run on an iPad:
Sygnum isn't what most or anyone does on laptops - yes, it's ridiculous to use too much manpower on such a tiny group of users.
Think of it this way: the number of people who actually have MacBook Pros or Mac Studio Max/Ultra machines is in the millions, and rendering happens on Octane, Redshift, and Cycles - simply because of the speed. Why say no to those people?
Btw, Houdini is not only about simulating super-large VFX. I've spent lots of time doing motion-graphics-style ads - toasters flying around, letters looking cool - typical C4D bread and butter. You don't need infinite power there. (Also, I'd wager the M3 Max would outperform gen 1 and gen 2 Threadrippers, so they're not exactly "toy" machines now, are they?)
oldteapot7 So by definition it can't be powerful
oldteapot7 but still, it's only an ARM processor
I'm sorry, but calling people out for falling for Apple propaganda while clearly never bothering to take an objective look at what Apple Silicon actually offers is quite a feat.
Wanted to jump in here and say that, as a long-time Mac user who's been using Houdini for 10 years, I would be super excited to see a native renderer (XPU) that took full advantage of the M-series GPU. It's something Redshift has offered for a few years now, and it's fast - because of that, I upgraded to the M1 Mac Studio (as opposed to going PC). I know the Mac is a smaller market for CGI, but when it works, it really does work. The problem always seems to be that the Mac falls behind on the development schedule, and sometimes off the cliff altogether. Also, I have to say it: just as the iPhone is getting flak for monopolizing cell phone use, I often resent "only works on NVIDIA hardware". Going out on a limb here, but I would drop my Redshift subscription in a heartbeat if I could render in XPU at the same speed or better.
For what it's worth, I've noticed that sims can be computed extremely fast on my M3 Max MacBook Pro, often nearly as fast as on my 64-core AMD Threadripper. I wonder what an M4 Max will be able to accomplish when grinding through sims -- now if we could just get Metal-based GPU Karma on the Mac!