I'm especially interested in the "load memory" line.
I'm aware that MPlay keeps the render in system RAM rather than on my SSD until the render finishes (up to my RAM cap), but I'm still confused by the numbers:
Same scene, same material (I turned off subdivision geometry at SOP import and left the subdiv modifier off in Blender, so that dicing wouldn't skew the comparison between the two engines):
- Blender Cycles showed me 670 MB of VRAM used
- Houdini Karma XPU (rendered to MPlay) showed me 1.9 GB of "load memory"
Karma converted my image data to .rat, which turned out bigger than the .jpg files I originally used (I used only a color map for this test: 32× 1440 UDIM tiles), but even the larger .rat files don't explain the memory difference (the .rat files come to less than 200 MB, versus the even smaller original JPGs).
It can be an important factor when we want to optimize a scene so it fits into VRAM.
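To sanity-check my own numbers, here is a rough back-of-the-envelope estimate of what the color textures alone should occupy once loaded. It's only a sketch and assumes 1440×1440 tiles, RGB only, no mipmaps; the actual tile resolution and the internal format Karma uses are guesses on my part:

```python
# Rough texture-memory estimate (assumptions: 1440x1440 tiles, RGB only,
# no mipmaps; Karma's real in-memory format may differ).
tiles = 32
width = height = 1440
channels = 3

for name, bytes_per_channel in [("8-bit", 1), ("half float", 2), ("full float", 4)]:
    total = tiles * width * height * channels * bytes_per_channel
    print(f"{name}: {total / 1024**2:.0f} MB")

# 8-bit:      ~190 MB
# half float: ~380 MB
# full float: ~759 MB
# Even at full float the color maps stay well under the 1.9 GB "load memory" I'm seeing.
```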
Thanks in advance for the answers, and sorry for the double post; I didn't want to edit the last one.
Solaris and Karma » Question about reading out Karma render stats
- Polybud
Houdini Lounge » What do you guys think about sora?
- Polybud
Since we are in a Houdini forum and since the topic is Sora, I hope for a meaningful upgrade in the form of a new quad retopo tool.
Jokes and irony aside, it is plain surreal that tools like Sora are starting to spread while, on the other side of the "how to", I'm moving vertices one by one to get a proper subdiv mesh - simply surreal.
I do not care much about Sora - I'm an artist, not a "generated guesswork" supervisor - but what I do care about a lot: we definitely need UV unwrap, layout, retopo, skeleton/rig-bone generation and rig transfer tools that exploit the new tech in a proper way.
Do not misunderstand me, I'm not dissatisfied with the current state of Houdini - SideFX offers more tools than any other 3D DCC developer - but next to Sora and apps like that, IF we do not get tools that cut labour-heavy work like retopo, bone creation, rigging and UV'ing not by a factor of ten but to zero, then we are in trouble.
Solaris and Karma » Question about reading out Karma render stats
- Polybud
Good day everyone!
For some this could be a banal topic, but I'm a bit confused about the Karma render stats, especially about VRAM usage.
- Is the "load memory" line fully accountable for the total VRAM usage/requirement (if I want to optimize my scene so it doesn't run out of VRAM)?
- Does the "load memory" line show all used (VRAM) memory, counting the OpenGL cache and viewport memory usage, or does it only cover the render-to-disk, render-to-MPlay or viewport render action? In other words: could "load memory" show a number that fits within my GPU's VRAM while I'm still out of core because of the OpenGL cache, the open viewport, the OS, background apps...?
- How can we clearly read out the actual VRAM usage and optimize our scene so it fits comfortably into our VRAM?
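For that last point, until someone can confirm what the stat actually covers, the only cross-check I know of is to watch the card itself while a render runs. A minimal sketch, assuming an NVIDIA card and that nvidia-smi is on the PATH (the one-second sample rate is arbitrary):

```python
# Poll total VRAM in use on GPU 0 once per second while a render runs.
# This reports everything resident on the card (viewport, OS, other apps),
# not just Karma, which is exactly why it can differ from any per-render stat.
import subprocess, time

while True:
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=memory.used,memory.total",
         "--format=csv,noheader,nounits", "--id=0"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    used, total = (int(v) for v in out.split(","))
    print(f"VRAM: {used} / {total} MiB")
    time.sleep(1.0)
```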
And a side note - not a VRAM, but a system-RAM related question:
If I see numbers like "450 GB total memory used", and MPlay renders to system RAM, does that mean my 128 GB of system RAM was not enough for the sequence? In other words, have I effectively slowed MPlay down to the level of "render to disk", and am I actually rendering to my disk while using MPlay?
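For what it's worth, a quick back-of-the-envelope for how much RAM a sequence held in MPlay could need. The numbers below are only placeholder assumptions (4K frames, beauty plus a few AOVs, float storage); the real resolution, AOV count and bit depth would change the result:

```python
# Rough estimate of the RAM an image sequence occupies while held in MPlay.
# All numbers below are placeholder assumptions, not read from any real scene.
frames = 240           # length of the sequence
width, height = 3840, 2160
planes = 6             # beauty + a handful of AOVs
channels = 4           # RGBA per plane
bytes_per_channel = 4  # 32-bit float

per_frame = width * height * planes * channels * bytes_per_channel
total = frames * per_frame
print(f"per frame: {per_frame / 1024**2:.0f} MB, sequence: {total / 1024**3:.1f} GB")
# With these assumptions: ~759 MB per frame, ~178 GB for the sequence -
# so a few hundred GB of "total memory used" is plausible from the frame
# buffers alone, long before textures or geometry are at fault.
```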
Thanks in advance for your help!
Technical Discussion » Displacement and normal map from 3D geometry
- Polybud
Side note: I took a look at the coin on Sketchfab and it has a bunch of small islands in its UVs; I would suggest re-UV'ing it.
Technical Discussion » Displacement and normal map from 3D geometry
- Polybud
If all of it - decimation + bake - should happen inside Houdini:
- Polyreduce the high-resolution mesh (PolyReduce node).
- You can generate new UVs for the decimated mesh automatically (Auto UV node) or manually (UV Project node, plus UV Flatten to cut the seams by hand).
- (Alternative) You can keep the original UVs by using a UV transfer node.
- Use the Labs Simple Baker or Labs Maps Baker to bake the maps from the high poly onto the low poly (Houdini's own baker can give a better result in some cases, if the UVs are the same on the high- and low-poly versions).
- Now you have your PBR maps, which can be wired up inside your material library.
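If it helps, here is a minimal Python (hou) sketch of that SOP chain. The Labs node type names ("labs::autouv", "labs::maps_baker") and the Maps Baker input order are assumptions from memory - check the exact names and versions in your Labs install before relying on this:

```python
# Sketch of the decimate-and-bake SOP chain, built via Python.
# File path and Labs node type names below are assumptions/placeholders.
import hou

geo = hou.node("/obj").createNode("geo", "bake_prep")

highres = geo.createNode("file", "highres_in")
highres.parm("file").set("$HIP/geo/highres.bgeo.sc")   # hypothetical path

decimate = geo.createNode("polyreduce::2.0", "decimate")
decimate.setFirstInput(highres)
decimate.parm("percentage").set(10)                     # keep ~10% of the polys

autouv = geo.createNode("labs::autouv", "new_uvs")      # assumed type name
autouv.setFirstInput(decimate)

baker = geo.createNode("labs::maps_baker", "bake")      # assumed type name
baker.setFirstInput(autouv)                             # low-poly target
baker.setInput(1, highres)                              # high-poly source (assumed input order)

geo.layoutChildren()
```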
Technical Discussion » Karma XPU - dual 4090 RTX setup - performance issues
- Polybud
GnomeToys
If it works like Radeon ProRender, only the primary card (an RTX 4090 in my case, secondary is a 7900 XTX, and yes, surprisingly that actually works) was allowed to do work on samples above the minimum, since the adaptive samples require on-the-spot decisions based on rays / photon casts from the rest of the scene depending on how it's being done, whereas the non-adaptive samples only require the data resulting from those, which isn't really needed until all the samples are mixed together (so it probably doesn't need to be transferred to the other card at all). It's too slow to be beneficial without something like NVLink, which NVIDIA conveniently killed off on everything below the $8500 L40 in the Ada cards, would be my guess. That, or that's simply as much work as can be offloaded from the CPU. The 4090 only has 256 total fp64 cores out of the 16000-something CUDA cores, and they're spread out amongst all SMs, which makes it impossible to get any kind of cache locality working with them, so anything that needs higher than fp32 precision is probably ending up on the CPU.
Referring to the above quote, could anyone help me out with where and under what circumstances Karma XPU uses higher than fp32 precision? Sorry, maybe it's basic knowledge, but I'm a bit lost on this layer.
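In case it helps frame the question, here is a tiny sketch of the kind of situation where fp32 runs out of precision (large coordinates plus small offsets). This is just a generic illustration of why a renderer might fall back to fp64 somewhere, not a statement about what Karma XPU actually does internally:

```python
# Generic illustration of fp32 vs fp64 precision loss - not Karma-specific.
import numpy as np

big = 1_000_000.0        # e.g. an object far from the origin
small = 0.001            # e.g. a sub-millimetre displacement

f32 = np.float32(big) + np.float32(small)
f64 = np.float64(big) + np.float64(small)

print(f32 - np.float32(big))   # 0.0    -> the offset is lost in fp32
print(f64 - np.float64(big))   # ~0.001 -> preserved in fp64
```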
Thanks in advance if anyone can drop some info!
Technical Discussion » order of referencing objects
- Polybud
Do we have a way to invert the order of newly added reference fields on nodes, for example in the Material Library, Object Merge, etc.?
What I mean: the newly added line always goes to the bottom of the list, so I constantly have to scroll down, instead of just clicking "+" and having the new reference line (geo, shader, group or whatever) appear at the top where I can fill it in immediately.
I tried to find a way to reverse the order, but I could not. Did I miss something, or is this simply how it works at the moment?
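The closest workaround I've found is scripting it: hou.Parm.insertMultiParmInstance() can add a new entry at any index, including the top. A minimal sketch - the node path and the multiparm name ("materials" here, as on a Material Library LOP) are assumptions you'd need to adjust:

```python
# Add a new multiparm entry at the TOP of the list instead of the bottom.
# Node path and parm name are placeholders - check them on your own node.
import hou

matlib = hou.node("/stage/materiallibrary1")   # hypothetical node path
multi = matlib.parm("materials")               # the multiparm counter parm
multi.insertMultiParmInstance(0)               # index 0 = first line in the list
```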
Thanks in advance for the answer!
Solaris and Karma » OCIO transform
- Polybud
I tried to look for an answer; pardon me if I missed it.
It seems the OCIO Transform node does not work at the viewport (Solaris/Karma) level.
I know that I can set up an OCIO filter on the Karma node under the filter tab to convert some paths, but that only influences the final render-to-disk result.
I'm also aware that I can drop an 8-bit diffuse into the MtlX network and call it a day, but sometimes I'd love to keep a project's whole texture library in EXR.
Did I miss a node? (image / OCIO Transform / MtlX Standard Surface / collect, on the diffuse level)
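Until there's a proper viewport-level answer, the workaround I've been considering is converting the problem textures offline with OpenColorIO's Python bindings, so the EXRs already sit in the right space. A minimal sketch, assuming PyOpenColorIO 2.x and that the $OCIO config defines the two space names used here:

```python
# Convert RGB values between two OCIO colour spaces outside of Houdini.
# The colour-space names are assumptions - use whatever your $OCIO config defines.
import PyOpenColorIO as OCIO

config = OCIO.GetCurrentConfig()                      # reads the $OCIO config
proc = config.getProcessor("ACEScg", "lin_rec709")    # src -> dst (assumed names)
cpu = proc.getDefaultCPUProcessor()

pixel = [0.18, 0.18, 0.18]
print(cpu.applyRGB(pixel))                            # transformed RGB values
```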
Thanks in advance if anyone can help me out with some information.
Houdini Lounge » SideFX LABS exterior driven/subscription nodes question pack
- Polybud
Good day everyone!
Do we have any information on a Reality Capture Labs node pack? Since Houdini is semi-part of the Epic ecosystem and Reality Capture is fully integrated into Epic's ecosystem, can we expect some future update regarding an RC node pack inside Labs?
What can we expect regarding GoZ, now that Maxon has stepped in?
What is the situation with the Exoside Quad Remesher? I cannot install/update it using the Labs node (the download process seems to take forever and I cannot interrupt it without crashing Houdini). Downloading it from the official Exoside website is also impossible.
Thanks in advance if anyone from Labs team can update us!