XPU and memory usage, out of core

Member
5 posts
Joined: Oct. 2014
Offline
XPU is fantastic; it's pretty easy to get addicted to the speed. I have noticed lots of out-of-memory issues on my 12 GB 3080 Ti, and this is with scenes showing usage of under 5 GB. So my assumptions are that I am limited to the available memory on the card (most of which is being used by Windows and other apps, unfortunately), and that XPU does not do out of core rendering.

Is out of core a planned feature, or should I start looking at a secondary, dedicated 3090 with oodles of VRAM that will always be free?
Edited by Simonv - Dec. 1, 2021 20:01:31
Member
5 posts
Joined: Oct. 2014
Offline
Did a little more exploring. Make sure you hide your Karma viewport before sending a render to MPlay; this holds true for both render to MPlay and render in the background. The viewport retains its memory footprint as long as it's visible.
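(For anyone who wants to measure this themselves: below is a rough sketch that polls nvidia-smi before and after hiding the viewport. It assumes an NVIDIA GPU with nvidia-smi on the PATH and Python 3; it is not anything built into Houdini.)

```python
# Illustrative sketch: report total GPU memory in use via nvidia-smi.
# Run it before and after hiding the Karma viewport to see how much
# VRAM the viewport itself is holding.
import subprocess

def used_vram_mib():
    out = subprocess.check_output([
        "nvidia-smi",
        "--query-gpu=memory.used",
        "--format=csv,noheader,nounits",
    ])
    # One line per GPU; sum them for a single total in MiB.
    return sum(int(line) for line in out.decode().splitlines() if line.strip())

print("VRAM in use (MiB):", used_vram_mib())
```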
Member
123 posts
Joined: Sept. 2018
Offline
I recently tested XPU with this scene: https://twitter.com/NoahHaehnel/status/1458044379331973122?s=20 [twitter.com]

It took some time for the GPU to start, but when it did it was fast, without any issues on my 8 GB RTX 3070. The memory metadata reached above 80 GB, although I'm not sure if that means 80 GB is truly being processed.
The grains are actual geo, but instanced of course.

Can you show a sample of the scene that caused these issues? It would be interesting to see what amount is too much.
Member
5 posts
Joined: Oct. 2014
Offline
No_ha
I recently tested XPU with this scene: https://twitter.com/NoahHaehnel/status/1458044379331973122?s=20 [twitter.com]

It took some time for the GPU to start, but when it did it was fast, without any issues on my 8 GB RTX 3070. The memory metadata reached above 80 GB, although I'm not sure if that means 80 GB is truly being processed.
The grains are actual geo, but instanced of course.

Can you show a sample of the scene that caused these issues? It would be interesting to see what amount is too much.

It seems like it has issues loading single large objects. The first scene that wouldn't load had only one large VDB in it: the largest cloud VDB that Disney released for testing.

3GB file - [assets.disneyanimation.com]

Also had some issues with a large array of copied curves, but only a few million points. I'll have to do some more granular testing and RFE anything I find.
Staff
530 posts
Joined: May 2019
Offline
Simonv
Is out of core a planned feature...?

Yes, we have it planned, but it is a while away, and it will most likely come in stages over the next year or two, e.g.:
1. demand-loaded textures (meaning unused tiles or high-res mip levels will not get loaded)
2. out-of-core textures (i.e. eviction of old textures when memory hits the maximum)
3. out-of-core geometry

The problem with out-of-core is that it's easy to reach a situation where you're spending more time paging textures/geometry on and off the GPU than you are actually rendering (i.e. thrashing), so it's a tricky feature to get working correctly.
If you need to get work done in the short term, I would suggest buying another GPU with oodles of memory, as you say.
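To make the thrashing point above concrete, here is a minimal sketch (purely illustrative, not Karma's implementation) of out-of-core texture handling as an LRU cache with a fixed GPU-memory budget:

```python
# Illustrative sketch only: out-of-core textures as an LRU cache with a fixed
# GPU-memory budget. The least recently used texture is evicted whenever a
# new upload would exceed the budget.
from collections import OrderedDict

class TextureCache:
    def __init__(self, budget_bytes):
        self.budget = budget_bytes
        self.used = 0
        self.resident = OrderedDict()  # texture id -> size in bytes, oldest first

    def request(self, tex_id, size_bytes, upload, evict):
        if tex_id in self.resident:
            self.resident.move_to_end(tex_id)   # already on the GPU: mark as recently used
            return
        while self.resident and self.used + size_bytes > self.budget:
            old_id, old_size = self.resident.popitem(last=False)  # evict the LRU texture
            evict(old_id)
            self.used -= old_size
        upload(tex_id)                          # page the texture onto the GPU
        self.resident[tex_id] = size_bytes
        self.used += size_bytes
```

If the working set needed for a frame exceeds the budget, almost every request() triggers an evict/upload pair, and the renderer spends its time paging instead of tracing rays, which is exactly the thrashing described above.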

Simonv
The viewport retains its memory footprint as long as it's visible.
No_ha
The memory metadata reached above 80 GB, although I'm not sure if that means 80 GB is truly being processed.

Have you been using the memory stats in the viewport?
They give a fairly accurate reading for GPU memory usage.
Although we currently have viewport buffer memory combined in the "other" category with some other stuff... perhaps we should consider breaking that out to its own explicit category.
https://www.sidefx.com/docs/houdini/solaris/karma_xpu.html#howto [www.sidefx.com] ("Display device and render stats in the viewport" section)

Another thing to keep in mind is that once HoudiniGL has been activated, it will allocate GPU memory and keep hold of it (even when made inactive again).
So to further maximise your GPU memory I would suggest...
- saving your scene with KarmaXPU as the selected renderer
- restarting Houdini
- loading the scene again and rendering, without activating HoudiniGL (a headless variation is sketched below)
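A headless variation of the same idea, as a sketch only (the file and node paths below are hypothetical placeholders, not from any actual scene): launching the render from hython means HoudiniGL is never created in the first place.

```python
# Sketch only: render a saved scene from hython so HoudiniGL never allocates
# any GPU memory. Paths are placeholders.
import hou

hou.hipFile.load("/path/to/scene.hip")        # the scene saved with KarmaXPU selected
rop = hou.node("/stage/usdrender_rop1")       # a USD Render ROP pointing at Karma XPU
rop.render()                                  # renders without opening any viewport
```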

cheers
Member
5 posts
Joined: Oct. 2014
Offline
brians
Simonv
Is out of core a planned feature...?

Yes, we have it planned, but it is a while away, and it will most likely come in stages over the next year or two, e.g.:


cheers

Thanks for the reply, Brian! Appreciate the info dump.
Member
94 posts
Joined:
Offline
brians
Another thing to keep in mind is that once HoudiniGL has been activated, it will allocate GPU memory and keep hold of it (even when made inactive again).
So to further maximise your GPU memory I would suggest...
- saving your scene with KarmaXPU as the selected renderer
- restarting Houdini
- loading the scene again and rendering, without activating HoudiniGL
cheers

This trick works great, but is there a way to deactivate HoudiniGL when I switch to Karma, or to release this memory when switching to Karma (I mean, without saving and restarting Houdini)?
Edited by habernir - July 27, 2022 02:00:05
Staff
5204 posts
Joined: July 2005
Offline
It doesn't do this automatically (as GL can be called upon to do other rendering), but you can clear GL by displaying an empty LOP (like a SOP Create without anything in it), then switching the renderer to Karma XPU, and then displaying the LOP you want to render.
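A rough Python version of those steps, purely as a sketch: the node paths are placeholders, and the hou.SceneViewer.setHydraRenderer call plus the "Karma XPU" label are assumptions from memory that may differ between Houdini versions.

```python
# Illustrative sketch of "display an empty LOP, switch to Karma XPU, then
# display the real LOP". Paths and the renderer label are assumptions.
import hou

stage = hou.node("/stage")
flush = stage.createNode("sopcreate", "gl_flush")     # an empty SOP Create LOP
flush.setDisplayFlag(True)                            # display it so GL lets go of the old scene

viewer = hou.ui.paneTabOfType(hou.paneTabType.SceneViewer)
viewer.setHydraRenderer("Karma XPU")                  # assumed API/label; switches the viewport delegate

hou.node("/stage/YOUR_RENDER_LOP").setDisplayFlag(True)  # placeholder: the LOP you actually want to render
```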
Member
94 posts
Joined:
Offline
malexander
It doesn't do this automatically (as GL can be called upon to do other rendering), but you can clear GL by displaying an empty LOP (like a SOP Create without anything in it), then switching the renderer to Karma XPU, and then displaying the LOP you want to render.
Thanks for this advice.
I already tried it, and the problem is that it doesn't release all of the used VRAM, so in addition I clear it using the Cache Manager, and that solves it.

It would be nice if the SideFX team added this small feature to the render settings node: a simple checkbox that automatically releases the VRAM when switching renderers.

And thanks a lot.
Member
94 posts
Joined:
Offline
habernir
malexander
It doesn't do this automatically (as GL can be called upon to do other rendering), but you can clear GL by displaying an empty LOP (like a SOP Create without anything in it), then switching the renderer to Karma XPU, and then displaying the LOP you want to render.
Thanks for this advice.
I already tried it, and the problem is that it doesn't release all of the used VRAM, so in addition I clear it using the Cache Manager, and that solves it.

It would be nice if the SideFX team added this small feature to the render settings node: a simple checkbox that automatically releases the VRAM when switching renderers.

And thanks a lot.
Just want to mention that this was on Ubuntu, BUT when I try it on Windows, it works perfectly without this trick and without cleaning the cache.

Very strange behavior.
I hope someone can verify this issue.
Member
382 posts
Joined: Nov. 2010
Offline
What about viewport background textures loaded into GPU memory? Having large sequences in the background can easily take up all of your GPU memory. It is a feature I never understood the reason for, since I am sitting on 128 GB of RAM yet have to squeeze the sequence into the GPU memory that I need for rendering.

When will this change? I suppose fixing it would be helpful when working with Karma XPU, as well as in general. It is an absurd feature and surely easier to fix than implementing out of core rendering...
Edited by OneBigTree - July 30, 2022 12:27:12
Member
17 posts
Joined: June 2020
Offline
What is the minimum amount of VRAM needed for Karma XPU?
For RenderMan XPU it's 11 GB minimum, which is a lot.
Member
8041 posts
Joined: Sept. 2011
Offline
biborax
What is the minimum amount of VRAM needed for Karma XPU?
For RenderMan XPU it's 11 GB minimum, which is a lot.

The minimum is scene-dependent, of course. You can render a sphere with no textures using very little VRAM.
Member
17 posts
Joined: June 2020
Offline
jsmack
The minimum is scene-dependent, of course. You can render a sphere with no textures using very little VRAM.

But how much for a FLIP sim, pyro, or fur?
If I put more than 1000 dollars into a powerful graphics card and I can't use it because it only has 8 GB or 12 GB of VRAM, it's ridiculous...
Financially, I wonder about the usefulness, especially with the coming new CPUs, like the new Intel with 24 cores and 32 threads for $600...
Edited by biborax - Aug. 9, 2022 03:34:08
Member
123 posts
Joined: Sept. 2018
Offline
biborax
jsmack
The minimum is scene-dependent, of course. You can render a sphere with no textures using very little VRAM.

But how much for a FLIP sim, pyro, or fur?
If I put more than 1000 dollars into a powerful graphics card and I can't use it because it only has 8 GB or 12 GB of VRAM, it's ridiculous...
Financially, I wonder about the usefulness, especially with the coming new CPUs, like the new Intel with 24 cores and 32 threads for $600...


Not every FLIP sim is created equal. There are too many variables to say you need X amount of VRAM, but sure, the bigger the better. The good thing with XPU is that you can use your GPU until it runs out of VRAM, and your CPU will continue.
It's not either/or; it's both, until it isn't. BUT a GPU will most likely cut render times to a third or less.

So far, CPU-only on XPU seems slightly slower than Karma CPU directly. Hopefully that can be optimized in the future; I actually assumed it would be faster, since Intel Embree is supposed to accelerate ray tracing.
But I believe that in the near future, unless you need VEX shaders (or another feature that is only in CPU), you would render with XPU regardless of the scene, because of the speed gains you get when the GPU renders too.
Edited by No_ha - Aug. 9, 2022 03:47:17
Member
8041 posts
Joined: Sept. 2011
Offline
biborax
jsmack
The minimum is scene-dependent, of course. You can render a sphere with no textures using very little VRAM.

But how much for a FLIP sim, pyro, or fur?
If I put more than 1000 dollars into a powerful graphics card and I can't use it because it only has 8 GB or 12 GB of VRAM, it's ridiculous...
Financially, I wonder about the usefulness, especially with the coming new CPUs, like the new Intel with 24 cores and 32 threads for $600...

I wouldn't count on a GPU with that little memory for anything but toy simulations and renders. Good news, though: GPUs with plenty of memory are selling for half of what they were a few months ago. 3090s are going for $1000 and the 3090 Ti for $1150 now.

In production, I've pretty much never used GPU simulations for anything; the render farm doesn't even have GPUs, and we only have one or two high-end graphics cards at the whole studio.

No_ha
So far, CPU-only on XPU seems slightly slower than Karma CPU directly. Hopefully that can be optimized in the future; I actually assumed it would be faster, since Intel Embree is supposed to accelerate ray tracing.
But I believe that in the near future, unless you need VEX shaders (or another feature that is only in CPU), you would render with XPU regardless of the scene, because of the speed gains you get when the GPU renders too.

It used to be faster with XPU's Embree device than Karma CPU, given the same sample count. The difference between CPU and GPU is such that the GPU device is 20x faster; it depends on the scene, of course.
Edited by jsmack - Aug. 9, 2022 12:32:45
Staff
530 posts
Joined: May 2019
Offline
No_ha
So far, CPU-only on XPU seems slightly slower than Karma CPU directly. Hopefully that can be optimized in the future; I actually assumed it would be faster, since Intel Embree is supposed to accelerate ray tracing.

Both KarmaCPU and KarmaXPU use Embree for ray-tracing on the CPU.

jsmack
It used to be faster with XPU's Embree device than Karma CPU, given the same sample count

KarmaCPU and KarmaXPU (CPU device) are very different renderers, so speed will differ depending on the scene, render settings, etc.
Member
61 posts
Joined: Dec. 2014
Offline
Hi brians, how is the out of core work progressing with Karma XPU? It seems like a lot of the other renderers have versions of it. Even if it only took care of the textures and not the meshes, that would be a tremendous help with the memory load.
T.