tamte
unfortunately people spoiled by a properly working solution in Mantra (and possibly Karma CPU) will not be easily satisfied by tedious workarounds, though they may be grateful if a workaround exists that meets their needs
It is something we're aware of and working towards.
tamte
I thought that since engines like Unreal support Virtual Texture Streaming from disk, this is no longer an issue for the GPU, and theoretically OGL and GPU renderers could all just stream pixels on demand the same way
I'm not sure how Unreal does it, but there are two conceptual ways to do this:
1) unified memory
https://developer.nvidia.com/blog/unified-memory-cuda-beginners/
We really like the idea, but it comes with issues that currently make it unusable.
The main one is that we cannot access the memory on the CPU while the GPU is doing any work, or the driver terminates the application. And that means any work at all...
So it's impossible for us to use in an open system like Houdini, where plugins might also be making use of the GPU :/
It also means we'd need to double all our data in memory, so the CPU device could also run.
There are also quite a few performance issues when it's used with OptiX.
We're talking to NVidia about potential improvements, but I don't think this issue will resolve itself anytime soon.
2) on-demand loading
https://on-demand.gputechconf.com/siggraph/2019/pdf/sig913-texture-paging-in-optix.pdf
This is where we load the textures on demand, which is what NVidia actually recommends (over unified memory).
They even maintain an example library.
This is the one we plan to implement over the next one/two release cycles.
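To make the idea concrete, here's a minimal Python sketch of demand paging (not XPU's actual implementation, and all names here are hypothetical): texture tiles are read from disk only the first time a shading sample touches them, instead of everything being pre-loaded up front.

```python
# Hypothetical sketch of on-demand texture paging: a tile is loaded into
# the cache only on first access; later samples of the same tile are hits.
class DemandTextureCache:
    def __init__(self, loader):
        self.loader = loader   # callback that reads one tile from disk
        self.cache = {}        # (texture, tile) -> pixel data
        self.loads = 0         # count of disk reads, for illustration

    def sample(self, texture, tile):
        key = (texture, tile)
        if key not in self.cache:          # first touch: page the tile in
            self.cache[key] = self.loader(texture, tile)
            self.loads += 1
        return self.cache[key]

# Stand-in for an actual disk read.
def fake_loader(texture, tile):
    return f"pixels of {texture}:{tile}"

cache = DemandTextureCache(fake_loader)
cache.sample("wood.rat", (0, 0))
cache.sample("wood.rat", (0, 0))   # cache hit, no second disk read
print(cache.loads)                 # -> 1
```

The real scheme (per the OptiX talk linked above) is more involved — the GPU records missing tiles during a render pass and the host fills them in — but the cache-on-first-use principle is the same.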
tamte
how does it know which UDIMs it will need if it can't pre-evaluate the st primvar?
I assume it will just load all that match the pattern even if they may not be necessary?
Yes, XPU currently pre-loads all the texture data.
So in this case it will pre-load ALL UDIMs, whether they're needed or not.
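That pattern-matching behaviour can be sketched in a few lines of Python (a hypothetical illustration, not XPU's code): every file on disk that matches the `<UDIM>` token gets gathered up front, regardless of whether the geometry's uvs ever reference that tile.

```python
import re

# Hypothetical sketch of pattern-based UDIM pre-loading: expand the
# <UDIM> token (a 4-digit tile number starting with 1) against the
# files that exist, and load every match.
def expand_udims(pattern, files):
    rx = re.compile(re.escape(pattern).replace("<UDIM>", r"(1\d{3})"))
    return sorted(f for f in files if rx.fullmatch(f))

on_disk = ["skin.1001.rat", "skin.1002.rat", "skin.1011.rat", "notes.txt"]
print(expand_udims("skin.<UDIM>.rat", on_disk))
# -> ['skin.1001.rat', 'skin.1002.rat', 'skin.1011.rat']
```

Even if the shot's uvs only ever land in tile 1001, all three matching tiles would be pre-loaded — which is exactly the cost the on-demand scheme above is meant to avoid.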
Brian