Writing correct expression result into USD

swfly:
Recently I have been creating a renderer and trying to plug it into Houdini's Solaris workspace via Hydra. It basically works; however, I am having a lot of problems integrating materials.
Currently I am reading all the static properties of the built-in principled material using Hydra's HdMaterialNetwork interface. That works because those properties are properly exported to USD. However, when it comes to node-based graphs I get totally lost, as I couldn't find a way to fetch the connection information. I saw that some renderers' Hydra plugins ship with a corresponding NDR plugin. Is that related to fetching node graph data?
There is actually another problem. When material properties are static (without Houdini expressions like $F), I get correct results. However, when I use $F, or an even more complex expression (like ${F4%4}), the result becomes wrong; it feels like the expression is not evaluated properly. I saw that Karma gets the correct result (for example, per-frame textures), so I believe there must be an alternative way to get the properties, but even after reading the source code and trying many times I still don't have a clue.
Could someone give a hint on these problems? Thank you!
mtucker (Staff):
USD is designed to allow a "Material" to encapsulate many renderer-specific "shaders". A shader consists of one or more connected UsdShade primitives. All UsdShade primitives have the same USD primitive type (UsdShadeShader), but each primitive also has a "source" which indicates the true nature of that specific primitive. UsdShadeShader "sources" are discovered at runtime through ndr and sdr plugins. The USD spec includes a definition for USD Preview Surface shaders, and renderers are generally expected to understand and be able to interpret shader graphs authored using UsdShadeShader prims of these "types". But most renderers also provide ndr and sdr plugins to describe their own UsdShadeShader sources. I don't know if your renderer has its own shader language or if you just want to use the UsdPreviewSurface specification. This decision will determine whether you need to write ndr/sdr plugins to describe your renderer's UsdShadeShader sources to USD or not.
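If it helps, here is a minimal sketch of that structure using the pxr UsdShade C++ API (the prim paths and the diffuseColor value are just placeholders for illustration):

    #include <pxr/usd/usd/stage.h>
    #include <pxr/usd/sdf/path.h>
    #include <pxr/usd/sdf/valueTypeName.h>
    #include <pxr/usd/usdShade/material.h>
    #include <pxr/usd/usdShade/shader.h>
    #include <pxr/base/gf/vec3f.h>
    #include <pxr/base/vt/value.h>

    PXR_NAMESPACE_USING_DIRECTIVE

    int main()
    {
        UsdStageRefPtr stage = UsdStage::CreateInMemory();

        // A Material prim that encapsulates the shader network.
        UsdShadeMaterial material =
            UsdShadeMaterial::Define(stage, SdfPath("/materials/mat"));

        // One UsdShadeShader prim; its info:id token is the "source" that
        // ndr/sdr resolve to a node definition at runtime.
        UsdShadeShader preview =
            UsdShadeShader::Define(stage, SdfPath("/materials/mat/preview"));
        preview.CreateIdAttr(VtValue(TfToken("UsdPreviewSurface")));
        preview.CreateInput(TfToken("diffuseColor"), SdfValueTypeNames->Color3f)
               .Set(GfVec3f(0.8f, 0.2f, 0.2f));

        // Wire the shader into the material's surface terminal.
        material.CreateSurfaceOutput().ConnectToSource(
            preview.ConnectableAPI(), TfToken("surface"));

        return 0;
    }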
To author materials in Houdini, most renderers have a one-to-one correspondence between their ndr/sdr-discovered shader sources and VOP node types. Houdini ships with VOP nodes that correspond to the UsdPreviewSurface shader nodes. The Material Library takes these VOP nodes and translates them into UsdShadeShader prims. Other renderers (RenderMan, Redshift, Arnold) also provide their own VOP node types which get translated into UsdShadeShader prims to describe a material as a network of connected primitives.
Karma is the odd one out here, in that it takes your whole VOP network, uses it to generate VEX code, then embeds that VEX code into a single UsdShadeShader primitive. This happens as part of a custom translation process. If you are using a principled shader, the translation process also authors a UsdPreviewSurface shader that attempts to approximately match the VEX/karma shader. So when you put down a principled shader and say that your renderer is looking at the resulting HdMaterialNetwork (which is the hydra translation of the UsdShadeShader network), it's not clear to me if your renderer is looking at the auto-generated UsdPreviewSurface network or the actual karma shader network (with just one shader node). I'm assuming you're looking at the USD Preview Surface network? Again, if this is how you want to describe your materials going forward, you shouldn't need an ndr/sdr plugin, or to write your own VOP nodes.
As for the animated parameters not working on shaders, make sure you have turned on the toggle in the Material Library LOP that allows animated parameters in the VOP network. Generally doing this is a bad idea for performance reasons, and instead you should author your materials once at frame one, expose any parameters you want to vary with time, then use an Edit Properties node after the material library to animate the promoted parameters. It will run much faster than re-generating the material on each frame.
swfly:
Hi mtucker, thank you for the explanation, it makes things much clearer to me.
The renderer I am working on plans to support two kinds of materials: the principled shader, and one written in the renderer's own shading language. Currently, for the principled shader I am getting the network map using sceneDelegate->GetMaterialResource(GetId()) and fetching the surface shader with the HdMaterialTerminalTokens->surface token. I believe this way I can only get the actual karma shader, since its "identifier" will be its complete VEX code (or simply "opdef:/Vop/principledshader::2.0?SurfaceVexCode" if no node is connected), and I don't know how to get the UsdPreviewSurface network. I would really like to be able to, though.
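Roughly, that fetch looks like the sketch below (MyMaterial stands in for my HdMaterial subclass; the class declaration and error handling are trimmed):

    #include <pxr/imaging/hd/material.h>
    #include <pxr/imaging/hd/sceneDelegate.h>

    PXR_NAMESPACE_USING_DIRECTIVE

    // MyMaterial is a stand-in for the plugin's HdMaterial subclass.
    void MyMaterial::Sync(HdSceneDelegate* sceneDelegate,
                          HdRenderParam* /*renderParam*/,
                          HdDirtyBits* dirtyBits)
    {
        if (*dirtyBits & HdMaterial::DirtyResource) {
            // The flattened material description for this prim.
            const VtValue resource = sceneDelegate->GetMaterialResource(GetId());
            if (resource.IsHolding<HdMaterialNetworkMap>()) {
                const HdMaterialNetworkMap& networkMap =
                    resource.UncheckedGet<HdMaterialNetworkMap>();
                // One network per terminal (surface, displacement, ...).
                const auto it = networkMap.map.find(HdMaterialTerminalTokens->surface);
                if (it != networkMap.map.end()) {
                    const HdMaterialNetwork& surfaceNet = it->second;
                    // surfaceNet.nodes: shader nodes (identifier + parameters).
                    // surfaceNet.relationships: connections between the nodes.
                }
            }
        }
        *dirtyBits = HdMaterial::Clean;
    }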
For the custom shading-language shader, the way I am creating it is probably odd too: it uses all the VEX features of Houdini (inline code, math nodes, etc.) to generate VEX code, which is authored into the material's identifier slot. Then the Hydra plugin converts this VEX code into the renderer's own language and uses it for rendering. I did some tricks to make sure variables (textures, for example) are passed correctly this way, but it is probably a very odd approach. Still, it lets me avoid creating an NDR plugin (I still don't know how to create an NDR plugin, how to plug it into Houdini, or how it works), which is quite convenient IMO, thanks to Houdini's VEX system.
I think my main question now is how to get the real network (with relationships, so that I can construct the whole graph in my Hydra plugin) instead of the one-node karma shader. My material code is quite similar to RPR's (https://github.com/GPUOpen-LibrariesAndSDKs/RadeonProRenderUSD/blob/master/pxr/imaging/plugin/hdRpr/material.cpp). I tried to read data from other renderers' graphs (Redshift and RenderMan in particular), but no useful data could be fetched. That is probably due to a mistake in my code, though. Could you give some guidance on this? Thanks.
Also, thanks for the hint about animated parameters. The main purpose is to display flip-book textures on different surfaces (sometimes with loops), so I need to be able to resolve paths containing $F and arithmetic operators. I tried toggling the parameter animation and can see it change, but in Hydra I cannot access it properly for now. I need to take some more time to understand how a Hydra plugin should parse the materials.
mtucker (Staff):
swfly wrote: "I think my main question now is how to get the real network (with relationships, so that I can construct the whole graph in my Hydra plugin) instead of the one-node karma shader."
There is no graph to get. There is just the one karma UsdShadeShader prim.
Whether you get the UsdPreviewSurface network or the karma network in your render delegate depends, I think, on the value you return from HdRenderDelegate::GetMaterialNetworkSelector. To be getting the karma shader, I presume you must be returning "VEX" or "Karma" from that method (I can't recall what value we use for our material networks).
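For illustration only (MyRenderDelegate is a placeholder name; the exact token you return is specific to your pipeline), the override looks something like:

    // Sketch: the material adapter keys the network it hands you off this token.
    // Per this thread, returning the karma/VEX token selects the single-node
    // VEX shader network rather than the UsdPreviewSurface one.
    TfToken MyRenderDelegate::GetMaterialNetworkSelector() const
    {
        static const TfToken theSelector("karma", TfToken::Immortal);
        return theSelector;
    }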
swfly:
OOOHHH!!!!! That explains almost everything, including how ndr/sdr is used along with the Hydra plugin.
Thank you very much, now I can do more with it!
swfly:
Hi, sorry for bringing this up again. Recently we have been migrating our Hydra plugin from Houdini 18.0 to 18.5. It was simple, but there is one issue: our HdMaterial implementation has stopped receiving karma networks (essentially the VEX code node). Previously in 18.0, since our plugin returns "karma" from GetMaterialNetworkSelector(), the surface and displacement networks in the HdMaterialNetworkMap were correctly authored. In 18.5, however, there is nothing in this network map object. Is there anything I am missing here? Thank you.
mtucker (Staff):
In Karma, the most obvious difference I see between 18.0 and 18.5 is the implementation of GetShaderSourceTypes. In 18.5, we do:

    TfTokenVector
    BRAY_HdDelegate::GetShaderSourceTypes() const
    {
        static TfTokenVector theSourceTypes({
            TfToken("VEX", TfToken::Immortal)
        });
        return theSourceTypes;
    }

dasac:
Great! I've not seen that mode before. Good to me.