Wireframe Shader inside Karma MaterialX Subnet?

Member · 11 posts · Joined: June 2020
Hello all! I'm fairly new to Houdini and am learning to create a LookDev HDA tool. I would like the user to have the option of rendering with wireframe only, hidden-line invisible, shaded only, etc. The options are pretty much all there in the list of scene display options, but I want them to affect only the asset geometry, not the shader balls/colour chart. I'm not really keen on using PolyWire because I imagine it would be very heavy on objects with dense topology. Assuming I would have to build shaders for this, could anyone please offer some guidance on doing it inside a Karma MaterialX subnet? I'm also open to some coding if that's possible and more efficient. Thank you very much in advance.
Edited by chienhuey - Nov. 10, 2022 22:43:00

Attachments:
lookdev_question.jpg (808.8 KB)

Member · 8041 posts · Joined: Sept. 2011
There's no way to do a wireframe shader properly in pure MaterialX. The classic wireframe Mantra shader uses derivatives to compute line widths, and it doesn't always look very good, especially at glancing angles. It's possible to make a simple wireframe shader with MaterialX, but without shading derivatives the widths will be percentages of the face size and will vary with viewing angle.
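A minimal Python sketch of the limitation described above (function name and width value are mine, purely for illustration): thresholding the smallest barycentric coordinate yields wires whose width is a fraction of each face, not a constant screen-space size.

```python
def naive_wire(u, v, width=0.05):
    """Wireframe mask from barycentric coords (u, v, w = 1 - u - v).

    A sample is 'wire' when any barycentric coordinate is within
    `width` of zero. Because barycentrics are normalized per face,
    `width` is a fraction of the face: big triangles get thick
    lines and small ones get thin lines."""
    w = 1.0 - u - v
    return min(u, v, w) < width

# Near an edge (v ~ 0): wire. Near the centroid: not wire.
print(naive_wire(0.5, 0.01))   # True
print(naive_wire(1/3, 1/3))    # False
```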

The best way by far is to use precomputed distances per the NVIDIA paper, but as there are no geometry shaders in Karma, it must be done in SOPs. Using SOPs to calculate vertex attributes that store screen-space distance information allows a simple shader to make very high quality wireframes. This paper can help get you started:
https://developer.download.nvidia.com/whitepapers/2007/SDK10/SolidWireframe.pdf [developer.download.nvidia.com]
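The paper's precomputation boils down to plain triangle geometry, sketched here in Python (function names are mine; the paper actually works in screen space per frame, which in Karma would mean recomputing in SOPs per camera): each vertex stores its distance to the opposite edge, i.e. a triangle altitude; the shader interpolates the three values and compares the minimum against a pixel width.

```python
import math

def altitudes(a, b, c):
    """Per-vertex distance to the opposite edge of triangle (a, b, c),
    i.e. the triangle's three altitudes. These are the per-vertex
    attributes the 'solid wireframe' technique interpolates: the
    interpolated minimum is the distance to the nearest edge.

    twice_area = |AB x AC|; altitude from A = twice_area / |BC|, etc."""
    def sub(p, q): return (p[0]-q[0], p[1]-q[1], p[2]-q[2])
    def cross(p, q):
        return (p[1]*q[2]-p[2]*q[1], p[2]*q[0]-p[0]*q[2], p[0]*q[1]-p[1]*q[0])
    def norm(p): return math.sqrt(p[0]**2 + p[1]**2 + p[2]**2)
    twice_area = norm(cross(sub(b, a), sub(c, a)))
    return (twice_area / norm(sub(c, b)),   # altitude from a
            twice_area / norm(sub(c, a)),   # altitude from b
            twice_area / norm(sub(b, a)))   # altitude from c

# Right triangle with legs 3 and 4: altitude from the right angle
# to the hypotenuse is 3 * 4 / 5 = 2.4.
h = altitudes((0, 0, 0), (3, 0, 0), (0, 4, 0))
print(round(h[0], 2))  # 2.4
```

The same math drops straight into a vertex wrangle, with the screen-space projection added on top.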

Another option is to use the OpenGL ROP pointed at the LOP network, pruned down to just the asset.
Edited by jsmack - Nov. 10, 2022 23:39:56
Member · 8041 posts · Joined: Sept. 2011
I implemented the NVIDIA paper in an example hip file, as well as the other way, which doesn't look right without derivatives.

Attachments:
simple_wireframe_shader_mtlx.hip (724.8 KB)
simple_wireframe_shader_mtlx.karmarendersettings.jpg (633.0 KB)

Member · 6 posts · Joined: Jan. 2022
jsmack
Another option is to use the Opengl ROP pointed to the lop network pruned down to just the asset.

Hi,
This topic is very timely.
Thank you, but how do you do that, please? I have tried but without success; I only get a null image.


I took a look at your other solution offered to chienhuey, but I don't think I will use it, because I don't understand much of it and I avoid copy/pasting in everything I do. I want to understand everything I do.

Let me ask your advice on another subject please.
How can I import the low-poly version of each of my objects into the stage, before the final subdivide is applied, while still keeping my original high-poly object placement in the scene?

Low poly version of one of my objects

What I want


I attached my hip file if you'd care to look at it please.

Thank you

Attachments:
wireframeRender.hipnc (2.6 MB)

Member · 8041 posts · Joined: Sept. 2011
JeremyE
Let me ask your advice on another subject please.
How can I import the version of each of my objects in stage before applying the final subdivide and still keep my original highpoly object placement in my scene please?

You could use variants to create an LOD variant, but if all you're doing is subdividing, then just use a subdivision surface. On the SOP Import, enable subdivision; or with Scene Import, enable Mantra subdivision on the object being imported. Alternatively, if the mesh has already been imported, right-click the mesh prim in the scene graph and choose to edit the mesh prim. This creates a node that allows toggling the subdivision mode of the mesh.

JeremyE
Hi,
This topic is very timely.
Thank you but how do you do that please? I have tried but without success. I only get a null image.

I haven't looked at the scene file, but perhaps the ROP is not configured to use the stage.
Member · 6 posts · Joined: Jan. 2022
Indeed, the only change is a subdivide node.
I'm trying to find a way to do what you said and I'll come back to you. Thank you.
Member · 6 posts · Joined: Jan. 2022
jsmack
I haven't looked at the scene file, but perhaps the ROP is not configured to use the stage.

That was it.

Thanks

Now let's figure out how to do the subdividing in the stage, as you suggested.
Member · 574 posts · Joined: Aug. 2014
That's perfect timing, @jsmack! Thanks a lot!

I've been trying to figure out how to render a nice wireframe in Karma for some time now. My last attempt was with mtlxsplittb and mtlxsplitlr, but the thickness of the wires generated by these operators turned out to depend on polygon dimensions, so the result was, well... far from acceptable, to say the least. Additionally, rendering it with XPU introduced some artifacts in the form of overly thickened parts of the wireframe.
Edited by ajz3d - Nov. 11, 2022 16:32:58

Attachments:
mtlx_tb_lr.jpg (121.2 KB)

Member · 6 posts · Joined: Jan. 2022
jsmack
You could use variants to create an LOD variant, but if all you're doing is subdividing, then just use a subdivision surface.

I found a way. Yay!
I imported my low-poly objects, then within a SOP Modify node I applied subdivisions after unpacking the USD to polygons.

And now the wireframe rendering doesn't work anymore. I don't know what I changed to make it go wrong. :S
Some help please?

Edit:
Hi,
After a good night's sleep and 15 minutes of troubleshooting, I finally found the culprit.

You should not include /stage/ in the camera path.
Edited by JeremyE - Nov. 12, 2022 06:04:03

Attachments:
LOPSubdivide.hipnc (2.7 MB)

Member · 36 posts · Joined: Feb. 2016
How about using barycentric coordinates (MtlX term ray:hituv), dividing by the distance to the camera (MtlX term ray:hitdist), and then comparing against a thickness value?
Here's a setup where I'm not 100% sure on the math, as I had to use some divide nodes where I don't think they're really needed. Nevertheless, it's a shader-only solution.
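A toy numeric version of this idea (function name and the tuning constant are hypothetical, not taken from the hip file): shrink the barycentric threshold with hit distance so wires don't grow as faces recede from the camera.

```python
def wire_from_hit(hituv, hitdist, screen_width=0.5):
    """Sketch of the ray:hituv / ray:hitdist idea: compare the smallest
    barycentric coordinate against a threshold that shrinks with
    distance to the camera, so faraway faces don't get thicker wires.
    `screen_width` is an arbitrary tuning constant, not a pixel size."""
    u, v = hituv
    w = 1.0 - u - v
    return min(u, v, w) < screen_width / max(hitdist, 1e-6)

# Same point relative to its face, near vs. far from the camera:
print(wire_from_hit((0.45, 0.05), hitdist=2.0))    # True  (threshold 0.25)
print(wire_from_hit((0.45, 0.05), hitdist=20.0))   # False (threshold 0.025)
```

Note this still inherits the per-face normalization of the barycentrics, so face size still influences the wire width.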
Edited by Yader - Nov. 14, 2022 04:26:57

Attachments:
mplay_2E9HmauS75.jpg (381.0 KB)
houdini_JUNFfilf7z.png (304.5 KB)
Wireframe_Shader_MtlX_01.hiplc (955.9 KB)

https://behance.net/derya [behance.net]
Member · 8041 posts · Joined: Sept. 2011
Yader
How about using barycentric coordinates (MtlX term ray:hituv), dividing by the distance to the camera (MtlX term ray:hitdist), and then comparing against a thickness value?
Here's a setup where I'm not 100% sure on the math, as I had to use some divide nodes where I don't think they're really needed. Nevertheless, it's a shader-only solution.

That's a cool solution, but the barycentric coordinates are normalized, so faces of different sizes will have different widths.
Member · 66 posts · Joined: May 2019
There's a much, much simpler way 🙂 Basically, it's the same process as creating hand-painted textures in Photoshop.

1. Save the UVs as an image (two options, same result):
- Labs Export UV Wireframe node [www.sidefx.com] -> choose resolution and path -> render. Has an additional option to define the wireframe width AND can save UDIMs.
- Right-click on whatever UV node you have (UV Flatten, AutoUV, UV Layout, etc., or any node after the UVs are created) -> Save -> Texture UV to Image [www.sidefx.com] -> choose resolution and save path.

2. Open in Photoshop or whatever you prefer:
- If using the Texture UV to Image option -> add a white (or black; sometimes the wireframe renders white instead of black) layer underneath the UV image.
- If using Labs Export UV -> add a white layer underneath the UV image.
- Save the image.

3. Create an MTLX Standard Surface material:
- create an MTLX image node.
- load the wireframe image.
- plug it into the MTLX material's base color.

4. Done.

Works with Karma CPU & XPU. Or actually with any renderer.
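For those who prefer scripting the bake, here's a rough numpy sketch of the same idea (a crude, point-sampled stand-in for the Labs node, with no antialiasing or UDIM handling; all names are mine):

```python
import numpy as np

def bake_uv_wireframe(uv_edges, res=256, samples_per_edge=512):
    """Rasterize UV-space edges into a white image with black wires.
    `uv_edges` is a list of ((u0, v0), (u1, v1)) pairs in [0, 1].
    Each edge is point-sampled along its length and splatted into
    the nearest pixel."""
    img = np.ones((res, res), dtype=np.float32)  # white background
    t = np.linspace(0.0, 1.0, samples_per_edge)
    for (u0, v0), (u1, v1) in uv_edges:
        us = u0 + (u1 - u0) * t
        vs = v0 + (v1 - v0) * t
        xs = np.clip((us * (res - 1)).astype(int), 0, res - 1)
        ys = np.clip(((1.0 - vs) * (res - 1)).astype(int), 0, res - 1)  # v is up
        img[ys, xs] = 0.0  # black wire
    return img

# One quad's UV boundary:
quad = [((0.1, 0.1), (0.9, 0.1)), ((0.9, 0.1), (0.9, 0.9)),
        ((0.9, 0.9), (0.1, 0.9)), ((0.1, 0.9), (0.1, 0.1))]
img = bake_uv_wireframe(quad)
print(img.min(), img.max())  # 0.0 1.0
```

The resulting array could be saved with any image library and loaded into the MTLX image node exactly as in the steps above.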


Edited by AnimGraphLab - March 29, 2024 04:28:11

Attachments:
wireframe-karma.png (4.7 MB)

Generalist. Transforming still images to 3D animation 🔮
Socials: https://linktr.ee/AnimGraphLab [linktr.ee]
Member · 1 post · Joined: April 2021
AnimGraphLab
There's a much much simpler way 🙂 Basically, it's the same process as creating handpainted textures in Photoshop. [...]


There's an even simpler way.
1. Get ray:hituv.
2. Plug it into the mtlx grid node's texcoord input and set the thickness.
3. Remap with whatever colors you wish.
4. Enjoy.
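The same recipe in toy Python form (names are hypothetical, for illustration only): a point is "on the wire" when either texture coordinate is within a thickness of a cell boundary.

```python
def grid_wire(u, v, tiles=1, thickness=0.02):
    """Grid-style wireframe from a 2D coordinate: 'wire' when either
    component is within `thickness` of a cell edge, which is what
    feeding a coordinate into a grid pattern's texcoord amounts to.
    Caveat: with triangular ray:hituv barycentrics, the third edge
    (u + v = 1) would need an extra test."""
    def near_edge(x):
        f = (x * tiles) % 1.0
        return min(f, 1.0 - f) < thickness
    return near_edge(u) or near_edge(v)

print(grid_wire(0.01, 0.5))   # True  (near the u = 0 edge)
print(grid_wire(0.5, 0.5))    # False (cell centre)
```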

Attachments:
image_2024-04-21_13-39-19.png (1.9 MB)

Member · 9 posts · Joined: Jan. 2023
Hi

I tested most of the propositions, but as explained by jsmack, most of the solutions don't work, since faces of different sizes get different wire widths/thicknesses.

I'm still looking for a solution that behaves like aiWireframe in Arnold for Maya.

If someone has another idea for my needs, it would be really appreciated.
Edited by Tukifri - July 10, 2024 09:55:32

Attachments:
wireframe.jpg (162.7 KB)

Member · 9 posts · Joined: Jan. 2023
Up