New Axis SSS
- Serg
- Member
- 511 posts
- Joined:
- Offline
Hi there,
Attached is an example hip and otl's for my latest SSS solution. Have a look over at the oDforce forum's very large SSS thread for some bg info.
It's basically an implementation of NVIDIA's diffusion and color profiles as used on their human head project. I have no idea whether my shader actually tracks their data correctly, but it looks alright.
The head to the left uses a point cloud variant of the shader (handy if you are rendering full screen views of gory internal organs), the one to the right is per-pixel.
The thing in the middle is there to show how you can handle internal occluding objects, as well as internal glowing things; the front light is excluded from this object to show the effect more clearly. Bear in mind the effect is not correct, in that shadows are cast along the surface normal direction rather than towards the light (i.e. if you move the back light around, the shadow from the Houdini twirl will remain static). I may do an update in the future that does this correctly.
The render took 2m16s on an i7 965 (23s if rendering only the pcloud'ed head). If by chance you find it renders like 30 times slower than this on your machine, post your Houdini version here; I found some render-time oddities that I will have to contact support about.
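For background on the approach: NVIDIA's human head work models subsurface scattering with a diffusion profile expressed as a weighted sum of Gaussians over distance along the surface. Here's a minimal Python sketch of that idea; the variances and weights below are illustrative placeholders, not the published skin fit.

```python
import math

# Illustrative sum-of-Gaussians diffusion profile (placeholder values,
# NOT the fitted skin profile from the NVIDIA work). Six terms, matching
# the six "steps" in the diffusion profiles mentioned later in the thread.
ILLUSTRATIVE_PROFILE = [
    # (variance, weight) -- hypothetical numbers for demonstration only
    (0.0064, 0.10),
    (0.0484, 0.20),
    (0.1870, 0.25),
    (0.5670, 0.25),
    (1.9900, 0.15),
    (7.4100, 0.05),
]

def gaussian(variance, r):
    # 2D Gaussian evaluated at distance r from the illuminated point
    return math.exp(-r * r / (2.0 * variance)) / (2.0 * math.pi * variance)

def diffusion_profile(r, profile=ILLUSTRATIVE_PROFILE):
    # Scattered light response: weighted sum of Gaussians at distance r
    return sum(w * gaussian(v, r) for v, w in profile)

# Contribution falls off with distance from the lit point:
print(diffusion_profile(0.0) > diffusion_profile(1.0) > diffusion_profile(3.0))
```

The wide-variance terms are what give the soft, deep "glow" through thin parts; the narrow terms keep surface detail sharp.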
Things to be aware of:
If you want to use area lights you will need to set their samples to 1 or 2 (the shader will add more)… so if your light has 2 samples and the shader is set to 16, the actual number of shadow samples taken will be 2*16=32… that means you may need duplicate light rigs, via light masks, for dealing with SSS-shaded stuff and regular stuff at the same time. I don't know how, or whether it's even possible, to override shadow sampling from the shader.
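The sample multiplication above is just a product, but it's easy to forget when budgeting render time. A trivial sketch of the arithmetic:

```python
# Effective shadow samples are the product of the light's own sample
# count and the shader's sample setting (as described above).
def effective_shadow_samples(light_samples, shader_samples):
    return light_samples * shader_samples

# A 2-sample area light with the shader set to 16 samples:
print(effective_shadow_samples(2, 16))  # 32
```

So an 8-sample area light with the shader at 16 would already be taking 128 shadow samples, which is why the light itself should stay at 1 or 2.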
The point clouds used in the render below are actually the mesh itself. Although you can use the Scatter SOP if you wish, I personally like using a subdivided mesh because it's inherently stable (as long as topology is unchanging) and often creates just the right point density in the right places.
The Shader will not appear in reflections!! I'll try and figure this out sometime.
There will probably be more updates in the future as I use this more and more. At the mo it's as free of artefacts as I know how to make it.
The VOP OTLs are subnets, so feel free to poke around and point out my mistakes/inefficiencies.
Cheers
Sergio
- eetu
- Member
- 606 posts
- Joined: May 2007
- Offline
- sanostol
- Member
- 577 posts
- Joined: Nov. 2005
- Offline
- peliosis
- Member
- 175 posts
- Joined: July 2005
- Offline
- eetu
- Member
- 606 posts
- Joined: May 2007
- Offline
Hi,
that's quite a vopnet you've knitted together!
Regarding the shader not appearing in reflections: that's because you're checking for recursion with the “Ray Bounce Level” VOP. If the incoming ray originates from a reflection/refraction, the bounce level is already one higher, and the shader aborts prematurely.
Here's a version that tries to fix it with ray labeling. If the incoming ray is tagged with a label originating from the same shader, the shader aborts.
Changelog: added “send:” label exports to the vgather loops and an inline rayimport() test for the Gather-Samples loop.
Bad news is that it's a couple of times slower; perhaps I'm doing something very wrong. I'm moderately sure that it actually kills the recursion, but the rayimport seems to mess up the threading somehow. Maybe.
Anyhoo, not usable, but perhaps of inspiration on how to fix it
There might be a smarter way to deduce the ray source too, of course..
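The difference between the two recursion checks can be sketched conceptually (in Python, not VEX; the "sss" label is a hypothetical tag name standing in for whatever label the shader exports on its gathered rays):

```python
# Conceptual sketch of the two recursion-guard strategies discussed above.

def aborts_bounce_check(ray_bounce_level):
    # Bounce-level approach: abort whenever we're not a primary ray.
    # This wrongly kills the shader inside reflections/refractions,
    # where the bounce level is already greater than zero.
    return ray_bounce_level > 0

def aborts_label_check(ray_label):
    # Ray-labeling approach: abort only if the incoming ray was tagged
    # by this same SSS shader, so reflected views of it still work.
    return ray_label == "sss"

# A camera ray that bounced once off a mirror before hitting the SSS surface:
print(aborts_bounce_check(1))         # True  -> shader missing in reflections
print(aborts_label_check("reflect"))  # False -> shader still evaluated
print(aborts_label_check("sss"))      # True  -> self-recursion still killed
```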
eetu.
- Serg
- Member
- 511 posts
- Joined:
- Offline
Thanks eetu (I was a big fan of your plugins from the LW age, btw).
I will check your fix out. I didn't know another way to prevent the number of rays from exploding without that ray level stuff.
Also there is another quite big issue (usually on very concave surfaces) that occurs when the source of the rays is blocked by an unintended surface.
I figure I need some way to declare a surface transparent to the gather loop if the normal of the first hit surface is the inverse of what it should be, but I've no idea how to actually do it. Can a change to Of be forced from the gather loop only?…
Maybe something along the lines of what you have done with ray labelling?
cheers
S
- rutra
- Member
- 8 posts
- Joined: Feb. 2009
- Offline
Beautiful shader! It doesn't have the ugly artifacts that many other shaders give when it comes to deep scattering on complex geometry.
But unfortunately I'm unable to get Pcloud SSS version to work with VEX Global Illumination light. Is it due to the same problem as with reflections you've mentioned?
The non-pcloud version seems to work OK.
- Serg
- Member
- 511 posts
- Joined:
- Offline
Hi there.
There is a new version of this shader.
change list:
- implemented eetu's fix for stopping ray recursion without stopping the shader from appearing in reflections. I still limit it from appearing in a second bounce, since here we always set up expensive things this way. You can edit the asset and change it for your needs.
- separated outputs for control in post
- much better behavior when dealing with very concave geometry (tests both sides of the surface for obstructions and picks the best option, i.e. if both sides obstruct the given scatter radius it will pick the side with the most room and prevent the cone from intersecting the obstruction). I plan to implement a feature that changes the cone angle to compensate for the cone having to be closer to the surface (closer to the surface means a smaller scatter radius) due to the obstruction.
- lots of general cleanup
- Epidermis Thickness control. Pushes the “blood” away or towards the surface.
- included are two shaders designed to work around the single-sided limitation of the Oren-Nayar shading model, and a double-sided occlusion shader.
- back scattering per light… I plan to make this proper some time; it's more realistic than with the option off, but the render time will be affected if you have more than one light.
You can always turn back scattering off completely (set back intensity to 0) and add Houdini's single SSS to the mix… unfortunately single SSS doesn't support shadows though (SESI, can you fix this please?).
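The two-sided obstruction test from the change list can be sketched like this; the distance inputs and the side-picking rule here are my assumptions (e.g. obtained from ray probes along +N and -N), not the shader's actual code:

```python
# Hypothetical sketch of the two-sided obstruction test: probe both
# sides of the surface for the nearest blocker, place the sample cone
# on the side with the most room, and shrink the scatter radius so the
# cone cannot intersect the obstruction.

def pick_scatter_side(scatter_radius, dist_front, dist_back):
    # If a side is clear at the full radius, prefer the front side.
    if dist_front >= scatter_radius:
        return "front", scatter_radius
    if dist_back >= scatter_radius:
        return "back", scatter_radius
    # Both sides obstructed: take whichever leaves the most room,
    # accepting a smaller effective scatter radius.
    if dist_front >= dist_back:
        return "front", dist_front
    return "back", dist_back

print(pick_scatter_side(1.0, 0.4, 0.7))  # ('back', 0.7)
```

The shrunken radius is exactly the case the planned cone-angle compensation would address: a smaller effective radius means less blur, so widening the cone could restore the intended scatter distance.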
Usage:
-This is generally intended to be used for relatively small scatter radii; the bigger the radius, the more likely you are to encounter a problem, and more samples are needed.
-The VOP node has a few non-obvious inputs:
Alarm = If you have other shaders that use the Gather VOP and use the same method (which feels wrong, btw) to kill recursions, you will need to plug their alarm in here also. I've no idea why apparently completely independent gather loops would interfere with each other, but it seems they do step on each other's toes unless you explicitly tell all of them to ignore each other, hence the input.
E = This is where you plug any part of your shader that you intend to be emissive.
C = Plug your surface color textures here; they will be used if “Scatter Surface Color” is On.
D = Plug the “Axis Prep Oren for SSS” node in here, or a “Lambert” shading model that has “Ensure Faces Point Forward” switched Off
AO = If you want ambient occlusion to be scattered, plug the “Axis Prep OCC for SSS” here. At Axis we normally use occlusion multiplied by a simple lookup into a pre-blurred environment HDRI map. If you don't want this, you can still use a standard Houdini environment light for ambient light, which will be picked up by the lighting model anyway and therefore scattered (it won't come out in the ambient output, so you'll need a separate Take for this like you would normally).
Sampling:
-Area Lights (including Environment Lights) need their samples to be quite low (usually 4 or so), because the shader samples light multiple times anyway… so you may need light-rig duplicates and light masks to render non-SSS shaders at the same time.
Likewise for Occlusion shader (set it to 2 samples).
-The Oversampling control is provided as a means to increase the samples for the most blurred layers. Samples at 12 and Oversampling at 3 means a minimum of 12 samples for the shallowest layer and 36 samples for the deepest (most blurred) layer.
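Only the two endpoints of the sample ramp are stated here (base samples for the shallowest layer, base times Oversampling for the deepest), so this sketch assumes a linear ramp across six layers; the shader's actual interpolation may differ:

```python
# Hypothetical per-layer sample counts: base_samples for the shallowest
# layer, base_samples * oversampling for the deepest, linear in between.
# (The linear ramp and layer count are assumptions, not the shader's code.)
def layer_samples(base_samples, oversampling, num_layers=6):
    return [round(base_samples * (1 + (oversampling - 1) * i / (num_layers - 1)))
            for i in range(num_layers)]

# Samples=12, Oversampling=3, as in the example above:
print(layer_samples(12, 3))  # shallowest layer gets 12, deepest gets 36
```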
- Experiment with low Samples vs high Oversampling and vice-versa. Also try rendering in raytrace mode; it may reach a noise-free result quicker than the micropolygon renderer. The default 12*3 is tuned to a noise-free result in raytrace mode with 6*6 pixel samples. The minimum samples is 6 because that's how many steps are in the diffusion profiles.
The result is absolutely meant to be viewed at 2.2 gamma!!
cheers
Sergio
- Serg
- Member
- 511 posts
- Joined:
- Offline
some test renders….
using eetu's shader object that can be found here:
http://forums.odforce.net/index.php?showtopic=7616&st=0 [forums.odforce.net]
As you can see there is a bit of an issue on very sharp corners (darker edge), I know why this is happening, just haven't yet clocked how to fix it. Probably not going to be an issue with organic models though.
S
- Serg
- Member
- 511 posts
- Joined:
- Offline
- Serg
- Member
- 511 posts
- Joined:
- Offline
- protozoan
- Member
- 1717 posts
- Joined: March 2009
- Offline
- Serg
- Member
- 511 posts
- Joined:
- Offline
That last one was 11m33s, on an i7 965.
shader samples were 12*2
Raytrace renderer with Pixel Samples 6*6
Environment light with hdri was 4 samples.
The Environment light for the ground object has 10 samples.
Normally I'd render the frames with “some” noise in them, say 5 mins… then do a noise-reduction filter in comp to kill the noise.
I'm still working on the point cloud version, at the moment I'm trying to work out why it renders slower than the per-pixel shader :shock:
S
- protozoan
- Member
- 1717 posts
- Joined: March 2009
- Offline
- Serg
- Member
- 511 posts
- Joined:
- Offline
The shader samples the lighting multiple times itself, so any effect that needs multiple samples, like area/env lights or occlusion, is almost free, since you can set those samples so low to begin with.
In other words… rendering a point light will take the same time as a 1 sample area light, except that 1 sample area light actually turns out noise free because the shader samples it multiple times.
Also, you could optimize things quite a bit by baking the lighting to textures and plug that into the D input instead of a lighting model.
S
- edward
- Member
- 7899 posts
- Joined: July 2005
- Offline
- Serg
- Member
- 511 posts
- Joined:
- Offline
Generally we use the noise reduction filter in Fusion; like most filters of this kind it's darned crap, but it can be coerced into being almost decent when used per pass with all sorts of masking.
I'm waiting to test the AE Neat Video plugin from ABSoft in Fusion… if it's anywhere near as effective as Neat Image it will be amazing!
More testing is needed but I'm HIGHLY impressed with it so far… It makes anything else I've tried look pathetic, including REeNoise.
btw, I remember LW had a noise reduction filter that would apply post-render; it was a simple filter but could at least use the z-buffer to stop bleeding effects. Not great results, but it was and still is a good idea…
Maybe sesi could do an even better one by taking advantage of all that extra info we have about the geometry of the scene, or just license the Neat tech.
S
- theflu
- Member
- 6 posts
- Joined: May 2009
- Offline
- fbonniwell
- Member
- 52 posts
- Joined: March 2009
- Offline
- Serg
- Member
- 511 posts
- Joined:
- Offline
Hi There,
New version attached.
To cut a long story short…
- this version has the fat trimmed out.
- It's faster, sometimes twice as fast.
- it uses the Pathtracer mechanism (see example scene)… Given two identical shaders, one with a gather loop (in code) and the other with the pathtracer, the pathtracer is a fair bit faster, for reasons I can't explain.
The OTL has two HDAs in it: AXIS Pathtrace SSS and AXIS Pathtrace. The second is just a modified Path Trace node; it has the angle parameter.
any questions, shout
cheers
Sergio