ragupasta
About Me
Connect
LOCATION
Not Specified
Website
Houdini Engine
Availability
Not Specified
Recent Forum Posts
alembic dop — Sept. 5, 2017 13:50
When you set Deadpool to an RBD, did you turn on “Deforming Geometry” in the RBD node in the dopnet?
Back On The Shading Bandwagon — Sept. 4, 2017 16:53
Hi chaps, so I've been looking at shading a little more closely lately, and I really want to push the Principled Shader and see what it can achieve with very limited modification from the vanilla shader.
So I am starting with a snow scene.
I found a very nice image of an ice Utah Teapot frozen in snow on Google, and it seemed like a nice place to start. So here is my rendition of a similar scene.
I'm concentrating on the ice shader first, then the snow second.
I had some advice quite a long time ago from tamte in regards to internal depth of ice (not sure if it was here or odforce), but since I have some time once again, I am returning to the fray.
Simple scene:
1. Displaced grid for floor. MountainSOP for main displacement, and shader displacements for secondary and tertiary.
2. Teapot is supposed to be shaded as a clear ice object with cloudy internal ice as found in most types of ice. Displacements are shader based and not geometry based.
The internal cloudy ice is not a mix of different shaders. It is a copy of the teapot, reduced in size and turned into a volume via the isoffset node. A uniform volume shader has been applied to this, in conjunction with a volumevop node to create some noise in the density values.
The lighting is very simple.
1x Area light for the main lighting with the area sizes scaled up for soft shadows.
1x Environment light with the Bosch HDR that ships with Houdini, with a slight blue colour to tint the light a bit from grey.
1x spotlight. This one is only to accentuate the internal volume. The light mask and shadow mask are set to the volume only.
Any suggestions are welcome. I should have an update in the next day or so.
WIP - Personal RnD on Houdini — Sept. 4, 2017 16:23
Hi Matteo.
Interesting thread. A lot of good stuff in here.
Just taking you back to your Sci-Fi corridor scene. I'm looking at your render settings, and I'm wondering why you have your pixel samples set to 8x8.
I try my best to keep the pixel sampling down as low as I can. Yes, it makes a nice clean image, but if you tackle each noise source individually, you can get an equally clean render with much lower render times than brute-forcing it with pixel samples.
So Houdini 16 has a lot of individual sampling sliders. Please use these!
For example: if you are using global illumination, you always get noise in the shadow areas, as you do with any other renderer. You can clear a lot of this with the pixel samples, but a much better way is to use the “Diffuse Quality” parameter.
For subsurface scattering, try the SSS Quality parameter before reaching for pixel sampling.
The trouble with pixel sampling is that if you use any other form of sampling (for example, Reflection Quality), the pixel samples are multiplied by each sample value set in Mantra. That can be a lot of samples.
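That multiplication is easy to sketch in plain Python. This is a rough back-of-the-envelope illustration of the scaling, not Mantra's exact internal sampling formula; the numbers are made up for the example:

```python
# Rough illustration of why high pixel samples get expensive:
# per-component quality values multiply the pixel sample count,
# so every extra pixel sample pays for itself many times over.
# Illustrative only - not Mantra's exact internal formula.

def rays_per_pixel(pixel_samples, quality):
    """Approximate rays fired per pixel for one component."""
    sx, sy = pixel_samples
    return sx * sy * quality

# Brute force: try to clean everything with 8x8 pixel samples alone.
brute_force = rays_per_pixel((8, 8), quality=1)   # 64 rays per pixel

# Targeted: modest 3x3 pixel samples, with extra quality spent
# only on the one component (e.g. diffuse) that is actually noisy.
targeted = rays_per_pixel((3, 3), quality=4)      # 36 rays per pixel

print(brute_force, targeted)
```

The targeted approach fires fewer rays overall, and all of the extra work goes exactly where the noise is.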
A better way is to use extra image planes.
Mantra -> Images -> Extra Image Planes. Choose the ones you want to inspect and render in MPlay or the render view (render view for me). Each of those image planes will be saved into the buffer, where you can choose from the rendered view (C), direct samples, indirect samples, etc. You can view these individually to troubleshoot and see which areas need additional sampling and which don't.
With this knowledge, you can improve specific areas of the render without resorting to huge render times.
Keep up the good work, some nice stuff going on so far.