Rendering tutorials

User Avatar
Member
210 posts
Joined: Jan. 2014
Offline
Gyroscope
You might've over cranked something.
I hope so badly I did.

Gyroscope
Box wasn't faceted so that's why the corners are washed out.
….
Fixing the faceting problem actually reduced the render time to 40 seconds.
That's good to know

Gyroscope
It's optimizing stuff like this (and the double sided light, reflective properties, GI Light) which can lead to long render times, which is stuff I'm after. Looking forward to more guru level (circusmonkey, others) tips regarding this.
And that's right, those tips and tricks are what I'm after too, but even knowing about these parameters I still don't know what to do with them, so it would be great if somebody could explain them in some detail.

Gyroscope
How long does it take your machine to calculate Final Gather? Generating Scene and calculating the GI Light only takes about a second in this scene in Mantra.
Generating scene: about 2 seconds
Total rendertime: 01:05 (faceted I'm down to 00:52)
Final Gather?

Gyroscope
If I were to critique your MR render… The sphere is floating, the roof looks like there is a small separation between the walls with the harsh black outline. And it's not really a 100% exact scene with Houdini as the light is not in the same position…
My initial intention wasn't to recreate the exact same, pixel-perfect equivalent; I just slapped together a roughly matching scene. To illustrate the idea, I don't think it matters whether the sphere is 0.045 lower in Y and the light 0.647 further back in Z; at least I don't think it would have changed render times drastically.
User Avatar
Member
75 posts
Joined: Feb. 2011
Offline
Korny Klown2
Gyroscope
How long does it take your machine to calculate Final Gather? Generating Scene and calculating the GI Light only takes about a second in this scene in Mantra.
Generating scene: about 2 seconds
Total rendertime: 01:05 (faceted I'm down to 00:52)
Final Gather?

Yes, even if Final Gather takes 5 seconds to process with Mental Ray that is time you have to include. I don't know if you're including those times.

Korny Klown2
Gyroscope
If I were to critique your MR render… The sphere is floating, the roof looks like there is a small separation between the walls with the harsh black outline. And it's not really a 100% exact scene with Houdini as the light is not in the same position…
My initial intention wasn't to recreate the exact same, pixel-perfect equivalent; I just slapped together a roughly matching scene. To illustrate the idea, I don't think it matters whether the sphere is 0.045 lower in Y and the light 0.647 further back in Z; at least I don't think it would have changed render times drastically.

Thanks for omitting the most important part (light in view) for the sake of a convenient argument. If your intention is not pixel perfect, or even remotely close, your comparisons and conclusions become even more sketchy. Especially when you're getting into the minutiae of second differences between the two methods.



Anyway, to round out my participation here and back up the claim in my first post, here is my work machine: Dual Xeon E5-2680 @ 2.7GHz, 16 threads.

Houdini - Faceted scene:
- 1m8s with no GI light for flicker free animation
- 25s with GI Light

3DS Max - Vray 3.0 No Embree:
- 31s using Brute Force primary, Light Cache secondary for flicker free animation
- 11s using Irradiance Map for primary and Light Cache for secondary for still image.

Using Embree shaved off another couple seconds.

Attachments:
3dsMax_Vray_11s.jpg (37.1 KB)

User Avatar
Member
210 posts
Joined: Jan. 2014
Offline
Gyroscope
Yes, even if Final Gather takes 5 seconds to process with Mental Ray that is time you have to include. I don't know if you're including those times.
Wait, I'm a little confused. Are you asking about the Mental Ray render or the Mantra one? MR is 00:43 total (final gathering + beauty).

Gyroscope
Thanks for omitting the most important part (light in view) for the sake of a convenient argument. If your intention is not pixel perfect, or even remotely close, your comparisons and conclusions become even more sketchy. Especially when you're getting into the minutiae of second differences between the two methods.
Ok, here is the scene with the light in view.
00:42

Attachments:
testsceneMaya.jpeg (97.6 KB)

User Avatar
Staff
2540 posts
Joined: Jul. 2005
Offline
Gyroscope, the gitest_250_55s.hipnc has non-ideal settings for the given scene.

With your scene as is loaded and rendered I am getting on my older MacBook Pro laptop:
indirect photons: 2.6s
Mantra: 59.24s

and the following settings that I will focus on:
Pixel Samples: 7x7
Min Ray Samples: 1
Max Ray Samples: 9
Noise Level: 0.01

By tweaking the values above, I get:
indirect photons: 2.6s
Mantra: 46.1s
with no real difference in noise.

with these settings:
Pixel Samples: 3x3
Min Ray Samples: 1
Max Ray Samples: 32
Noise Level: 0.01


If you have a lot of high frequency detail on the geometry such as texture maps or displacement maps, you can increase the Pixel Samples to resolve that direct geometric and texture detail.

For mitigating noise in the render, you can use the Min and Max Ray Samples coupled with the Noise Level. In the case of this scene, there is very little real detail to resolve, so increasing the Pixel Samples forces Mantra to fire an unnecessary 7x7=49 primary rays per pixel and then, for those areas that exceed the noise threshold, iterate in bundles of 49 rays per iteration until the noise threshold is met.

You want to use the noise threshold and the Max Ray Samples to let Mantra fire extra rays only where they are required to reduce the noise.

I didn't have to touch the Noise Level, leaving it at 0.01. By moving the rays from the Pixel Samples to the Max Ray Samples, you are now telling Mantra to fire only 3x3=9 rays in those areas that meet the noise threshold (most of this image) and, where they don't, to keep firing bundles of Pixel Samples up to the Max Ray Samples setting or until the Noise Level is met.

You can also add the additional image planes “direct ray samples” and “indirect ray samples” to see how many pixel-sample bundles were actually fired, which gives a good idea of how to tune the settings.

In the attached image, I ran the inspector over the direct_samples image plane. The grey areas all have a value of 1, meaning only 3x3=9 samples were fired. The bright white areas show 32 ray samples, or 9*32=288 rays fired. All the other areas had the noise threshold met before the Max Ray Samples limit was hit.

In those bright white areas where the value is 32, Mantra ran all the way up to the Max Ray Samples limit of 32 before it could reach the noise threshold. Even though this is cleaner than the original image at 7x7 Pixel Samples, it took a fair bit less time to render, relatively speaking.
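To make the sample-count arithmetic above concrete, here is a small back-of-the-envelope sketch (an illustration of the numbers discussed in this thread, not Mantra's actual implementation):

```python
# Rough ray-count arithmetic for Mantra's adaptive sampling,
# as discussed above: each iteration fires one bundle of
# Pixel Samples rays; up to Max Ray Samples bundles are fired
# before Mantra gives up on meeting the noise threshold.

def rays_per_pixel(pixel_samples, max_ray_samples):
    """Return (primary_rays, worst_case_rays) for one pixel.

    primary_rays: the initial bundle, sx * sy.
    worst_case_rays: total rays if the noise threshold is
    never met and every allowed bundle is fired.
    """
    sx, sy = pixel_samples
    primary = sx * sy
    worst_case = primary * max_ray_samples
    return primary, worst_case

# Original settings: Pixel Samples 7x7, Max Ray Samples 9
print(rays_per_pixel((7, 7), 9))   # (49, 441)

# Tweaked settings: Pixel Samples 3x3, Max Ray Samples 32
print(rays_per_pixel((3, 3), 32))  # (9, 288)
```

This shows why the tweak helps in a low-detail scene like this one: most pixels meet the threshold after the first 9-ray bundle, and only the troublesome areas (the white spots in the direct_samples plane) escalate toward the 288-ray worst case.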

If you wanted to really clean up this image, you can either lower the tolerable Noise Level below 1% (0.01), or increase the Max Ray Samples until the direct_samples image plane no longer maxes out at the Max Ray Samples value.

Hope this helps.

----

Note that generating the extra image planes does add an overhead to the render times so it is just used to get an idea as to where the trouble areas are. If you don't require the additional render layers, then turn them off for the final renders.

----
Have a look at this OdForce thread and at Jason's default settings for real-world values:
http://www.sidefx.com/index.php?option=com_forum&Itemid=172&page=viewtopic&p=100825#100825 [sidefx.com]

Attachments:
mantra_direct_samples.jpg (132.1 KB)

There's at least one school like the old school!
User Avatar
Staff
2540 posts
Joined: Jul. 2005
Offline
Now for my opinion on using variations of the Cornell Box as a real world render test. I wonder what the Cornell team would think of this thread (hint: probably not much at all).

First of all, the Cornell Box “test” exists to let you compare how accurately a render engine can match a real scene with real objects and real lighting:

http://www.graphics.cornell.edu/online/box/ [graphics.cornell.edu]

Very valid 5+ years ago. These days? Arguably not as critical, although still a worthy test if you want to examine the physical accuracy of a render engine's BSDF models and light transport.

Just creating your own box and comparing render engines in a subjective manner is just that: subjective. It has no real purpose other than arguing the “subjective quality” or appeal of the various image variations. If you were to create an identical CG scene matching the actual Cornell box with real surface properties, then compare the rendered image to the actual photographs, then that I'd buy.

Today, you can get very, very close to the actual Cornell Box with Mantra, Arnold, VRay, RenderMan, MentalRay and many other amazing render engines, given proper surface shaders and lighting.

If you want to see how fast you can render the Cornell box by adding caching schemes and acceleration methods, which by the way are all lossy approximations, and you can get close, fantastic. But they are lossy and prone to artefacts in certain situations. You are manufacturing light information by using various inspection techniques to augment a render engine's own internal light transport mechanisms, of which Final Gathering is one such approximation technique.

With 64-bit hardware and operating systems, healthy amounts of memory, very fast processors and threaded render engines, you can get decent render times with brute-force raytracers such as Arnold, Mantra, the latest RenderMan and other physically based render engines. VRay has a lot of various schemes as well and is also right up there quality-wise. The render landscape is changing rapidly, all in a positive way. With Houdini you have Mantra. A fantastic amount of choice.

As for the render tests above from Korny, I hate to say it, but compared to the real Cornell box it isn't even close, and that isn't something to argue about, as it is objective. Just the hard, pencil-like lines on the corners are not physically plausible unless you took the equivalent of a marker and painted them on the wall intersections of the real Cornell box, which would not make the profs very happy, as you would have defaced a quite famous artefact. There are other questionable artefacts as well.

Again, it is all subjective, and if you meet your client's quality requirements, fantastic! Everyone's happy, you get paid and move on. This is the real point, and there isn't an argument to be had, imho. If everyone is happy with what they are getting, great. Just remember that the goalposts are indeed moving in the render world, and clients, even those that aren't as sophisticated, will notice and will eventually request quality improvements.



These days if you are comparing render engines, you need to present a variety of different scenarios involving indoor and outdoor lighting. Objects with lots of fine detail. Instancing vs. everything unique, etc.

In film it is not unheard of to have upwards of 1,000 to 100k props in a scene. Some props may have as many as 10-40+ 8k-to-12k texture maps referenced by the shaders, with and without displacements, with DOF and motion blur all in camera. That new normal is also being hit in high-end commercial shots more often these days.

Another slowly emerging trend is the necessity to render everything in a single pass, including large amounts of geometry, volumetric data and more. The results of light bleeding into volumes, which in turn scatter light onto surrounding geometry, with a physically based render engine in the hands of a good lighter, are phenomenal. More and more approvals are happening in Lighting, but it is still early days. Eventually the majority of approvals will happen in the Lighting phase.

Interesting days ahead for sure. Although not as controversial, it reminds me of the off-line vs. on-line wars of the late-1990s post houses, where now you have: render passes vs. all together, comp up to beauty vs. beauty minus adjustments, and everything in between.

Stay flexible and learn as many render engines as you can, including Mantra.

And don't forget to pad the budget a bit to pay for the next bit of kit to take advantage of the various renderers and their vastly improving architectures and capabilities. Take that for what you will.
There's at least one school like the old school!
User Avatar
Member
75 posts
Joined: Feb. 2011
Offline
Awesome posts! Thanks so much jeff, this helps a whole lot. And I fully agree with your views on these tests and the current state of rendering, as I expressed these concerns earlier as well. Not as eloquently though.

Your settings got me to 40s on my home machine. Admittedly (obviously) I'm still learning the Mantra way of optimizing, so your breakdown is quite valuable. I have read those threads (and others) at one point, but thanks again for the reminder. The information never stuck though (it will this time). I find retaining information becomes harder when you're not actively in a place where it's immediately important. /too_much_to_learn_and_getting_old

Edit: Added expansions for jeff's great followup.
User Avatar
Member
4189 posts
Joined: Jun. 2012
Offline
jeff
Very valid 5+ years ago. These days?

True - it's all dragons, bunnies and, these meme days, surely a cat model too
User Avatar
Member
84 posts
Joined: May 2012
Offline
Thanks a lot Jeff for that set of tips (or should I say basic rendering knowledge..), and to everyone here helping.

I came to this thread for the name; I expected a compendium of rendering tutorials to learn from. After a few posts I just became interested in the increasing tension from the “mantra sucks!” kind of comments… admittedly I enjoy them from time to time… I love to read the polite guru comebacks. But then it started to get interesting with actual Mantra talk… and then I got to the last page and Jeff just dumped a ton of info onto my brain… and it all felt like time very well spent.

That's why I love this place
User Avatar
Member
4189 posts
Joined: Jun. 2012
Offline
increasing tensions with the “mantra sucks!” kind of comments…

Like sands through the hour glass… tune in for regular monthly broadcasts of the Mantra soap opera.
User Avatar
Member
35 posts
Joined: Jan. 2016
Offline
robonilla
Thanks a lot Jeff for that set of tips (or should I say basic rendering knowledge..), and to everyone here helping.

I came to this thread for the name; I expected a compendium of rendering tutorials to learn from. After a few posts I just became interested in the increasing tension from the “mantra sucks!” kind of comments… admittedly I enjoy them from time to time… I love to read the polite guru comebacks. But then it started to get interesting with actual Mantra talk… and then I got to the last page and Jeff just dumped a ton of info onto my brain… and it all felt like time very well spent.

That's why I love this place

Same here. The last page was like a reward.

I was so lost for several months (I still am), trying to understand how I should do things in Houdini. I can understand the frustration as well: the feeling of learning everything from scratch again (after spending countless hours of my life learning Max, Blender and some Maya), with extremely limited time. But we just keep trying, right? After watching almost all GoProcedural tutorials and digging through the documentation, I can say one thing: the amount of help we can get is getting better and better. I still see that there are “gaps”: the tutors sometimes assume too much, or e.g. in official videos something is shown and explained fast, but with some key things omitted. An example file is often the last thing that saves the day, but it is not always included. Also, things become outdated fast (VOP SOPs vs. Attribute VOPs hit me hard), which adds to the confusion.

My latest problem was one detail in the PBR shader. It's a great addition for more general users like me, who just cannot afford enough time to build custom shaders from scratch every time (and still have a ton to learn in every aspect of the software), and I just couldn't find the “checkbox” for using opacity in textures. Well, there is none! I was amazed that this artist-friendly shader lacks such a basic function. The solution I know of is to allow editing of the shader (but then it stops being a Digital Asset) and connect the Texture to Opacity inside. But if you save now, you change the default Digital Asset, which is not “clean”. I tried to save it as a 2.0 version but it still replaced the default one.

Just one example of the simple things that keep getting in the way all the time. But Houdini is amazing and I won't stop trying to learn its ways. I also love how SideFX is handling things (tutorials, staying in touch with and listening to users, the recent webinar).

Good luck everybody
Edited by Lukas Stolarski - May 26, 2016 12:23:13
---
http://madfusion.eu [madfusion.eu]
https://linktr.ee/loogas [linktr.ee]