Thoughts on mantra
- SreckoM
- Member
- 379 posts
- Joined: Dec. 2006
- Online
- galagast
- Member
- 252 posts
- Joined:
- Offline
@Olaf, Thanks! It indeed greatly reduced the noise compared to my last render.
Just to confirm, when you say Quality of the GI Light, you mean the Global Quality in the Mantra ROP's sampling (using Ray Variance)?
I cannot seem to find a Quality parameter from the GI Light object, except for the Indirect Diffuse Tab's Sampling Quality. But the tooltip says that it is ignored by the PBR.
@SreckoM, cool! I'll play around with adding Area Lights then. Thanks!
Attached image changes:
- Diffuse Limit: 3 -> 1
- Mantra Global Quality: 1 -> 10
- GI Light Photon Count: 10,000 -> 1,000,000
* Increase in render time.
* Still has a bit of noise.
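The trade-off above (10x the Quality, much longer render, still some noise) matches basic Monte Carlo behavior: sampling error falls off as 1/sqrt(N), so 100x the samples buys only about 10x less noise. A toy illustration in Python; the estimator here is a stand-in for the idea, not Mantra's actual sampler:

```python
import random
import statistics

def render_pixel(samples, seed=0):
    # Toy Monte Carlo estimator: average "samples" random light contributions.
    rng = random.Random(seed)
    return sum(rng.random() for _ in range(samples)) / samples

def noise(samples, trials=200):
    # Spread of the pixel estimate across repeated independent renders.
    values = [render_pixel(samples, seed=t) for t in range(trials)]
    return statistics.stdev(values)

low_quality = noise(10)     # few rays per pixel -> noisy
high_quality = noise(1000)  # 100x the rays -> roughly 10x less noise
```

This is why brute-forcing Quality alone gets expensive fast, and why the area-light setup later in the thread pays off: it attacks variance at the source instead of averaging it away.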
Edited by galagast - Oct. 20, 2017 12:23:02
- galagast
- Member
- 252 posts
- Joined:
- Offline
I revisited @cpb's area light setup to try and recreate the above GI Lit image.
Learned quite a lot about faking GI!
Current settings:
- Diffuse Limit: 0
- GI Light: Disabled
- Sun + Env + Area Lights
* Renders much faster.
* Significantly less noise.
* Noise is throttled by the Area Light's Sampling Quality.
H16.0.745 Indie
Edited by galagast - Oct. 20, 2017 12:23:39
- SreckoM
- Member
- 379 posts
- Joined: Dec. 2006
- Online
I do not think you can sell an image without GI nowadays in the archviz market, even if you do it perfectly. The point is not just to get a fast, noiseless render; it needs to look good too. Mantra is not built for archviz, especially not for freelancers or smaller studios; its strength lies in a different realm.
One thing I would like to see in Mantra is a straightforward tonemapping option, like Filmic or Reinhard RGB, and the option of using 3dl or cube LUT formats.
The attached image is tonemapped with Filmic.
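For reference, the global Reinhard operator mentioned here is tiny; a minimal sketch (this is the simple luminance form, not the full local version, and real pipelines apply it per channel or on luminance with chroma preserved):

```python
def reinhard(luminance):
    """Global Reinhard tone map: compresses HDR values into [0, 1)."""
    return luminance / (1.0 + luminance)

# Midtones survive almost unchanged while highlights roll off smoothly:
hdr_values = [0.05, 0.5, 1.0, 4.0, 100.0]
mapped = [reinhard(v) for v in hdr_values]
```

Filmic curves behave similarly but add a toe and shoulder; either could in principle be applied after the render in COPs, which is where the LUT request fits in.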
Edited by SreckoM - Oct. 20, 2017 17:43:51
- anon_user_40689665
- Member
- 648 posts
- Joined: July 2005
- Offline
textured update
Can't get the whole textured scene under the forum 15mb limit… grab textures from here:
https://download.corona-renderer.com/sample_scene.zip [download.corona-renderer.com]
save textures to $HIP/map
save attached bgeo.sc to $HIP/geo
Attached pic took about 2 min 30 sec at HD on the Threadripper 1950X (including COPs CC). Textures add more noise, so I had to up the pixel samples a notch; also rendered at full HD.
Tempted to rebuild the scene Houdini style… but dealing with cruddy source is our lot in life.
Faking GI can buy you a lot of time to go nuts on what the viewer will actually notice… tho granted the viewer will notice a lot more with stills than with animation.
Would be good to apply .cube files, but I think CC can be done just as well with COP VOPs. Will have a look at un-clamped ramps tomorrow.
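The .cube format itself is plain text, so pre-parsing one outside of COPs is straightforward. A minimal sketch of a 3D .cube reader (parse_cube is a hypothetical helper; real files may also carry TITLE and DOMAIN_MIN/MAX keywords, which this skips over):

```python
import io

def parse_cube(text):
    """Parse a minimal 3D .cube LUT: returns (size, list of RGB triples).

    Entries are kept in the file's standard red-fastest order.
    """
    size, entries = None, []
    for line in io.StringIO(text):
        line = line.split('#')[0].strip()  # drop comments and blanks
        if not line:
            continue
        if line.startswith('LUT_3D_SIZE'):
            size = int(line.split()[1])
        elif line[0].isdigit() or line[0] in '-.':
            entries.append(tuple(float(v) for v in line.split()))
    return size, entries

sample = """
# identity 2x2x2 LUT
LUT_3D_SIZE 2
0 0 0
1 0 0
0 1 0
1 1 0
0 0 1
1 0 1
0 1 1
1 1 1
"""
size, lut = parse_cube(sample)
```

Applying the LUT is then a trilinear lookup into that grid, which is exactly the kind of thing a COP VOP ramp setup can approximate.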
- anon_user_40689665
- Member
- 648 posts
- Joined: July 2005
- Offline
- lewis_T
- Member
- 250 posts
- Joined: March 2013
- Offline
I have a question about your VRay pyro render. Did you change the mode from “approximate and shadows” to raytrace?
Because unless it's in “raytrace” with GI enabled, VRay isn't actually ray tracing at all. It's doing an absolute
hack, which is unusable in production. That pyro example, which I will have a look at, is not production worthy.
And that's the point. I've just spent a year bashing my head against the wall trying to get the lighting dept to
be able to render volumes in Vray that look as good as Mantra, and also, aren't taking forever. I'm talking actual
production comparisons. That's not meant to be insulting, it's just that the pyro you're using as an example is not
what we face in production. Try very transparent dust, swirling fast with deformation blur, and add a camera flying
through the dust (which means transform blur too!), and add millions of particles as grit being used as hold outs to
give the dust a specific look. That was un-renderable in Vray. I'm talking massive memory usage, and a long time to render. Mantra ate it for breakfast and still had time to eat an apple.
I'm not shitting on Vray and defending Mantra, all engines have room for improvement. But it's the test scenes you
are putting forth that totally skew the results. Pyro would never be rendered without motion blur in production.
Cheers
I'm not lying, I'm writing fiction with my mouth.
- racoonart
- Member
- 43 posts
- Joined: Feb. 2017
- Offline
This time I didn't try to disable all the approximations and caches in Vray to match Mantra, because that just wouldn't make sense. They are there to make rendering faster, and as long as it looks as good as without them I don't think you should disable them; it's all about the result anyway. I first did the Houdini sim and render and then tried to match it in Vray. The defaults in Vray are way lower and render in a few seconds. I increased several settings to something I would call “usable for production”.
Nope, I did not, as mentioned in the pyro post quoted above. I haven't had any issues so far with it, but as I said, I probably have a different perspective on that matter (and definitely different productions). Please contribute a scene which is not renderable in Vray. This whole story goes both ways: if Vray is not able to render something, I can report a problem in their forum and maybe we'll get improvements.
- lewis_T
- Member
- 250 posts
- Joined: March 2013
- Offline
Okay, well that's kinda the point. Vray at default doesn't raytrace through the volumes, so in the scenario I mentioned
it was absolutely unable to deal with it. That's not Vray bashing, just a fact. I can't supply you with a feature film
scene of the complexity I'm talking about, it would take me weeks to re-create all the elements at home.
No need to report it to the forum, I talk to the devs directly.
Not disabling them, and therefore not switching to actually raytracing the volume, is a “cheat”, as it's simply shading based
on brightness, not actually marching through the volume. That's why it renders in seconds. I can whip up a highly
transparent, very tendril-like smoke sim that you will see looks like soft mush in Vray when you don't switch to
raytrace mode. It's unusable. So switching to raytrace will yield very close results to Mantra, nice and crisp, looks
great. But it's 3-4 times slower. Those un-renderable scenes I was talking about were using over 100gb of ram in Vray,
didn't look as good, took 6hrs a frame to render too.
I'll look at your supplied pyro smoke scene, and I will make a scene with the smoke mentioned above, and some Vray
renders. Once again, Vray at defaults is heavily biased to cheat the volume rendering. But on very dense examples like
the one you have, it can look pretty close in way less time. It's when you have way more transparency, deformation blur, and multiple scattering bounces that the default method Vray is set to, will render it like soft mush.
I'm not anti-Vray or a Mantra disciple. I have used both for about 10 years, and use both heavily at work every day.
So I see both sides of the coin, in a variety of scenarios. Hands down in most of the work we do, Mantra, though quite possibly slower per frame in a few situations, wins out due to flexibility. And not just flexible in the traditional
sense, but flexible in the “how the hell are we going to fix/deal with this fuck up?” sense.
Cheers
- Olaf Finkbeiner
- Member
- 323 posts
- Joined: Jan. 2015
- Offline
- stefan05e81cea6f0b4809
- Member
- 8 posts
- Joined: July 2016
- Offline
This thread has been a great read, and I am thankful it has recovered so nicely after almost going south on page 2.
I'm pretty much in the same boat as racoonart on this. While I do own a Redshift license (it's that single-machine renderfarm under the table I could not afford otherwise), I also know that there are a couple of things it cannot render that Mantra can.
From my point of view, the reason why some people hope for improvement in this regard is that, over the last couple of years, we have seen SESI investing more into making Houdini accessible to a wider audience, with an emphasis on getting smaller studios and freelancers on board (esp. since Softimage was officially discontinued). While I don't know the sales numbers, I would attest that these endeavours were successful, at least measured by the community's seemingly increased awareness of Houdini in general.
While this development seems to have affected the majority of departments in Houdini, the one that seems to have been left behind is ease of use of the renderer, i.e. getting it to render fast and noise-free on the majority of scenes (leaving aside those special cases that only Mantra can render due to its deep integration, and maybe specialties like pyro volume rendering with motion blur and GI and what not).
I agree that in many cases this can be fixed by adjusting the scene, changing lighting, optimizing shaders etc. (Photon maps? Really?), but all this costs human labor time, which is the most expensive of all, and I dare to insinuate that it is this circumstance that makes people hope and wish for improvements in this regard as well. At least that's what it is for me.
As for Embree: I remember this has been a long-standing topic over at the Chaos Group forum. While early tests looked promising (some 20-30% speed increase), it took them well more than a year to decide to ship it in production builds, mostly because it meant giving up some of their own core functionality in favor of another they had little control over. So while it may be faster out of the box, it also implies the increased risk of introducing third-party technology into their code. Also, the question is not only about what underlying technology to use (Embree, or raytracing on the GPU, or whatever), but also how to use that technology. Things like importance sampling, bidirectional path tracing, etc. have a huge influence on render time, not just how many rays you can trace per second.
I am a long-time reader of the Arnold user forum and keep a close eye on its change lists, and it was not so long ago that Arnold was (and still is, in comparison to Redshift) slow at rendering scenes that are mostly illuminated by indirect lighting. In fact all renderers have that problem, and it is not simple to solve, no matter whether you implement Embree or even rewrite the whole renderer to run on the GPU, which comes with its own set of problems, as illustrated by a podcast by Vlado from Chaos Group (unfortunately the link is dead: http://nvidia.fullviewmedia.com/gtc2014/S4779.html [nvidia.fullviewmedia.com], does anyone have a working one?) where he goes over the issues they had getting Vray to run on the GPU in such a way that it was fast enough to make it worthwhile. The main problem, if I recall correctly, was that code would execute slower the more complex it was, so they had to restructure it massively into smaller chunks to run faster. Panos of Redshift once wrote as a comment to this podcast:
“If anyone needed proof of that, all they had to do is look at the multitude of controls in the materials/lights/objects/lighting techniques. Pretty much every single competing solution is missing the vast majority of them. It's not that these guys cannot figure out basic things like per-light shaders or per-object light lists or proper visibility flags or deformation blur; the problem is that, collectively, all these little features add up to a gigantic amount of GPU code. And, unless you have an architecture that can ‘take it', supporting all of these can slow down your GPU execution considerably. We decided that people care about all these little things, so we had to figure out a way to support them without destroying performance!”
All that being said and written: +1 for rendering speed improvements, independent of how they are achieved :-)
Also, I totally second what racoonart wrote about previewing an ongoing render to disk:
“Rendering to disk AND being able to watch the progress in mplay would be most welcome.”
I figured out the hard way over the last couple of days that this is indeed currently not possible! That's quite a bummer and highly unexpected, speed penalty or not. Maybe adding an option to only send RGBA to mplay and not all AOVs could speed things up? Either way, not seeing whether renders come out all right until after the first image, which might take hours, is highly disconcerting. I hope this gets addressed at some point.
Stefan
Edited by stefan05e81cea6f0b4809 - Nov. 9, 2017 06:19:08
- galagast
- Member
- 252 posts
- Joined:
- Offline
Just an addendum to Stefan's post regarding rendering to disk+mplay:
I also previously requested for this feature.
In H16.5 they added something close, but only for flipbooks.
I had already forgotten about my post here from last year, so I updated it a day ago (and sent an RFE!)
https://www.sidefx.com/forum/topic/45545/ [www.sidefx.com]
On the topic: there really is quite a lot of stuff that I think is done uniquely in Houdini's Mantra. It would be interesting to have a book or an online resource that is mainly focused on Mantra.
The one from the docs is a great start: http://www.sidefx.com/docs/houdini/render/index.html [www.sidefx.com]
Most notable is the Mantra User Guide.. the images included really did tell a lot about the settings.
What I might be interested to see next is maybe an update to this docs portion: http://www.sidefx.com/docs/houdini/render/quality [www.sidefx.com]
Add more image examples/comparisons with render times
In short, I wish there were a Mantra version of the Vray Complete [www.francescolegrenzi.com] book. Given how fast (or slow) rendering technologies change, I don't mind buying a physical guide book.
- BradThompson
- Member
- 67 posts
- Joined: March 2017
- Offline
Since we're talking about ways to speed up Mantra:
I'm a relatively new Houdini user, acting as a bit of a guinea pig for a small team that's trying to shift away from Autodesk products. One thing that would simplify and speed up the rendering pipeline for groups like mine would be to remove the need to pre-generate IFDs before running a job on our render farm. As far as I can tell, this is just a licensing restriction.
I've managed to figure out how to use packed primitives to make IFD generation really quick, but it wasn't easy to learn, it frequently slows down my creation process, and it's another thing to manage. Most other integrated renderers seem to be able to create their version of an IFD, per-frame, on-the-fly on each render node.
I like having the option of IFD pre-generation, but it feels like an unnecessary impediment when I want to quickly throw a test in the render queue. I feel like there must be a better way to separate what you would need a Houdini Engine license for, and what you would need a mantra license for. Just my $.02.
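For context, the two-step workflow being discussed looks roughly like this (a sketch; the scene file, ROP path, frame range, and IFD file names are placeholders, and the Mantra ROP needs its “Disk File” option enabled so it writes IFDs instead of rendering directly):

```shell
# 1) Export per-frame IFDs with a licensed hbatch session (no GUI).
echo 'render -f 1 10 /out/mantra1; quit' | hbatch scene.hip

# 2) Render each exported IFD with a free mantra token on any farm node.
mantra -f ifds/scene.0001.ifd
```

It is step 1, the hbatch/Engine license requirement, that the posts above and below are debating; step 2 costs nothing.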
- blackpixel
- Member
- 182 posts
- Joined: April 2009
- Online
BradThompson
Most other integrated renderers seem to be able to create their version of an IFD, per-frame, on-the-fly on each render node.
I like having the option of IFD pre-generation, but it feels like an unnecessary impediment when I want to quickly throw a test in the render queue. I feel like there must be a better way to separate what you would need a Houdini Engine license for, and what you would need a mantra license for. Just my $.02.
Yes, but all of the “other integrated renderers” require a separate license to render or eat a host app license. Mantra tokens are free.
This makes Houdini/Mantra very affordable, considering you could either quickly create IFDs from your workstation or have a handful of Engine licenses do the work for you. Houdini Engine can also be used for simulations and batch tasks that are not related to FX work, so they are in no way a wasted investment.
- BradThompson
- Member
- 67 posts
- Joined: March 2017
- Offline
blackpixel
Yes, but all of the “other integrated renderers” require a separate license to render or eat a host app license. Mantra tokens are free.
I'm coming from a 3DS Max background where, up until last year, both the integrated renderers (Scanline and MentalRay) and FinalRender included 999 network render licenses at no additional charge. The removal of free network rendering is one of the primary reasons we are trying to move away from Autodesk and toward Houdini.
I'm not suggesting making Houdini Engine licenses free, only the IFD generation and probably some of the geo prep stuff so that I don't have to mess about with remembering to pack/cache every little thing, or troubleshoot why IFD generation is taking so long. Let the farm do that so that I can get back to iterating. I acknowledge that I'm new to Houdini, but so far, I don't see the benefit of having to manually manage that process most of the time.
- symek
- Member
- 1390 posts
- Joined: July 2005
- Offline
blackpixel
BradThompson
Most other integrated renderers seem to be able to create their version of an IFD, per-frame, on-the-fly on each render node.
I like having the option of IFD pre-generation, but it feels like an unnecessary impediment when I want to quickly throw a test in the render queue. I feel like there must be a better way to separate what you would need a Houdini Engine license for, and what you would need a mantra license for. Just my $.02.
Yes, but all of the “other integrated renderers” require a separate license to render or eat a host app license. Mantra tokens are free.
This makes Houdini/Mantra very affordable, considering you could either quickly create IFDs from your workstation or have a handful of Engine licenses do the work for you. Houdini Engine can also be used for simulations and batch tasks that are not related to FX work, so they are in no way a wasted investment.
BradThompson has a point here though. I don't mind using IFD files; I love them, and they give us a great deal of flexibility. But to take full advantage of them you need a lot of scripting, building an entire pipeline around them. For novice users they are, practically speaking, completely redundant. SESI used to have an hbatch license suitable for running only rendering; also, big companies with a site license of hbatch don't bother with IFD files, afaik.
Last but not least, managing IFD files puts a lot of pressure on the IT department, which, again, makes using Houdini/Mantra harder rather than easier.
From an accountant's standpoint, Maya comes with free batch licenses which can be utilized on the farm by third-party renderers. Assuming a moderate farm size, this means that with a couple of seats for artists you've already provided yourself with Maya batches. It's very comfortable for studios without strong IT support.
As I understand it, binding the Mantra cost to hbatch is a purely political decision by SESI (Mantra is not free in a way, as in practice you always need hbatches, but the ratio makes the price super cheap). It works more than great for us, but causes headaches for novice users.
It's worth letting SESI know about it. It seems they are all about pleasing newcomers these days.
- anon_user_40689665
- Member
- 648 posts
- Joined: July 2005
- Offline
- CiaranM
- Member
- 31 posts
- Joined: June 2009
- Offline
- symek
- Member
- 1390 posts
- Joined: July 2005
- Offline
cpb
I strongly believe hbatch should be free for compositing & ifdgen, however it would deprive Sidefx of a source of income.
I definitely didn't mean that. Your pricing from a rendering perspective is already very aggressive, and we appreciate that! Quite the opposite, and against my own interest: I wouldn't mind slightly higher prices if that could mean throwing more development cycles at lighting and rendering. If that could happen without increasing pricing, even better.
What I understand from the above comments is that the licensing policy influences the usability of the software for new users. I won't pretend I know the answer. Maybe something like being able to choose between equally priced and termed (yearly rental) options: either N hbatches or N*x rbatches.
- jpparkeramnh
- Member
- 178 posts
- Joined: Jan. 2013
- Offline