Hi,
I'm rendering a project and I've got a few computers available to render with. I've searched high and low in the help files for vmantra command-line settings (or, for that matter, any mention that vmantra exists). How can I network render with vmantra the same way I can with mantra -H host1 host2… hostx etc.?
Vmantra multihost rendering
- djpeanut
- Member
- 9 posts
- Joined: July 2005
- edward
- Member
- 7899 posts
- Joined: July 2005
- old_school
- Staff
- 2540 posts
- Joined: July 2005
As Ed said, mantra is mantra.
Yes, you can still set up remote hosts to distribute buckets for the current frame, both to take advantage of dual-CPU workstations and to render on remote hosts. mantra -H still works the same.
If this is Linux, it tends to be more reliable. If it is Windows, search both the Side Effects and odforce forums to see what is involved in setting this up.
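For the record, a minimal command-line sketch of that (the hostnames and the IFD filename are placeholders, and depending on your build the host list may need to be comma-separated rather than space-separated, so check mantra's usage output):

    # save an IFD from your output driver, then hand it to mantra,
    # which distributes buckets across the listed machines
    mantra -H host1,host2,host3 < myframe.0001.ifd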
I do have to mention that in almost all cases it is better to distribute IFDs, not buckets. This means you need a proper piece of software to manage the distribution of files. It always amazes me that most schools do not employ some sort of renderfarm solution, whereas almost all production facilities do. Hmmmmmm.
Here are two render queue solutions that impress me for openness and completeness:
GridEngine (GridWare) (http://gridengine.sunsource.net/) is free, runs on Linux and did I mention it is free? There is also a grid ROP as well as good support files from some users.
Rush (http://seriss.com/rush-current/rush/index.html) is not free but also not expensive, is as open as GridEngine, and offers support for more platforms.
There are many other solutions as well.
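To make the distribute-the-IFDs approach concrete, here is a rough sketch of what a farm submission boils down to once a queue such as GridEngine is installed. The paths, job name and the "mantra" resource request are illustrative only, and assume the queue has a consumable "mantra" resource defined for license counting:

    # the output driver has already written one IFD per frame into ifds/
    for f in ifds/myshot.*.ifd; do
        # one grid job per frame; -l mantra=1 reserves one mantra license token
        echo "mantra < $f" | qsub -N mantra_ifd -l mantra=1 -cwd
    done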
There's at least one school like the old school!
- deecue
- Member
- 412 posts
- Joined: July 2005
It always amazes me that most schools do not employ some sort of renderfarm solution whereas almost all production facilities do. Hmmmmmm.
Actually, the Savannah College of Art and Design implemented a renderfarm about 1.5 to 2 years ago. All the systems in the building (except for the OS X boxes) were set to dual-boot Windows/Red Hat (now Fedora). All of them had a boot cycle between classes as well as an idle-time reboot that killed Windows and booted back into Fedora after 30 minutes or so. I don't know exactly what they had set up or how they did it, but it was all user-controlled through the intranet, where you could submit your project, cancel it, etc. They had Maya up and running first, then Houdini a couple of months later.
Figuring there were a crapload of computers in that building, things went pretty fast.
Dave Quirus
- old_school
- Staff
- 2540 posts
- Joined: July 2005
Yes, SCAD is one of the very few facilities that does have a renderfarm. It is using GridEngine. They use the GridEngine ROP (proto-install) plus other wrapper scripts modified from the basic tools that John Coldrick wrote.
That is why they have to boot into Linux: GridEngine only runs on Unix-based OSes.
There's at least one school like the old school!
- xiondebra
- Member
- 543 posts
- Joined: July 2005
jeff
They use the gridengine ROP (proto-install) plus other wrapper scripts modified from the basic tools that John Coldrick wrote.
Hi Jeff,
Grid engine ROP?
I don't see it in the $HFS/houdini/dso_proto dir in either H7 or H8 …
–Mark
========================================================
You are no age between space
- JColdrick
- Member
- 4140 posts
- Joined: July 2005
Run proto_install.sh… and pick:
19) ROP_Gridware.inst
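For anyone who hasn't run it before, a hedged sketch (the script's exact location can vary between Houdini versions, so adjust the path as needed):

    cd $HFS              # assuming proto_install.sh sits at the top of the Houdini install
    ./proto_install.sh
    # ...then choose 19) ROP_Gridware.inst from the menu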
It's worth pointing out, since it was brought up, that the rather old tools I posted on odforce are still useful, I suppose, but severely duct-taped and out of date with the latest SGE (v6). Personally, I would download those and perhaps read the README, but not use those scripts. A far more elegant solution (IMHO) is the gren.py script I uploaded to the Asset Exchange… it's one script that does everything they do, and more, far less messily, and it's eminently more readable if you're a Pythoner.
Since I was asked off-list by a couple of people recently just how hard it is to get SGE up and running on unixy systems, I'll mention that it requires the following:
1. Install vanilla SGE v6 from the link Jeff mentioned above. This requires installing a master system first (which should really be dedicated to running the grid and can't be an execute system), and then the execute clients on the others. The only fiddling you need to do is add a couple of entries to the /etc/services file (see the install instructions), and if you're running 2.6 kernels on Linux, you'll probably still need to hack around an install issue where they aren't recognized properly. This may have been fixed, but the hack involves linking the lx24 dirs to lx26 dirs, plus one small change to the utilbin/arch script to accept lx26 kernels. Anyway, it's in the message archives; try it first, they may have fixed it, but don't be discouraged if it breaks at first. It's a trivial, silly thing that's easy to fix.
2. Set up two “complexes” (another word for licensing tokens) called “hscript” and “mantra”. Add resources to the execute machines for those complexes based on the number of licenses you have. This can vary based on your setup… here we have limited rman and hscript tokens, but lots of mantra. We have rman set up as a global, consumable complex, meaning it can run anywhere as long as there are “n” licenses still free. hscript we run on specific systems: the workstations with dual procs (since once you have a full license running, you can run as many as you want), and any render-only hscript tokens. Mantra is everywhere. There's a lot of flexibility in how you set up your licensing. (A rough command sketch of steps 1 and 2 follows this list.)
3. Read up a bit on things, and you're good to go.
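Here's that sketch, with everything hedged: the arch names, hostname, ports and license counts are placeholders, the lx24/lx26 symlinks are only needed if the installer still misdetects 2.6 kernels, and the complex columns should be checked against the qconf documentation for your SGE version:

    # --- step 1 extras ---
    # /etc/services entries per the install docs (exact ports come from there):
    #   sge_qmaster   <port>/tcp
    #   sge_execd     <port>/tcp
    #
    # 2.6-kernel workaround: mirror the lx24 arch dirs and teach arch about lx26
    cd $SGE_ROOT
    ln -s lx24-x86 bin/lx26-x86      # repeat for utilbin/ and lib/ as needed
    # ...then add lx26 to the kernel check in utilbin/arch

    # --- step 2: consumable complexes plus per-host license counts ---
    qconf -mc
    #   in the editor, add lines along the lines of:
    #   hscript  hs   INT  <=  YES  YES  0  0
    #   mantra   mt   INT  <=  YES  YES  0  0

    qconf -me renderhost01
    #   complex_values   hscript=1,mantra=2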
I haven't used the gridware ROP; it seemed to me the initial install I looked at didn't let you submit hscript tasks to the grid that could in turn start spawning mantra renders, but that may have changed. That's where the real speed starts coming out: jobs spawning jobs. However, it seemed to work, and it might be a better starting point if you're not really a scripting sort of person and want a built-in solution.
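As a purely hypothetical illustration of the jobs-spawning-jobs idea (the script name, hip file, ROP path and IFD locations are all made up, and it leans on the hscript/mantra complexes from step 2): the wrapper itself runs on the grid holding an hscript token, writes the IFDs, then fans out one mantra job per frame.

    # spawn_shot.sh -- submit with:  qsub -l hscript=1 -cwd spawn_shot.sh
    # write one IFD per frame (the ROP is assumed to write into ifds/)
    echo "render -f 1 100 /out/mantra1" | hscript myshot.hip
    # then one mantra job per IFD
    for f in ifds/myshot.*.ifd; do
        echo "mantra < $f" | qsub -N mantra_ifd -l mantra=1 -cwd
    done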
Cheers,
J.C.
John Coldrick