I work at an educational institute that uses Houdini for teaching, and we want to provide a rendering facility/render farm/distributed rendering system for students. I'm wondering what options people would recommend. It needs to work with renderer plug-ins such as Arnold and Redshift.
We have trialled HQueue and it worked OK up to a point, but the render machines we were using had entry-level NVIDIA graphics cards with only 2 GB of VRAM, so renders of more complex frames kept failing after running out of graphics memory.
We are open to any and all options:
- Local render farm (Minimum 10 nodes)
- Distributed rendering on idle machines on the network. As an educational institute we have a lot of computers of varying performance levels and graphics cards that are not in constant use, so they sit idle quite often.
- Cloud rendering
- Amazon Web Services
- Microsoft Cloud
- Gridmarkets
- Any other options
The process for submitting renders needs to be easy for students to use and manage.
We'd appreciate feedback and recommendations for a suitable long-term solution.
Recommendations for Rendering facilities for education
Reply from SWest:
Hi, there is not much info to go on, but I'll try to give some generic advice. I designed much of the lab environment in my department for a high school programme related to IT and networks (not primarily CG). Rather than paying for cloud services, we built the lab floor (a couple of rooms) ourselves from the ground up. The benefits of that are, first, cost savings over time, and second, access: when you build something yourself from the ground up you should also be able to fix problems and add features over time. One rule I've been applying is a minimum hardware requirement for all machines, for example at least 8 GB of RAM and 500 GB of storage. Anything that does not meet that minimum is upgraded or moved to peripheral labs.
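As a minimal sketch of that kind of inventory check (not the exact tooling we use), assuming the third-party psutil library is installed on each host, something like this could flag machines that fall below the minimum:

```python
# check_min_spec.py - flag a host that falls below the minimum hardware level.
# Assumes the third-party psutil library is installed (pip install psutil).
import psutil

MIN_RAM_GB = 8      # minimum RAM, as in the example above
MIN_DISK_GB = 500   # minimum storage

def meets_minimum_spec(min_ram_gb=MIN_RAM_GB, min_disk_gb=MIN_DISK_GB):
    """Return (ok, details) for the machine this script runs on."""
    ram_gb = psutil.virtual_memory().total / 1024**3
    disk_gb = psutil.disk_usage("/").total / 1024**3
    ok = ram_gb >= min_ram_gb and disk_gb >= min_disk_gb
    return ok, f"RAM {ram_gb:.1f} GB, disk {disk_gb:.1f} GB"

if __name__ == "__main__":
    ok, details = meets_minimum_spec()
    print(("OK      " if ok else "UPGRADE ") + details)
```

Run it on each lab machine (or push it out with whatever deployment tool you already use) and anything printing UPGRADE goes on the upgrade or peripheral-lab list.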
On top of that we've installed a host monitor for the "cloud" hosts, so we can detect whether they lose network access before we need them. This reduces frustration.
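As a rough illustration of that kind of monitor (again, just a sketch, not the tool we actually run), assuming Linux-style ping flags and a hypothetical hosts.txt with one hostname or IP per line, a periodic reachability check could look like:

```python
# host_monitor.py - report render hosts that have dropped off the network.
# Assumes Linux-style ping flags (-c, -W) and a hypothetical hosts.txt file,
# one hostname or IP address per line.
import subprocess

def is_reachable(host, timeout_s=2):
    """Send a single ping and return True if the host answered."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", str(timeout_s), host],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

if __name__ == "__main__":
    with open("hosts.txt") as f:
        hosts = [line.strip() for line in f if line.strip()]
    down = [h for h in hosts if not is_reachable(h)]
    if down:
        print("Unreachable render hosts:", ", ".join(down))
    else:
        print("All render hosts reachable.")
```

Schedule it with cron (or the task scheduler of your choice) and you find out about dead hosts before a deadline does it for you.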
If you have, say, 10 machines with better GPU cards (4 GB and up) you could designate them as host group A. The weaker machines become host group B (for example another 10). You could also have host groups A to C, where A might have 8 GB GPUs.
Then, whenever someone wants to submit a render job, they first work out its GPU and RAM requirements and make sure to submit it to the right pool of hosts. You could simply name each pool after its minimum hardware, for example group A has 32 GB of RAM and 8 GB GPUs or better (a small helper along these lines is sketched below).
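As a minimal, renderer-agnostic sketch of that idea (the pool names and thresholds here are made up, and the actual submission would still go through HQueue or whichever manager you choose), picking a pool from a job's estimated requirements might look like:

```python
# choose_pool.py - pick a host group based on a job's estimated needs.
# Pool names and hardware thresholds are illustrative only; match them to
# however you actually group your render clients.
POOLS = [
    # (pool name, min RAM in GB, min GPU VRAM in GB)
    ("group_A", 32, 8),
    ("group_B", 16, 4),
    ("group_C", 8, 2),
]

def choose_pool(ram_needed_gb, vram_needed_gb):
    """Return the name of the smallest pool that satisfies the job."""
    # Pools are listed from most to least capable, so walk them backwards
    # and take the first one that still meets the job's requirements.
    for name, ram, vram in reversed(POOLS):
        if ram >= ram_needed_gb and vram >= vram_needed_gb:
            return name
    raise ValueError("No pool can satisfy this job; split or simplify it.")

print(choose_pool(ram_needed_gb=12, vram_needed_gb=3))   # -> group_B
print(choose_pool(ram_needed_gb=24, vram_needed_gb=6))   # -> group_A
```

Students then only need a rough idea of how heavy their scene is; the submitter maps that to a pool, which is exactly what avoids the out-of-VRAM failures described in the original post.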
That should solve the problem you mentioned having with HQueue.
Cheers!