Send one job per work item to deadline in a PDG network

Member
3 posts
Joined: Nov. 2021
Hi!

I'm using Houdini 20.5 and need to submit a PDG network to a Deadline render farm. The farm is working, and I've successfully managed to send the TOP network to be executed as a job on the farm.

However, by default, it submits everything as a single job, with all work items running within that job on the same machine. Is there a way to configure the Deadline Scheduler so that each work item is submitted and executed as a separate job, potentially running on different machines? I've been searching for a solution but haven't been able to figure it out.

Thanks in advance for any help!
Member
82 posts
Joined: Nov. 2017
I'm interested in this too. I tried partitioning the work items, hoping each partition would become a job with its work items as tasks. I also tried batches in the ROP, hoping each batch would become a job with its frames as tasks, but no luck with either.
Member
18 posts
Joined: July 2018
I ran into similar problems previously and tried to build a custom tool that saves each work item as an individual hip file and submits them to Deadline one by one automatically. It works to some extent, but it's still buggy. For PDG farm integration, I'd say HQueue works best.
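
To illustrate the idea, here is a very rough sketch (not the actual tool). Everything in it is an assumption: the temp directory, the ROP path, and the job/plugin info keys follow the stock Deadline Houdini plugin, but you'd need to check them against your own Deadline repository, and deadlinecommand has to be on the PATH.

    import os
    import subprocess
    import hou

    def submit_work_item_as_job(index, frame, hip_dir="$HIP/pdg_jobs"):
        """Save a copy of the scene for one work item and submit it with deadlinecommand."""
        hip_dir = hou.expandString(hip_dir)
        os.makedirs(hip_dir, exist_ok=True)

        # Save a dedicated hip file for this work item.
        hip_path = os.path.join(hip_dir, "item_{:04d}.hip".format(index))
        hou.hipFile.save(hip_path)

        # Write the Deadline job info / plugin info files (key names follow the
        # stock Houdini plugin, but verify them against your repository).
        job_info = os.path.join(hip_dir, "item_{:04d}_job.txt".format(index))
        plugin_info = os.path.join(hip_dir, "item_{:04d}_plugin.txt".format(index))
        with open(job_info, "w") as f:
            f.write("Plugin=Houdini\n")
            f.write("Name=pdg_item_{:04d}\n".format(index))
            f.write("Frames={}\n".format(frame))
        with open(plugin_info, "w") as f:
            f.write("SceneFile={}\n".format(hip_path))
            f.write("OutputDriver=/out/mantra1\n")  # hypothetical ROP path

        # Manual submission: deadlinecommand <job info file> <plugin info file>.
        subprocess.check_call(["deadlinecommand", job_info, plugin_info])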
Member
82 posts
Joined: Nov. 2017
Yeah, that's also an option. There's overhead in maintaining two farm managers, though, so I'll think about it.
Member
3 posts
Joined: Nov. 2021
Hi!
After some trial and error I found a solution. It's very simple, but it can be a little confusing at first.
I'm using Houdini 20.5; in previous versions the behaviour might be different, as indicated in the second note on the documentation page (Deadline Scheduler TOP docs [www.sidefx.com]).

What I wanted was for each work item to be executed as a separate job on the farm. For that, I needed to set the Default TOP Scheduler of the topnet to a Deadline Scheduler (the one inside the network). You also need to check Inherit Local Environment on the Deadline Scheduler (Job Params tab > Deadline Command Environment).

With those two settings, when the top network is cooked, each work item is sent to the farm as a separate job.
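
In case anyone wants to script it, here is a minimal sketch of those two settings in Python. The node paths are hypothetical, and while "topscheduler" should be the Default TOP Scheduler parm on the topnet, the internal name of the Inherit Local Environment toggle is a guess; check the parameter tooltips in your own build before relying on it.

    import hou

    topnet = hou.node("/obj/topnet1")                 # hypothetical TOP network path
    deadline = topnet.node("deadlinescheduler1")      # hypothetical Deadline Scheduler node

    # Point the network's Default TOP Scheduler at the Deadline Scheduler node.
    topnet.parm("topscheduler").set(deadline.path())

    # Enable "Inherit Local Environment" (Job Params tab > Deadline Command Environment).
    # The internal parm name below is a guess; hover over the label to confirm it.
    env_parm = deadline.parm("envinheritlocal")
    if env_parm is not None:
        env_parm.set(1)

    # Cook the graph; with the settings above each work item goes out as its own job.
    # You can also just cook from the UI as usual.
    out = topnet.node("output0")                      # hypothetical output TOP node
    out.executeGraph(False, True, False, False)       # filter_static, block, generate_only, tops_only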

I hope this helps anyone having a similar problem.
Member
82 posts
Joined: Nov. 2017
If you want to keep everything on the farm, you can do a 'submit as job' of the root node of your network. That job will execute on the farm and keep running to manage the rest of the jobs until it's all done.
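
For reference, that can also be triggered from Python by pressing the Submit Graph As Job button on the scheduler node. A rough sketch, assuming the same hypothetical node path as above and a guessed internal name for the button; verify it via the parameter tooltip first.

    import hou

    deadline = hou.node("/obj/topnet1/deadlinescheduler1")  # hypothetical scheduler path
    submit = deadline.parm("submitgraph")                    # assumed name of "Submit Graph As Job"
    if submit is not None:
        # Sends a single wrapper job that cooks the whole graph on the farm and
        # stays running there to spawn and manage the rest of the jobs.
        submit.pressButton()
    else:
        print("Button parm not found; check its internal name on the Deadline Scheduler node.")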