Please help me understand PDGDeadline

Member
8810 posts
Joined: July 2007
Are the partitions you are talking about something on the Deadline side?
If you mean wrapping all work items into a single partition on the TOP side, then I imagine you lose granular dependencies on the input work items, essentially turning them into a wait-for-all, which can be undesirable in many scenarios. It's also not very friendly towards truly dynamic work items, or towards mixing with other types of partitioning, since this sounds like an otherwise unnecessary additional partition wrapper, and I don't know if you can nest partitions.
Tomas Slancik
FX Supervisor
Method Studios, NY
Member
40 posts
Joined: May 2019
Indeed, we mean waiting for all (or part) of the upstream tasks. Partitions could contain batches that translate directly to batches in Deadline. The way Deadline works, it takes a frame range and a path template and determines the job tasks itself.
It's true that we might lose some efficiency this way, but it will be solid and avoid the race condition of dynamically producing Deadline tasks.
Edited by monomon - Oct. 8, 2024 03:57:20
Member
85 posts
Joined: Nov. 2017
Partitions are on the Houdini side: when we have work items that are per frame, we can partition them into work items that are per job. Deadline needs the frame range for the job; we can provide that from a work item attribute.
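A rough sketch of that per-frame to per-job mapping in plain Python (illustrative only; these function names are mine, not the PDG partitioner API):

```python
# Group per-frame work items into per-job chunks and derive the
# frame-range string a Deadline job expects. This only mimics what a
# TOP partitioner would do; it is not the actual PDG API.

def partition_frames(frames, frames_per_job):
    """Split a sorted list of frame numbers into contiguous job chunks."""
    return [frames[i:i + frames_per_job]
            for i in range(0, len(frames), frames_per_job)]

def frame_range_attr(chunk):
    """Deadline-style frame range string for one job, e.g. '1-24'."""
    return f"{chunk[0]}-{chunk[-1]}" if len(chunk) > 1 else str(chunk[0])

frames = list(range(1, 241))          # 240 per-frame work items
jobs = partition_frames(frames, 24)   # 10 per-job work items
ranges = [frame_range_attr(c) for c in jobs]
print(ranges[0], ranges[-1])          # 1-24 217-240
```

Each chunk would become one per-job work item, with the range string stored as the attribute that the Deadline submission reads.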
Staff
1282 posts
Joined: July 2005
Hi All,

Just an update. We recently had discussions with AWS Thinkbox regarding the challenges that Houdini users are facing with the new H20.5 PDG Deadline Scheduler. In light of these discussions, as well as new limitations reported by users, we have decided to add a toggle to the PDG Deadline Scheduler that will enable you to switch between the H20.0 behaviour of one-job-of-many-tasks (which will be the default) and the H20.5 behaviour of one-batch-of-many-jobs.

Our aim is to roll the new toggle into an upcoming H20.5 production build update, hopefully some time near the end of November or the beginning of December.

Cheers,
Rob
Member
28 posts
Joined: Aug. 2017
That's awesome. I'm definitely going to use the toggle and try my luck with the old approach. It's good to have both solutions.
Edited by alexmajewski - Nov. 1, 2024 06:36:51
Member
10 posts
Joined: Aug. 2017
rvinluan
Hi All,

Just an update. We recently had discussions with AWS Thinkbox regarding the challenges that Houdini users are facing with the new H20.5 PDG Deadline Scheduler. In light of these discussions, as well as new limitations reported by users, we have decided to add a toggle to the PDG Deadline Scheduler that will enable you to switch between the H20.0 behaviour of one-job-of-many-tasks (which will be the default) and the H20.5 behaviour of one-batch-of-many-jobs.

Our aim is to roll the new toggle into an upcoming H20.5 production build update, hopefully some time near the end of November or the beginning of December.

Cheers,
Rob

Hey @rvinluan,

Could I ask you a question regarding Deadline Scheduler speeds compared to the Local Scheduler, since you seem to be the right person to ask this type of question?

I am comparing a very fast Axiom sim that takes 5 seconds to simulate 240 frames. With the Local Scheduler via TOPs the sim takes around 10-20 seconds, while the same sim takes about 1.5 minutes via the Deadline Scheduler (1 minute and 40 seconds, to be precise). The difference is that the Local Scheduler prepares 20-30 tasks on the fly (at least it looks that way in the UI), while the Deadline Scheduler always has only 1-2 items in flight. I deliberately don't use "Cook Frames As Single Work Item" for the sim here, as an example of the speed difference between the schedulers. Is it supposed to work like this?
I am on H20.5.410 and Deadline 10.4 btw.

Thanks in advance!
Edited by lavrenovlad - Dec. 7, 2024 13:08:36

Attachments:
localscheduler.png (140.2 KB)
deadlinescheduler.png (140.1 KB)

Member
85 posts
Joined: Nov. 2017
You can try running the Axiom sim as a single work item, so it goes to a single machine on the farm. You can use the Frames per Batch parameter on the ROP Fetch for that.
Member
10 posts
Joined: Aug. 2017
HristoVelev
You can try running the Axiom sim as a single work item, so it goes to a single machine on the farm. You can use the Frames per Batch parameter on the ROP Fetch for that.

Yeah, I know that; it works pretty well for sims. I was just testing the general speed of both schedulers, and asking whether that difference is normal or whether it's something I set up incorrectly on my side. My Deadline is set up locally, with no network delays or anything, so I'd assume it would be fast.
Edited by lavrenovlad - Dec. 9, 2024 07:18:39
Member
85 posts
Joined: Nov. 2017
Each task boots up a new Houdini process, so for short tasks the overhead is significant.
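A back-of-the-envelope model makes the effect visible. All numbers here are hypothetical (per-task startup cost and worker counts are assumptions, not measurements):

```python
import math

def wall_time(num_tasks, task_secs, startup_secs, workers):
    """Rough wall-clock estimate: tasks run in waves across the
    available workers, and every task pays the process startup cost."""
    waves = math.ceil(num_tasks / workers)
    return waves * (startup_secs + task_secs)

# Hypothetical: 20 short tasks, 1 s of real work each.
print(wall_time(20, 1.0, 0.5, 16))   # many local slots, cheap startup: 3.0
print(wall_time(20, 1.0, 15.0, 2))   # 2 farm workers, cold Houdini boot: 160.0
```

With short tasks, the startup term dominates: the actual work is 20 seconds in both cases, but the farm-style scenario spends 150 of its 160 seconds booting processes.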
Staff
1282 posts
Joined: July 2005
lavrenovlad
Hey @rvinluan,

Could I ask you a question regarding Deadline Scheduler speeds compared to the Local Scheduler, since you seem to be the right person to ask this type of question?

I am comparing a very fast Axiom sim that takes 5 seconds to simulate 240 frames. With the Local Scheduler via TOPs the sim takes around 10-20 seconds, while the same sim takes about 1.5 minutes via the Deadline Scheduler (1 minute and 40 seconds, to be precise). The difference is that the Local Scheduler prepares 20-30 tasks on the fly (at least it looks that way in the UI), while the Deadline Scheduler always has only 1-2 items in flight. I deliberately don't use "Cook Frames As Single Work Item" for the sim here, as an example of the speed difference between the schedulers. Is it supposed to work like this?
I am on H20.5.410 and Deadline 10.4 btw.


Thanks in advance!

As @HristoVelev mentioned, each task boots up a Houdini process, which adds overhead to the overall time and can be relatively significant for short tasks. You can batch tasks/frames together or use PDG Services (https://www.sidefx.com/docs/houdini/tops/services.html) to help reduce the overhead attributed to starting up processes.

In general, I would expect Deadline scheduling to be slower than Local scheduling. There is overhead with submitting jobs, waiting for Deadline to provision and assign worker nodes to tasks, and then waiting for the workers to pick up the tasks and execute the task commands. I can't really say how much slower; it varies, but it's definitely slower.

Judging by your attached screenshots, it looks like you may only have 1-2 Deadline workers on the farm, compared to many "slots" available when performing local scheduling. Note that the number of concurrent tasks is determined by the Total Slots parameter on the Local Scheduler TOP node (https://www.sidefx.com/docs/houdini/nodes/top/localscheduler.html#maxprocsmenu) for local scheduling, and by the number of available Deadline workers for Deadline scheduling. There is a Concurrent Tasks parameter on the Deadline Scheduler TOP node (https://www.sidefx.com/docs/houdini/nodes/top/deadlinescheduler.html#deadline_concurrenttasks) that you can set to control or increase the number of tasks running concurrently on your workers, but it's currently broken (I'm working on a fix).

Cheers,
Rob
Staff
1282 posts
Joined: July 2005
While I'm on here, I'll provide an update to the Deadline scheduling changes I mentioned earlier in this forum thread. I've added a new toggle parameter to the Deadline Scheduler TOP node to control whether you want pre-Houdini 20.5 scheduling of one-job-of-many-tasks or the new Houdini 20.5 scheduling of one-batch-of-many-jobs. The changes are currently in an in-house development build and are undergoing testing. I'm hoping to roll the changes into a Houdini 20.5 build very soon.

With pre-Houdini 20.5 scheduling, the Concurrent Tasks parameter will work once again.

Cheers,
Rob
Member
85 posts
Joined: Nov. 2017
Great, looking forward to that build!