Cook a single work item via Python

Is there a way via Python to cook only a single work item on a node in a PDG graph? I'm working on improving how PDG is handled on the farm and struggling to find the appropriate method.
TopNode [www.sidefx.com] only has cookWorkItems(), which doesn't take an index.
blented
- Member
- 61 posts
- Joined: Oct. 2013
Quite a bit of digging to make this happen; figured I'd leave the answer here for posterity, per usual.
First, you'll need to use a Python Scheduler instead of the regular local scheduler to cook things.
The defaults are all fine; just add these lines right after the imports in the onSchedule snippet on the Scheduling tab:
# auto-succeed if this isn't the work item we're meant to be cooking
if os.environ['PDG_ACTIVE_WORK_ITEM'] != str(work_item.id):
    return pdg.scheduleResult.CookSucceeded
This makes PDG auto-succeed any work item that isn't the one we're meant to be cooking. Without it, you'd end up re-cooking all the upstream items on the farm, even if they've already been cooked by prior dependent jobs.
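For reference, here's roughly where that check sits inside the full onSchedule body. Everything below the check is just the stock Python Scheduler template trimmed down; helper names like expandCommandTokens, workItemResultServerAddr, workingDir, tempDir and scriptDir come from that template and may differ slightly between Houdini versions, so treat this as a sketch rather than a drop-in:

import os
import subprocess
# (pdg and work_item are already available in the scheduler's callback context)

# auto-succeed if this isn't the work item we're meant to be cooking
if os.environ['PDG_ACTIVE_WORK_ITEM'] != str(work_item.id):
    return pdg.scheduleResult.CookSucceeded

# below is (roughly) the stock template: serialize the work item,
# expand the __PDG_* tokens in its command, and run it in a shell
self.createJobDirsAndSerializeWorkItems(work_item)
item_command = self.expandCommandTokens(work_item.command, work_item)

job_env = os.environ.copy()
job_env['PDG_ITEM_NAME'] = str(work_item.name)
job_env['PDG_DIR'] = str(self.workingDir(False))
job_env['PDG_TEMP'] = str(self.tempDir(False))
job_env['PDG_SCRIPTDIR'] = str(self.scriptDir(False))
job_env['PDG_RESULT_SERVER'] = str(self.workItemResultServerAddr())

returncode = subprocess.call(item_command, shell=True, env=job_env)
if returncode == 0:
    return pdg.scheduleResult.CookSucceeded
return pdg.scheduleResult.CookFailed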
Be sure to update the default scheduler on your topnet.
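If you'd rather set that via Python as well, something like this should do it (the parm name 'topscheduler' for the topnet's default TOP scheduler and the node paths here are assumptions, so double-check them against your scene):

import hou

# point the topnet's default scheduler at the python scheduler
# NOTE: 'topscheduler' and the node paths are assumed placeholders
topnet = hou.node('/obj/topnet1')
topnet.parm('topscheduler').set('pythonscheduler1')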
Next, to actually cook a single work item, you'll want this bit of code. The comments explain it further, but essentially graphContext.cookItems was the only function I could find that actually takes individual items to cook, so we use that alongside setting PDG_ACTIVE_WORK_ITEM in the environment so that everything else auto-succeeds.
import hou
import os

def cookWorkItem(node, index, block=True):
    # generate static work items for this node, which will
    # generate parents as needed
    # likely that this only really works with static work items
    node.generateStaticWorkItems(block)

    # info about the work item we're cooking
    pdgNode = node.getPDGNode()
    context = pdgNode.context
    workItem = pdgNode.workItems[index]
    print('cooking:', workItem.id)

    # set the active work item as an environment variable
    os.environ['PDG_ACTIVE_WORK_ITEM'] = str(workItem.id)

    # use the context to cook this work item
    # our custom onSchedule function in the python scheduler
    # will skip anything that's not this PDG_ACTIVE_WORK_ITEM
    return context.cookItems(block, [workItem.id], pdgNode.name)
All of this lets us run PDG jobs on the farm similar to how ROP jobs would run, but with all the great features that come with PDG.
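As a usage sketch, a farm job could then call the function above from a small hython entry script, one work item per task. The script name, scene path and node path below are hypothetical, and cookWorkItem is assumed to be the function from the previous snippet, pasted into or imported by the script:

# hypothetical entry point, e.g.:
#   hython cook_single_item.py shot.hip /obj/topnet1/ropfetch1 12
import sys
import hou

# cookWorkItem() from the snippet above is assumed to be defined here

hip_file, node_path, index = sys.argv[1], sys.argv[2], int(sys.argv[3])

# load the scene and cook just the requested work item
hou.hipFile.load(hip_file)
cookWorkItem(hou.node(node_path), index)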
Grant Miller
VFX Supervisor
Ingenuity Studios