Adam Ferestad
Adam F
About Me
EXPERTISE
Technical Director
INDUSTRY
Film/TV
Connect
LOCATION
United States
Houdini Skills
Availability
Not Specified
Recent Forum Posts
[Problem] Partitions being removed by switch TOP? March 26, 2025, 9:56 p.m.
I am trying to build a tool that uses partitions to group work items, but the partitioning is lost the instant the work items move through a Switch TOP (and, I suspect, many other nodes). I am using the switch to give users multiple options for how the partitioning is done, but losing the partitions at the switch means I struggle to get anything working correctly downstream.
I did a little experimenting to see if a convoluted scheme would work: partition the work items, add an attribute to them storing the partition id, expand the partitions before the switch, and then repartition them further down the line. Even that seems to break it, though.
How does one work with partitions after the work items are partitioned without them immediately breaking?
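To make the attribute-based scheme above concrete, this is the rough shape I have in mind: write the partition index into an integer attribute before the switch, then rebuild the partitions after the switch with a Partition by Attribute TOP or a Python Partitioner. A sketch of the Python Partitioner body follows; the attribute name partition_id is just for illustration, and partition_holder and work_items are the variables the node's onPartition snippet provides:

# Sketch: body of a Python Partitioner TOP placed after the switch.
# Assumes every incoming work item carries an integer attribute
# "partition_id" that was written somewhere before the switch.
buckets = {}
for work_item in work_items:
    pid = work_item.attribValue("partition_id")
    buckets.setdefault(pid, []).append(work_item)

# Rebuild one partition per stored id so the grouping survives the switch.
for index, pid in enumerate(sorted(buckets)):
    for work_item in buckets[pid]:
        partition_holder.addItemToPartition(work_item, index)

A Partition by Attribute TOP pointed at the same attribute should do the same job without the Python, assuming the attribute itself survives the switch.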
Using hou library in VSCode? March 25, 2024, 5:01 p.m.
I am trying to use VSCode for some Houdini tool development and will need access to a bunch of hou modules, but I cannot seem to get it working right. I found this video [youtu.be], which seems to address the issue, but even after following it to a T I am still getting a reportMissingImports error on the import hou line. What am I doing wrong here?
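For reference, the runtime side of what I am after is roughly the standalone-import pattern, sketched below; the install path and Python version are placeholders for my machine, not something from the video:

# Sketch: make "import hou" resolvable outside of Houdini.
# The path is a placeholder -- point it at $HFS/houdini/pythonX.Ylibs for
# your install, and run a Python that matches Houdini's bundled version.
import sys

sys.path.append("/opt/hfs20.5/houdini/python3.11libs")

import hou
print(hou.applicationVersionString())

As far as I can tell, the reportMissingImports warning itself is Pylance not seeing that same pythonX.Ylibs directory, so adding it to python.analysis.extraPaths in the VSCode settings should be the piece that silences the editor; the sys.path line only matters at runtime.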


Determining where a cook command comes from? May 19, 2023, 1:23 p.m.
I am trying to find a way to determine whether a node cook is being initiated on the node itself or requested from a downstream node. I have a feeling this has something to do with event handlers, but I can't seem to figure out how to get them to trigger properly. I am trying to either:
- Flip a switch TOP based on the cooking request source (a rough sketch of the manual version follows this list)
- Adjust how a Python Processor works based on the result request source
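To illustrate the first option, the manual, non-automated version of the switch flip would be something like the pre-flight snippet below, run from a shelf tool or script before cooking. The node path, parameter name, and the cache check are all placeholders, and it does not actually detect where the cook request came from, which is the part I cannot figure out:

# Sketch: flip a Switch TOP by hand before cooking.
# Node path, parameter name, and the cache-directory check are placeholders;
# this does not detect the cook-request source.
import os
import hou

hip_dir = os.path.dirname(hou.hipFile.path())
json_dir = os.path.join(hip_dir, "workItem_JSON")

switch = hou.node("/obj/topnet1/switch1")   # placeholder path
use_cache = os.path.isdir(json_dir) and len(os.listdir(json_dir)) > 0

# Assumed wiring: input 1 = load-from-cache branch, input 0 = generate branch.
switch.parm("input").set(1 if use_cache else 0)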
I feel like there should be a way to glean the information, but just can't seem to find it. I do have something in place if I cannot automate this, but I REALLY want to automate this so I can have it in my toolbox for the future.
Here is the onGenerate() that I am using in a python processor currently:
import json, hou, os

def writeWorkItemJSONs(JSONPath, context):
    # Serialize every upstream work item to a JSON file and create a new
    # in-process work item parented to it (this keeps the dependency chain).
    try:
        os.mkdir(JSONPath)
    except Exception:
        pass
    for upstream_item in upstream_items:
        jsonWorkItem = context.serializeWorkItemToJSON(upstream_item)
        with open(f'{JSONPath}/WorkItem.{upstream_item.index}.json', "w") as f:
            f.write(jsonWorkItem)
        new_item = item_holder.addWorkItem(parent=upstream_item, inProcess=True)

def loadWorkItemJSONs(JSONPath):
    # Rebuild work items from the cached JSON files; these items have no
    # parent, so the upstream dependencies are lost.
    for file in os.listdir(JSONPath):
        new_item = item_holder.addWorkItemFromJSONFile(f'{JSONPath}/{file}')

context = self.context
hipPath = "/".join(hou.hipFile.path().split('/')[:-1])
JSONPath = hipPath + '/workItem_JSON'
try:
    # Load from the cache when JSON files already exist, otherwise write them.
    if len(os.listdir(JSONPath)) > 0:
        loadWorkItemJSONs(JSONPath)
    else:
        writeWorkItemJSONs(JSONPath, context)
except Exception:
    # e.g. the cache directory does not exist yet; fall back to a fresh write
    writeWorkItemJSONs(JSONPath, context)
This works about halfway to what I need. When I cook without the outputs present or without the JSON files for the work items, it runs writeWorkItemJSONs() and the work item dependencies go all the way up the chain. If the JSON files and the outputs are present, the work items are created from the JSONs and lack the dependencies, but the input node still cooks. I need a way to prevent the nodes above this processor from cooking when loadWorkItemJSONs() runs. I saw that the processor is supposed to have an onPreCook() method I can invoke (here [www.sidefx.com]), but the documentation is a little light on details.
I also found that there is a filter on the pdg.graphContext that is supposed to affect the graph cooking, but I'm trying to figure out how to use it and where it would be best to do so. I have some ideas, and will report back if anything comes of them.