Adam Ferestad


About Me

Expertise
Technical Director
Industry:
Film/TV

Connect

Location
United States
Website

Houdini Engine

Availability

Not Specified

Recent Forum Posts

Using hou library in VSCode? March 25, 2024 17:01

I am trying to use VSCode to do some Houdini tool development, and I am going to need access to a bunch of hou modules, but I cannot seem to get it working right. I found this video [youtu.be] which seems to address the issue, but I have followed it to a T and I am still getting a reportMissingImports error on the import hou line. What am I doing wrong here?
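For reference, the usual fix for Pylance's reportMissingImports is to point the analyzer at Houdini's bundled Python libraries, where the hou module lives, via settings.json. A minimal sketch; the install path and the python3.10libs folder name are assumptions that vary by Houdini version and platform (substitute your own $HFS):

```json
{
    // Assumed install location; adjust for your Houdini version/platform.
    "python.analysis.extraPaths": [
        "C:/Program Files/Side Effects Software/Houdini 20.0.590/houdini/python3.10libs"
    ]
}
```

This only helps the editor's static analysis resolve the import; actually executing hou code still requires running inside Houdini (or Houdini's own Python interpreter).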


Determining where a cook command comes from? May 19, 2023 13:23

I am trying to find a way to determine whether a node cook is being initiated on the node itself versus requested from a downstream node. I have a feeling this involves event handlers, but I can't seem to figure out how to get them to trigger properly. I am trying to either:
  1. Flip a switch TOP based on the cooking request source
  2. Adjust how a Python Processor works based on the result request source

I feel like there should be a way to glean the information, but just can't seem to find it. I do have something in place if I cannot automate this, but I REALLY want to automate this so I can have it in my toolbox for the future.

Here is the onGenerate() callback that I am currently using in a Python Processor:
import json, hou, os

def writeWorkItemJSONs(JSONPath, context):
    # Create the output directory if it does not already exist.
    try:
        os.mkdir(JSONPath)
    except OSError:
        pass

    for upstream_item in upstream_items:
        # Serialize each upstream work item to its own JSON file.
        jsonWorkItem = context.serializeWorkItemToJSON(upstream_item)
        with open(f'{JSONPath}/WorkItem.{upstream_item.index}.json', "w") as f:
            f.write(jsonWorkItem)
        new_item = item_holder.addWorkItem(parent=upstream_item, inProcess=True)

def loadWorkItemJSONs(JSONPath):
    # Rebuild work items from JSON files written on a previous cook.
    for file in os.listdir(JSONPath):
        new_item = item_holder.addWorkItemFromJSONFile(f'{JSONPath}/{file}')

context = self.context
hipPath = "/".join(hou.hipFile.path().split('/')[:-1])
JSONPath = hipPath + '/workItem_JSON'

try:
    if len(os.listdir(JSONPath)) > 0:
        loadWorkItemJSONs(JSONPath)
    else:
        writeWorkItemJSONs(JSONPath, context)
except Exception as e:
    # os.listdir raises if the directory does not exist yet.
    writeWorkItemJSONs(JSONPath, context)
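As an aside on the path handling above: the manual split/join used to derive the hip directory can also be written with os.path. A minimal standalone sketch; the path below is a made-up placeholder, since hou.hipFile is only available inside a Houdini session:

```python
import os

# Placeholder for hou.hipFile.path(); hou only imports inside Houdini.
hip_file = "/projects/shots/shot_010/scene.hip"

# Equivalent to "/".join(hip_file.split('/')[:-1]) from the snippet.
hip_path = os.path.dirname(hip_file)
json_path = os.path.join(hip_path, "workItem_JSON")

print(json_path)  # /projects/shots/shot_010/workItem_JSON (POSIX paths)
```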

This works about halfway to what I need. When I cook without the outputs present or without the JSON files for the work items, it processes writeWorkItemJSONs and has the work item dependencies going all the way up the chain. If the JSON files are present and the outputs are there, the work items are created from the JSONs and lack the dependencies, but the input node is still cooked. I need a way to prevent the nodes above this processor from cooking in the loadWorkItemJSONs() function. I saw that the processor is supposed to have an onPreCook() method I can invoke (here [www.sidefx.com]), but the documentation is a little light on details.

I also found that there is a filter on the pdg.graphContext that is supposed to affect the graph cooking, but I'm trying to figure out how to use it and where it would be best to do so. I have some ideas, and will report back if anything comes of them.

Is there a way to stop VDB Advect pruning activated regions April 19, 2023 13:35

Soothsayer
You can change the VDB vector type with a Primitive Properties node.

THANK YOU! That took care of it; I can now use the VDB Combine node to force the maximum size of the VDB grid. I think I can work around it, but I would like to be able to eliminate that maximum entirely. Any ideas there?