Houdini Engine 7.0
Houdini Engine looks for the following environment variables when initializing:
HAPI_LICENSE_MODE:
default
    Houdini Engine looks first for a Houdini Engine license and, if one is not found, tries to acquire an interactive Houdini or Houdini FX license. This means that if licenses are shared on a common license server, a single individual (the same machine) can acquire both a Houdini FX license and a Houdini Engine license, even though Houdini Engine should technically be able to use that same Houdini FX license.
engine_only
    Houdini Engine only accepts Houdini Engine licenses and fails the license check if only interactive Houdini or Houdini FX licenses are found.
interactive_only
    Houdini Engine only accepts interactive Houdini or Houdini FX licenses and fails the license check if only Houdini Engine licenses are found.
houdini_escape_only
    Houdini Engine only accepts Houdini Escape licenses and fails the license check even if other licenses, like Houdini FX or Houdini Engine, are found.
houdini_fx_only
    Houdini Engine only accepts Houdini FX licenses and fails the license check even if other licenses, like Houdini Escape or Houdini Engine, are found.
HFS:
(Houdini File System) Set this to the path of the Houdini install folder. This variable is used to locate the HAPI dlls when calling HAPI_Initialize() or HAPI_CreateInProcessSession().
HAPI_CLIENT_NAME:
Set to the name of the client that invoked Houdini Engine: maya, 3dsmax, unreal, or unity. Not set in Houdini itself. Your HDA can use this variable to check whether it has been loaded by one of these clients.
When Houdini Engine wants to return a string to the host application, it does not do so directly. Instead, it gives the calling function a HAPI_StringHandle to the string it wants to return. The host application can then call HAPI_GetStringBufLength() to get the size of the buffer required to hold the string. After it has allocated a buffer of the appropriate size, the string can be retrieved with a call to HAPI_GetString().
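This two-call pattern can be wrapped in a small helper. The sketch below is illustrative, not SideFX's code: the HAPI typedefs and the two HAPI_* functions are stand-in mocks (with signatures modeled on HAPI.h) so the example compiles on its own, and get_string is a hypothetical helper name.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Stand-in declarations and mock bodies so this sketch compiles without
   HAPI.h. In a real integration, the typedefs and the two HAPI_* calls
   come from the Houdini Engine headers and libHAPI. */
typedef int HAPI_Result;
typedef int HAPI_StringHandle;
typedef struct HAPI_Session HAPI_Session;
#define HAPI_RESULT_SUCCESS 0

static const char* mock_string_table[] = { "", "hello from Houdini" };

static HAPI_Result HAPI_GetStringBufLength(const HAPI_Session* session,
                                           HAPI_StringHandle handle,
                                           int* buffer_length)
{
    (void)session;
    *buffer_length = (int)strlen(mock_string_table[handle]) + 1; /* + NUL */
    return HAPI_RESULT_SUCCESS;
}

static HAPI_Result HAPI_GetString(const HAPI_Session* session,
                                  HAPI_StringHandle handle,
                                  char* string_value, int length)
{
    (void)session;
    snprintf(string_value, (size_t)length, "%s", mock_string_table[handle]);
    return HAPI_RESULT_SUCCESS;
}

/* The two-call pattern: query the buffer size, allocate, then fetch.
   The caller owns (and must free) the returned buffer. */
char* get_string(const HAPI_Session* session, HAPI_StringHandle handle)
{
    int buffer_length = 0;
    if (HAPI_GetStringBufLength(session, handle, &buffer_length)
            != HAPI_RESULT_SUCCESS || buffer_length <= 0)
        return NULL;

    char* buffer = (char*)malloc((size_t)buffer_length);
    if (!buffer || HAPI_GetString(session, handle, buffer, buffer_length)
            != HAPI_RESULT_SUCCESS)
    {
        free(buffer);
        return NULL;
    }
    return buffer;
}
```

The same allocate-then-fetch shape recurs throughout HAPI, so a helper like this tends to get reused for node names, parameter values, and status strings alike.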
Every function in Houdini Engine returns a value of HAPI_Result. Any return value other than HAPI_RESULT_SUCCESS indicates some form of failure. You can get the specifics of the problem by first calling HAPI_GetStatusStringBufLength() and then HAPI_GetStatusString().
The status_type argument for both functions should be set to HAPI_STATUS_CALL_RESULT. We can then extract a string with these functions that gives us further information about the problem.
The sample function below will retrieve the status of most calls into the API, with the exception of the asynchronous calls like HAPI_CookNode() and HAPI_CreateNode():
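Such a function might look like the following sketch, assuming the HAPI_GetStatusStringBufLength()/HAPI_GetStatusString() signatures from HAPI.h. The declarations and bodies below are stand-in mocks so the example is self-contained, and the error text is a fabricated demo value.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Stand-ins for the HAPI.h declarations, mocked so the sketch compiles
   on its own; the signatures are modeled on the real headers. */
typedef int HAPI_Result;
typedef struct HAPI_Session HAPI_Session;
typedef enum { HAPI_STATUS_CALL_RESULT, HAPI_STATUS_COOK_RESULT,
               HAPI_STATUS_COOK_STATE } HAPI_StatusType;
typedef enum { HAPI_STATUSVERBOSITY_0, HAPI_STATUSVERBOSITY_1,
               HAPI_STATUSVERBOSITY_2 } HAPI_StatusVerbosity;
#define HAPI_RESULT_SUCCESS 0

static const char* mock_status = "Invalid argument given: node id.";

static HAPI_Result HAPI_GetStatusStringBufLength(
    const HAPI_Session* session, HAPI_StatusType status_type,
    HAPI_StatusVerbosity verbosity, int* buffer_length)
{
    (void)session; (void)status_type; (void)verbosity;
    *buffer_length = (int)strlen(mock_status) + 1;
    return HAPI_RESULT_SUCCESS;
}

static HAPI_Result HAPI_GetStatusString(
    const HAPI_Session* session, HAPI_StatusType status_type,
    char* string_value, int length)
{
    (void)session; (void)status_type;
    snprintf(string_value, (size_t)length, "%s", mock_status);
    return HAPI_RESULT_SUCCESS;
}

/* Retrieve the status string for the last synchronous API call.
   The caller owns the returned buffer. */
char* get_last_call_error(const HAPI_Session* session)
{
    int buffer_length = 0;
    if (HAPI_GetStatusStringBufLength(session, HAPI_STATUS_CALL_RESULT,
                                      HAPI_STATUSVERBOSITY_0, &buffer_length)
            != HAPI_RESULT_SUCCESS || buffer_length <= 0)
        return NULL;

    char* buffer = (char*)malloc((size_t)buffer_length);
    if (!buffer || HAPI_GetStatusString(session, HAPI_STATUS_CALL_RESULT,
                                        buffer, buffer_length)
            != HAPI_RESULT_SUCCESS)
    {
        free(buffer);
        return NULL;
    }
    return buffer;
}
```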
It is advisable to wrap the function call in a macro such as the following:
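One possible form of such a macro is sketched below. ENSURE_SUCCESS is a hypothetical name, and the error-retrieval helper (which in real code would use the HAPI_GetStatusStringBufLength()/HAPI_GetStatusString() pattern described above) is mocked so the sketch stands alone.

```c
#include <stdio.h>
#include <stdlib.h>

/* Stand-ins for HAPI.h (illustration only). */
typedef int HAPI_Result;
#define HAPI_RESULT_SUCCESS 0

/* In real code this would pull the detailed status string via
   HAPI_GetStatusStringBufLength()/HAPI_GetStatusString() with
   HAPI_STATUS_CALL_RESULT; mocked here so the sketch stands alone. */
static const char* get_last_call_error(void) { return "mock error"; }

/* Wrap every synchronous HAPI call so failures surface immediately,
   with the detailed status string and call site attached. */
#define ENSURE_SUCCESS(result)                                          \
    do {                                                                \
        if ((result) != HAPI_RESULT_SUCCESS)                            \
        {                                                               \
            fprintf(stderr, "HAPI failure at %s:%d: %s\n",              \
                    __FILE__, __LINE__, get_last_call_error());         \
            exit(EXIT_FAILURE);                                         \
        }                                                               \
    } while (0)
```

A call site then reads like `ENSURE_SUCCESS(HAPI_CookNode(session, node_id, NULL));`, keeping error handling out of the main flow while still reporting every failure.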
Cooks and other asynchronous calls use a different set of results, separate from the regular API call results. To get the results of an asynchronous function you need to call HAPI_GetStatusStringBufLength() with status_type set to HAPI_STATUS_COOK_RESULT and an appropriate HAPI_StatusVerbosity.
The sample function below will retrieve the status of an asynchronous call:
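A sketch of such a function follows, mirroring the synchronous version but using HAPI_STATUS_COOK_RESULT and a caller-supplied verbosity. The declarations and bodies are stand-in mocks (signatures modeled on HAPI.h), and the cook-result text is a fabricated demo value.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Stand-ins for HAPI.h with mock bodies; signatures are modeled on the
   real headers, and the "cook result" text is a mock value. */
typedef int HAPI_Result;
typedef struct HAPI_Session HAPI_Session;
typedef enum { HAPI_STATUS_CALL_RESULT, HAPI_STATUS_COOK_RESULT,
               HAPI_STATUS_COOK_STATE } HAPI_StatusType;
typedef enum { HAPI_STATUSVERBOSITY_0, HAPI_STATUSVERBOSITY_1,
               HAPI_STATUSVERBOSITY_2 } HAPI_StatusVerbosity;
#define HAPI_RESULT_SUCCESS 0

static const char* mock_cook_result = "Cook completed with warnings.";

static HAPI_Result HAPI_GetStatusStringBufLength(
    const HAPI_Session* session, HAPI_StatusType status_type,
    HAPI_StatusVerbosity verbosity, int* buffer_length)
{
    (void)session; (void)status_type; (void)verbosity;
    *buffer_length = (int)strlen(mock_cook_result) + 1;
    return HAPI_RESULT_SUCCESS;
}

static HAPI_Result HAPI_GetStatusString(
    const HAPI_Session* session, HAPI_StatusType status_type,
    char* string_value, int length)
{
    (void)session; (void)status_type;
    snprintf(string_value, (size_t)length, "%s", mock_cook_result);
    return HAPI_RESULT_SUCCESS;
}

/* Retrieve the result of the last asynchronous cook at the requested
   verbosity. The caller owns the returned buffer. */
char* get_cook_result(const HAPI_Session* session,
                      HAPI_StatusVerbosity verbosity)
{
    int buffer_length = 0;
    if (HAPI_GetStatusStringBufLength(session, HAPI_STATUS_COOK_RESULT,
                                      verbosity, &buffer_length)
            != HAPI_RESULT_SUCCESS || buffer_length <= 0)
        return NULL;

    char* buffer = (char*)malloc((size_t)buffer_length);
    if (!buffer || HAPI_GetStatusString(session, HAPI_STATUS_COOK_RESULT,
                                        buffer, buffer_length)
            != HAPI_RESULT_SUCCESS)
    {
        free(buffer);
        return NULL;
    }
    return buffer;
}
```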
Note that in threaded mode it will NOT be sufficient to check the return status of asynchronous functions like HAPI_CookNode() and HAPI_CreateNode(), as these will return immediately with HAPI_RESULT_SUCCESS. Instead, you must rely on the result of HAPI_GetStatus(), with HAPI_STATUS_COOK_STATE as the status_type. See the section on Cooking.
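In threaded mode, waiting for a cook therefore becomes a polling loop over HAPI_GetStatus(). The sketch below mocks that call; the state values and the HAPI_STATE_MAX_READY_STATE convention (states at or below it are "ready") are modeled on HAPI_Common.h and should be verified against your installed headers.

```c
/* Stand-ins for HAPI.h, mocked so the sketch runs on its own. The real
   enum has more states; HAPI_STATE_MAX_READY_STATE marks the last of
   the "ready" states, as in the actual headers. */
typedef int HAPI_Result;
typedef struct HAPI_Session HAPI_Session;
typedef enum { HAPI_STATUS_CALL_RESULT, HAPI_STATUS_COOK_RESULT,
               HAPI_STATUS_COOK_STATE } HAPI_StatusType;
enum { HAPI_STATE_READY = 0,
       HAPI_STATE_READY_WITH_FATAL_ERRORS = 1,
       HAPI_STATE_READY_WITH_COOK_ERRORS = 2,
       HAPI_STATE_MAX_READY_STATE = HAPI_STATE_READY_WITH_COOK_ERRORS };
#define HAPI_RESULT_SUCCESS 0

/* Mock: pretend the cook needs three polls before it is ready. */
static int mock_polls_remaining = 3;

static HAPI_Result HAPI_GetStatus(const HAPI_Session* session,
                                  HAPI_StatusType status_type, int* status)
{
    (void)session; (void)status_type;
    *status = (mock_polls_remaining-- > 0)
        ? HAPI_STATE_MAX_READY_STATE + 1   /* still cooking */
        : HAPI_STATE_READY;
    return HAPI_RESULT_SUCCESS;
}

/* Block until the asynchronous cook reaches a ready state, then return
   that state so the caller can distinguish clean cooks from cook errors. */
int wait_for_cook(const HAPI_Session* session)
{
    int status = HAPI_STATE_MAX_READY_STATE + 1;
    while (status > HAPI_STATE_MAX_READY_STATE)
    {
        if (HAPI_GetStatus(session, HAPI_STATUS_COOK_STATE, &status)
                != HAPI_RESULT_SUCCESS)
            break;
        /* A real loop would sleep briefly or update a progress bar here. */
    }
    return status;
}
```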
Nodes will be parsed to retrieve node errors, warnings and messages ONLY during the call to HAPI_GetStatusStringBufLength() and ONLY for the previously cooked node(s). It's up to you to decide when to ask for cook results and with what verbosity. The general rule of thumb is to call it when the cook has failed (HAPI_STATE_READY_WITH_COOK_ERRORS is returned as the READY cook state) or when no or invalid geometry has been retrieved. Note that you can get output from Python code within your node using this mechanism. If the Python code raises warnings or errors, the text of the warning or error can be retrieved this way. To raise errors, you can simply throw or call hou.NodeError(), but to raise warnings, you must call hou.NodeWarning().
The first thing that needs to be done when using HAPI is to call HAPI_Initialize(). This function resides within libHAPI.dll (as do all other HAPI functions). Along with the rest of the Houdini dlls, libHAPI.dll is located inside:
This bin folder needs to be on your PATH in order for all the dlls to load properly. Your application only needs to link against libHAPI.dll. For the Unity implementation, we figure out where Houdini was installed using version information and the registry, and we augment the process environment just before making the call to:
You can choose any method that is appropriate for your application.
This is the first time you need to decide what type of session you wish to use. If you just want the default in-process session, pass NULL for the session argument. Otherwise, you must create a session first using one of the session creation APIs. See Sessions for more on sessions.
The cook_options argument sets the global HAPI_CookOptions for all subsequent cooks. You can override these options for individual cooks when cooking explicitly, but some cooks rely on the global options. For example, if you create a node with the option to automatically cook it right after instantiation, that cook will use these global options. One option of particular importance is HAPI_CookOptions::maxVerticesPerPrimitive. For example, if this is set to 3, all output will be triangles. When set to 4, the output can be both triangles and quads. To comply with the setting, HAPI will convert more complex geometry into less complex forms that satisfy the constraint, through convexing. To avoid convexing entirely, set HAPI_CookOptions::maxVerticesPerPrimitive to -1.
The use_cooking_thread argument indicates whether HAPI should use a separate thread for cooking nodes. This is generally a good idea, as it frees up the calling thread for other tasks, including displaying the current status of the evaluation to users.
Assuming you choose to use a separate cooking thread, cooking_thread_stack_size lets you choose the stack size of the evaluation thread in bytes. Generally, a larger value is advised, as some nodes are quite complex and require deep stack space to evaluate properly. For reference, Houdini internally reserves 64 MB of stack space. Smaller values generally work, but if pushed to extremes (1-2 MB) stack overflow issues can easily result. A value of -1 for this argument uses the Houdini default stack size. This is the recommended setting if your application can afford the space.
If you want absolute control over the environment in which HAPI runs, you can override some or all of it using the houdini_environment_files argument. This is a list of paths, separated by ";" on Windows and ":" on Linux and Mac, to .env files that follow the same syntax as the houdini.env file in Houdini's user prefs folder. These will be applied after the default houdini.env file and will overwrite the process' environment variable values. You can use this to enforce a stricter environment when running Houdini Engine. For more info, see: http://www.sidefx.com/docs/houdini20.5/basics/config_env
Some assets require sub-assets to work. The otl_search_path argument refers to the directory where additional OTL files are searched for. For our Unity plugin, this is set to something similar to:
The dso_search_path argument sets the HOUDINI_DSO_PATH environment variable. It is there so Houdini Engine can take advantage of custom plugins that you may write. If you do not need this feature, simply pass NULL for this argument. You can read more about this variable here: http://www.sidefx.com/docs/hdk20.5/_h_d_k__intro__creating_plugins.html
The image_dso_search_path and audio_dso_search_path parameters are similar to dso_search_path, but they set the dedicated HOUDINI_IMAGE_DSO_PATH and HOUDINI_AUDIO_DSO_PATH environment variables, respectively. Image and audio plugins are searched for in separate locations from regular node plugins.
Once you make the call to HAPI_Initialize(), all required Houdini dlls will be loaded, and a Houdini scene will be set up under the covers. Because Houdini utilizes many dlls, and some of these dlls are public ones that your application may also be using, clashes could result. In general, this one call is probably the most difficult part of the entire integration process, as you need to identify potentially clashing dlls and resolve the conflicts.
Another note is that HAPI_Initialize() is also the point at which the pythonrc.py, 123.py[cmd], and 456.py[cmd] scripts get run. All of these scripts can live in the Houdini20.5 folder in your home directory. In there, pythonrc.py should be in python2.7libs, while the 123 and 456 scripts should be in scripts. These are scripts you can use to define your environment variables, import helper modules, write to files, among other things. For Houdini Engine, it is recommended you use pythonrc.py. More information can be found here: http://www.sidefx.com/docs/houdini20.5/hom/independent
The mirror call to HAPI_Initialize() is HAPI_Cleanup(), which will clean up the scene underneath. Note that HAPI_Cleanup() will not release any licenses acquired by the process; you'll need to restart the process to release them.
For debugging purposes, it can be very handy to save and load hip files (hip files are Houdini scene files). You can use HAPI_SaveHIPFile() to save the current underlying Houdini scene from Houdini Engine. You can then open this scene file in an interactive session of Houdini to see what Houdini Engine created from your API calls.
For further debugging help, you can use the lock_nodes argument to lock all SOP nodes before saving the scene file. This way, when you load the scene file, you can see the exact state of each SOP at the time it was saved, instead of relying on a re-cook to accurately reproduce that state. It does, however, take considerably more space and time to lock all nodes like this.
Note that the way the loaded OTLs are tied to the saved Houdini scene differs depending on which APIs were used to load the OTLs. This is decided when loading an OTL, not when saving the scene using HAPI_SaveHIPFile(). See Asset Library Files for details on how to load asset libraries.
You can see how OTLs are referenced by the saved HIP file in Houdini by going to Windows > Operator Type Manager, selecting the Operators tab, and expanding all the way to Operator Type Libraries > Current HIP File.
You can see where a specific asset is coming from in Houdini by going to Windows > Operator Type Manager, selecting the Configuration tab, and setting the Operator Type Bar to Display Menu of All Definitions. Then, selecting any asset node will display an Asset Name and Path section above all of its parameters. If you have multiple versions of the OTL loaded at the same time (one embedded and one referenced by path, for example), you can switch between them using these controls.
You can get or set any arbitrary environment variable on the server-side process. For in-process sessions this will be the current process, otherwise this will be for the HARS process wherever that may live. You can do this via these APIs:
Here's an example of a simple set-and-get:
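The sketch below shows one way such a set-and-get could look, assuming the HAPI_SetServerEnvString()/HAPI_GetServerEnvString() calls and the usual string-handle retrieval pattern from HAPI.h. Everything is mocked with a one-slot "server environment" so the example runs standalone; set_and_get is a hypothetical helper name.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Stand-ins for HAPI.h with mock bodies backed by a one-slot "server
   environment", so the sketch runs on its own. */
typedef int HAPI_Result;
typedef int HAPI_StringHandle;
typedef struct HAPI_Session HAPI_Session;
#define HAPI_RESULT_SUCCESS 0

static char mock_env_value[256];

static HAPI_Result HAPI_SetServerEnvString(const HAPI_Session* session,
                                           const char* variable_name,
                                           const char* value)
{
    (void)session; (void)variable_name;
    snprintf(mock_env_value, sizeof mock_env_value, "%s", value);
    return HAPI_RESULT_SUCCESS;
}

static HAPI_Result HAPI_GetServerEnvString(const HAPI_Session* session,
                                           const char* variable_name,
                                           HAPI_StringHandle* value)
{
    (void)session; (void)variable_name;
    *value = 1;   /* mock handle into the string table */
    return HAPI_RESULT_SUCCESS;
}

static HAPI_Result HAPI_GetStringBufLength(const HAPI_Session* session,
                                           HAPI_StringHandle handle,
                                           int* buffer_length)
{
    (void)session; (void)handle;
    *buffer_length = (int)strlen(mock_env_value) + 1;
    return HAPI_RESULT_SUCCESS;
}

static HAPI_Result HAPI_GetString(const HAPI_Session* session,
                                  HAPI_StringHandle handle,
                                  char* string_value, int length)
{
    (void)session; (void)handle;
    snprintf(string_value, (size_t)length, "%s", mock_env_value);
    return HAPI_RESULT_SUCCESS;
}

/* Set a variable on the server process, read it back through the
   string-handle pattern, and return the value as a caller-owned string. */
char* set_and_get(const HAPI_Session* session,
                  const char* name, const char* value)
{
    HAPI_StringHandle handle;
    int buffer_length = 0;

    if (HAPI_SetServerEnvString(session, name, value) != HAPI_RESULT_SUCCESS
        || HAPI_GetServerEnvString(session, name, &handle) != HAPI_RESULT_SUCCESS
        || HAPI_GetStringBufLength(session, handle, &buffer_length)
               != HAPI_RESULT_SUCCESS)
        return NULL;

    char* buffer = (char*)malloc((size_t)buffer_length);
    if (!buffer || HAPI_GetString(session, handle, buffer, buffer_length)
            != HAPI_RESULT_SUCCESS)
    {
        free(buffer);
        return NULL;
    }
    return buffer;
}
```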
You can check for specific errors by error code across an entire node tree. These checks can be expensive, so a separate function was created, letting the caller decide when the cost is worth the benefit.
The function is HAPI_CheckForSpecificErrors(). It takes a bitfield (HAPI_ErrorCodeBits) of HAPI_ErrorCode values that specifies which errors to look for, and returns a HAPI_ErrorCodeBits bitfield that specifies which errors were found. It does this by recursively looking at each node's error messages for all the nodes under the given HAPI_NodeId.
Here's an example that checks for HAPI_ERRORCODE_ASSET_DEF_NOT_FOUND warnings:
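A sketch of such a check follows. HAPI_CheckForSpecificErrors() is mocked below (with the parameter order modeled on HAPI.h), and the bit value chosen for HAPI_ERRORCODE_ASSET_DEF_NOT_FOUND is illustrative; use the enum from your installed headers.

```c
/* Stand-ins for HAPI.h with a mock body; the parameter order is
   modeled on the real header, and the error-code bit is illustrative. */
typedef int HAPI_Result;
typedef int HAPI_NodeId;
typedef int HAPI_ErrorCodeBits;
typedef struct HAPI_Session HAPI_Session;
#define HAPI_RESULT_SUCCESS 0
#define HAPI_ERRORCODE_ASSET_DEF_NOT_FOUND (1 << 0)

static HAPI_Result HAPI_CheckForSpecificErrors(
    const HAPI_Session* session, HAPI_NodeId node_id,
    HAPI_ErrorCodeBits errors_to_look_for,
    HAPI_ErrorCodeBits* errors_found)
{
    (void)session; (void)node_id;
    /* Mock: pretend every error we were asked about was found. */
    *errors_found = errors_to_look_for;
    return HAPI_RESULT_SUCCESS;
}

/* Returns 1 if any node under node_id reports a missing asset
   definition, 0 otherwise (or on failure of the check itself). */
int has_missing_asset_definition(const HAPI_Session* session,
                                 HAPI_NodeId node_id)
{
    HAPI_ErrorCodeBits errors_found = 0;
    if (HAPI_CheckForSpecificErrors(session, node_id,
                                    HAPI_ERRORCODE_ASSET_DEF_NOT_FOUND,
                                    &errors_found) != HAPI_RESULT_SUCCESS)
        return 0;
    return (errors_found & HAPI_ERRORCODE_ASSET_DEF_NOT_FOUND) != 0;
}
```

Because the argument is a bitfield, several error codes can be OR'd together and checked in a single recursive pass over the node tree.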