
Hair Generate 2.0 object node

Generates hair from a skin geometry and guide curves.

Since 17.0

Extracts the required skin and guide curves from a groom object such as a Guide Groom, Guide Deform or Guide Sim. These nodes can also write their groom data to files which can be imported by this node.

The generated hair can be customized using tools from the Guide Process and Guide Brushes shelves. These add SOP nodes to the editable network contained within this node.

Parameters

Groom Source

Loads the groom to generate hair for.

Note

Groom data written to and read from files is expected to be packed and named in a specific way. Groom nodes such as Guide Groom, Guide Deform or Guide Sim write data in the expected format.

These nodes use the Guide Pack SOP to create these groom packages and Guide Unpack SOP to unpack and extract bits from them.

Source Mode

Groom Object

Load the groom from a groom object.

Groom File

Load the groom from a file.

Groom Object

Load the groom data from this source object.

Groom File

Load the groom data from this file.

Group

Generate curves for this group of skin primitives.

Use Animation

Generate hair using the input groom’s animated skin and guides. When this is disabled, hair is generated in the rest position.

Material

The material to render generated hair with. This is typically a Hair Shader.

Skin VDB

Options for generating or loading a VDB representation of the skin geometry. Many of the tools typically used within the Guide Groom object rely on this VDB volume to efficiently avoid skin penetration.

VDB Source

From Skin Geometry

Generate the VDB from the specified skin geometry.

SOP Geometry

Merge the VDB from a SOP.

File

Load the VDB from a file.

Parameters for From Skin Geometry:

Voxel Size

Generate a VDB Volume of the skin geometry using this voxel size.

The volume is used by SOP nodes to push curves out of the skin geometry.

Parameters for SOP Geometry mode:

SOP Path

Merge the skin VDB from this SOP Path.

Group

The group within the SOP geometry to use.

Parameters for File mode:

File

Load the skin VDB from this file path.

Group

The group within the file to use.

Tip

If the SOP or file contains multiple volumes, use @name==surface to fetch the surface VDB only, since this is the only volume used by the grooming tools.

General

Distribution

Density

Scatter hair or guides at this density.

This parameter can be overridden using an attribute or texture. To do this, select an option from the drop-down menu next to the parameter.

Seed

The seed value used for scattering. Changing this generates a different random distribution with the same density.

Relax Iterations

The number of times to relax guide locations after scattering. Higher iterations result in a more even distribution of guides at the cost of more computation time.

Guide Interpolation

Use Guides

Blend between the shapes of nearby guide curves to determine the shape of each generated hair.

Assume Uniform Segment Count & Length

Use a faster guide interpolation algorithm, which requires that all guides have the same segment count.

Compute Weights Using Skin Coordinates

Compute guide weights using the hair’s position within its skin primitive. This is robust and accurate, but can look less natural than weight computation based on guide distance, which is used when this is disabled.

This can only be used when guides are located at each skin point.

Tip

Use the Guide Groom Object's Guide Per Point mode to create guides that work with this method.

Blending Method

Controls how guides that influence a generated curve are blended.

Linear Blend

Does a straight-forward linear blend between the guide curves.

Extrude And Blend

Extrudes the curve along each guide and then blends those extruded curves.
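For illustration, a Linear Blend of guide curves can be sketched as a weighted average of corresponding guide points. This is an assumed sketch, not the node's actual implementation; the weight normalization and point layout are chosen for clarity only.

```python
# Hypothetical sketch of linear guide blending: each generated hair point
# is a weighted average of the corresponding points on nearby guides.
# Assumes all guides share the same segment count (see Assume Uniform
# Segment Count & Length above).

def blend_guides(guides, weights):
    """guides: list of curves, each a list of (x, y, z) points.
    weights: one weight per guide; normalized here so they sum to 1."""
    total = sum(weights)
    norm = [w / total for w in weights]
    npts = len(guides[0])
    blended = []
    for i in range(npts):
        pt = [0.0, 0.0, 0.0]
        for guide, w in zip(guides, norm):
            for axis in range(3):
                pt[axis] += guide[i][axis] * w
        blended.append(tuple(pt))
    return blended

# Two straight guides leaning in opposite directions blend to a vertical hair:
g1 = [(0, 0, 0), (1, 1, 0)]
g2 = [(0, 0, 0), (-1, 1, 0)]
print(blend_guides([g1, g2], [0.5, 0.5]))  # [(0.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
```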

Guide Group

The group of guides to use.

Influence Radius

The radius within which guides have influence over generated hair.

This parameter can be overridden using an attribute or texture. To do this, select an option from the drop-down menu next to the parameter.

Influence Decay

Controls how steeply the weight of a guide drops with distance. Higher values cause guides to have less influence over distant hairs, resulting in hair more closely following guides near them. Lower values tend to result in a smoother look as the effect of all guides is averaged.

This parameter can be overridden using an attribute or texture. To do this, select an option from the drop-down menu next to the parameter.
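The exact falloff function is internal to the node; a plausible sketch of the described behavior uses a power falloff, where the weight drops from 1 at the guide to 0 at the Influence Radius and the decay controls the steepness. The power-based formula is an assumption for illustration only.

```python
# Hypothetical falloff sketch for guide influence weights (assumed
# formula, not the node's source): weight is 1 at zero distance, 0 at
# the influence radius, and Influence Decay controls the steepness.

def guide_weight(distance, influence_radius, decay):
    if distance >= influence_radius:
        return 0.0
    return (1.0 - distance / influence_radius) ** decay

# Higher decay -> distant guides lose influence faster:
print(guide_weight(0.5, 1.0, 1.0))  # 0.5
print(guide_weight(0.5, 1.0, 4.0))  # 0.0625
```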

Maximum Guide Count

The maximum number of guides to take into account. Any excess guides found within the Influence Radius are ignored.

This parameter can be overridden using an attribute or texture. To do this, select an option from the drop-down menu next to the parameter.

Max Guide Angle

Ignore guides that face away from the skin normal at the hair root by more than this angle in degrees.

This parameter can be overridden using an attribute or texture. To do this, select an option from the drop-down menu next to the parameter.

Clump Crossover

Blends the shapes of clumps of guide hair. At a value of 0, only the clump with the greatest influence over the generated hair is taken into account. Higher values cause surrounding clumps to be taken into account as well.

Tip

Guide hair clumps can be created using the Hair Clump SOP.

This parameter can be overridden using an attribute or texture. To do this, select an option from the drop-down menu next to the parameter.

Unguided Hairs

Grow Unguided Hair

Grows hair at points that are outside of the influence radius of any guides.

Use Initial Direction Attribute

Grow hair in the direction of a vector attribute on the skin geometry.

Initial Dir Attrib

The name of a vector attribute on the skin geometry used as the initial direction of unguided hair.

Segments

Generate hairs with this segment count.

Note

When there are any guides, this node generates hairs with the same segment count as those guides and this parameter is ignored.

Length

Generate unguided hair of this length.

This parameter can be overridden using an attribute or texture. To do this, select an option from the drop-down menu next to the parameter.

Minimum Length

The minimum length required to grow a hair. Any hair that would be shorter than this value is not grown at all.

Display

Display As Subdivision Curves

Subdivide curves in the viewport for a smooth appearance.

Static Generation

Perform Hair Generation and Editing at Rest

Generate hair in the rest position, without time-dependency where possible.

When this is enabled, the editable SOP network contained in this operator also operates on the hair at rest.

The hair is then deformed using the guide curves. This avoids recooking the contained groom operations as time changes. A single frame can be cached using the Rest Cache controls below.

Using this method, rendering can be performed by simply loading the rest cache and deforming it using the stored weights.

Capture

Guide Coverage

Ensure that each point is captured and deformed by at least this many guides.

Compute Radius From Guide Coverage

Compute a radius automatically, such that each point will find roughly the number of guides specified by Guide Coverage.

This makes it easier to deform hair with guides that have varying density across the skin geometry, for example when using more guides on the face of a character.

Radius

When Compute Radius From Guide Coverage is disabled, this sets an absolute radius to use.

Limit Segments Per Guide

Limit the number of capture segments per guide. This helps manage the amount of memory (or disk space) the weight information takes up.

Segments Per Guide

The maximum number of segments that each point in the first input can be captured by.

Expand Radius for Uncaptured Points

Expands the radius for points that aren’t covered by the number of guides specified by Guide Coverage.

This allows using a small number of guides (or a small radius) overall, which is necessary to preserve finely detailed guide movement, while still covering points that would otherwise not be captured by the required number of guides.

Expansion Iterations

The maximum number of times the radius is expanded. The loop stops early once each point has been captured by the number of guides specified by Guide Coverage.

Expansion Factor

The factor by which the radius is multiplied in each expansion iteration.
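The expansion loop described above can be sketched as follows. This is assumed logic inferred from the parameter descriptions, not the node's source; the guide-counting callback is a stand-in for the node's internal search.

```python
# Sketch of the radius-expansion loop: expand the search radius by
# Expansion Factor until each point sees at least Guide Coverage guides,
# or the iteration limit (Expansion Iterations) is reached.

def expand_radius(radius, count_guides, coverage, factor, max_iterations):
    """count_guides: callable returning the number of guides within radius."""
    for _ in range(max_iterations):
        if count_guides(radius) >= coverage:
            break  # stop early once enough guides are captured
        radius *= factor
    return radius

# With guides at distances 0.3, 0.9 and 2.0, a coverage of 2 is met after
# one doubling of a 0.5 starting radius:
dists = [0.3, 0.9, 2.0]
r = expand_radius(0.5, lambda rad: sum(d <= rad for d in dists), 2, 2.0, 4)
print(r)  # 1.0
```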

Rest Cache

Load from Disk

Load the rest hair from disk. This is stored with the capture weights required for deformation.

Geometry File

The file to store the rest cache in.

Save to Disk

Save the rest cache to disk.

Deform

Attributes to Transform

Attributes on the first input’s geometry to deform.

Optimization

General

Bypass Editable SOP Network

Don’t cook the editable SOP network contained in this node. Depending on the SOPs contained inside, this can speed up hair generation substantially, which can be useful while visualizing the effect of certain parameters or changes in the groom source.

Limit To Bounding Box

Only grow hair from root points that are within the bounding boxes defined by the parameters below.

Center

The center of the bounding box.

Size

The size of the bounding box.

Prune

Prune a percentage of hair to speed up cooking.

Pruning Ratio

The percentage of curves to prune.

Thicken Remaining Hairs

Thicken the hairs left over after pruning to match the apparent density of the unpruned hair.
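One plausible relation for this compensation, offered here as an assumption rather than the node's exact formula: screen coverage of thin curves scales roughly with count times width, so pruning a fraction p of the hairs suggests scaling the remaining widths by 1 / (1 - p).

```python
# Hypothetical density-compensating thickness scale (assumed relation,
# not the node's documented formula): removing a fraction p of curves
# and widening the survivors by 1 / (1 - p) keeps count * width constant.

def thickness_scale(pruning_ratio):
    """pruning_ratio: fraction of curves removed, in [0, 1)."""
    return 1.0 / (1.0 - pruning_ratio)

# Pruning half of the hairs doubles the width of the survivors:
print(thickness_scale(0.5))  # 2.0
```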

Adaptive Prune

Adaptive Prune

Remove hair curves based on how large they are within a camera’s view.

When elements become more distant, they are scaled down and eventually deleted.

Remaining elements are scaled up to preserve the visual density of the collection of elements as a whole.

Camera

The camera used for computing element distance.

Size Unit

The unit in which the size of each element is specified.

Fraction Of Screen Width

Specify element size as a fraction of screen width. An element that fills the entire width of the screen has a size of 1.0.

Use this if you prefer your settings to be independent of the camera’s resolution.

Pixels

Specify element size in pixels. Use this if you prefer the pruning to depend on camera resolution. Setting a lower resolution on the camera will cause more elements to be deleted.

Size Threshold

The screen size at which pruning starts to occur.

Specified as a fraction of screen width or size in pixels, depending on the Size Unit setting.

Aggressiveness

Controls how aggressively elements are pruned as they become smaller than the threshold size.

Higher values cause elements to be pruned more rapidly as they become smaller.
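The node's exact pruning curve is not documented; the behavior described above could be sketched as a deletion probability that grows as an element shrinks below the Size Threshold, scaled by Aggressiveness. All of the following is an illustrative assumption.

```python
# Hypothetical adaptive-prune sketch: elements at or above the Size
# Threshold are always kept; below it, the chance of deletion grows with
# the shortfall and with Aggressiveness (assumed linear model).

def prune_probability(size, threshold, aggressiveness):
    if size >= threshold:
        return 0.0  # large enough on screen, never pruned
    shortfall = 1.0 - size / threshold
    return min(1.0, shortfall * aggressiveness)

print(prune_probability(0.02, 0.01, 2.0))   # 0.0 (above threshold)
print(prune_probability(0.005, 0.01, 2.0))  # 1.0 (half size, aggressive)
```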

Seed

The seed value used in addition to the Seed Attribute to randomize pruning per element.

Thickness

Thickness

The thickness of generated hair at its thickest point.

Thickness Ramp

The profile of thickness along the length of each hair.

Attributes

Skin Attribute Transfer

Point Attributes

A list of attributes to transfer from the skin geometry’s points to the hair primitives.

Uses the interpolated value at the skin location closest to each hair root.

Vertex Attributes

A list of attributes to transfer from the skin geometry’s vertices to the hair primitives.

Uses the interpolated value at the skin location closest to each hair root.

Primitive Attributes

A list of attributes to transfer from the skin geometry’s primitives to the hair primitives.

Uses the values of the primitive closest to the hair’s root location.

Detail Attributes

A list of attributes to transfer from the skin geometry’s detail to the hair primitives.

Guide Attribute Transfer

Point Attributes

A list of attributes to transfer from the guide points to the generated hair points.

Primitive Attributes

A list of attributes to transfer from the guide primitives to the generated hair primitives.

Output Attributes

Point Attributes

Point attributes to keep for rendering.

Vertex Attributes

Vertex attributes to keep for rendering.

Primitive Attributes

Primitive attributes to keep for rendering.

Detail Attributes

Detail attributes to keep for rendering.

Skin

Subdivision

Move Curves To Subdivision Surface

Moves the generated hair to subdivision limit surfaces. This should be enabled when the underlying skin geometry is rendered as a subdivision surface.

Mode

Match Skin Object

Moves curves to the skin’s limit surface when the skin geometry has subdivision rendering enabled.

Always On

Always move curves to the skin’s subdivision limit surface, regardless of whether subdivision is enabled on the skin.

Referenced Subdivision Values

Enabled

Displays whether subdivision is enabled on the skin geometry when subdivision mode is set to Match Skin Object.

Displacement

Displace Curves

Apply displacement along the skin normal to the generated curves. This can be used to match displacement shading applied to the skin geometry.

Note

Currently only Displace Along Normal mode is supported.

Mode

Match Skin Shader (Only Supports Displace Along Normal)

Attempts to find the skin shader referenced by this groom and uses its displacement values.

This is done by stepping along each referenced groom object to find a Guide Groom which has a reference to skin geometry. The shader assigned to the geometry’s object is used to detect displacement values.

Match Specified Shader (Only Supports Displace Along Normal)

Use the displacement values of a specified reference shader.

Displace Along Normal

Manually set displacement values.

Reference Shader

The shader used by Match Specified Shader to find displacement parameter values.

Displacement Values

Texture

The manually specified displacement texture.

Offset

The manually specified displacement texture value offset.

Scale

The manually specified displacement scale.

Referenced Displacement Values

Skin Shader

Displays the detected skin shader when Mode is set to Match Skin Shader.

Texture

Displays the referenced texture when a shader is detected or assigned and when that shader has displacement set up and enabled.

Offset

Displays the referenced offset when a shader is detected or assigned and when that shader has displacement set up and enabled.

Scale

Displays the referenced scale when a shader is detected or assigned and when that shader has displacement set up and enabled.

Render

Hair Generation

Use SOP Geometry

Render the same SOP geometry as the viewport.

Generate Geometry in Mantra

Generate hair geometry in the renderer. This allows rendering the full hair while using optimizations like Limit to Bounding Box and Pruning in the viewport.

Note

Generating hair in the renderer is only supported with Mantra.

Rendering with 3rd party renderers is only possible when setting this parameter to Use SOP Geometry, which causes the full generated hair to be written prior to rendering.

Render Visibility

Controls the visibility of an object to different types of rays using a category expression. This parameter generalizes the Phantom and Renderable toggles and allows more specific control over the visibility of an object to the different ray types supported by mantra and VEX.

  • “primary” - Rays sent from the camera

  • “shadow” - Shadow rays

  • “diffuse” - Diffuse rays

  • “reflect” - Reflections

  • “refract” - Refractions

For example, to create a phantom object, set the expression to “-primary”. To create an unrenderable object, set the expression to the empty string “”. These tokens correspond to the string given to “raystyle” in the VEX trace() and gather() functions.

Polygons as subdivision (Mantra)

Render polygons as a subdivision surface. The creaseweight attribute is used to perform linear creasing. This attribute may appear on points, vertices or primitives.

When rendering using OpenSubdiv, in addition to the creaseweight and cornerweight attributes and the subdivision_hole group, additional attributes are scanned to control the behaviour of refinement. These override any other settings:

  • int osd_scheme, string osd_scheme: Specifies the scheme for OSD subdivision (0 or “catmull-clark”; 1 or “loop”; 2 or “bilinear”). Note that for Loop subdivision, the geometry can only contain triangles.

  • int osd_vtxboundaryinterpolation: The Vertex Boundary Interpolation method (see vm_osd_vtxinterp for further details)

  • int osd_fvarlinearinterpolation: The Face-Varying Linear Interpolation method (see vm_osd_fvarinterp for further details)

  • int osd_creasingmethod: Specify the creasing method, 0 for Catmull-Clark, 1 for Chaikin

  • int osd_trianglesubdiv: Specifies the triangle weighting algorithm, 0 for Catmull-Clark weights, 1 for “smooth triangle” weights.

Render

Material

Path to the Material node.

Display

Whether or not this object is displayed in the viewport and rendered. Turn on the checkbox to have Houdini use this parameter, then set the value to 0 to hide the object in the viewport and not render it, or 1 to show and render the object. If the checkbox is off, Houdini ignores the value.

Phantom

When true, the object will not be rendered by primary rays. Only secondary rays will hit the object.

(See the Render Visibility property).

Renderable

If this option is turned off, then the instance will not be rendered. The object’s properties can still be queried from within VEX, but no geometry will be rendered. This is roughly equivalent to turning the object into a transform space object (the object would only be available as a transform for VEX functions, but have no geometry).

See Render Visibility (vm_rendervisibility property).

Display As

How to display your geometry in the viewport.


Shading

Categories

The space or comma separated list of categories to which this object belongs.

Currently not supported for per-primitive material assignment (material SOP).

Reflection mask

A list of patterns. Objects matching these patterns will reflect in this object. You can use wildcards (for example, key_*) and bundle references to specify objects.

You can also use the link editor pane to edit the relationships between lights and objects using a graphical interface.

The object:reflectmask property in Mantra is a computed property containing the results of combining reflection categories and reflection masks.

Refraction mask

A list of patterns. Objects matching these patterns will be visible in refraction rays. You can use wildcards (for example, key_*) and bundle references to specify objects.

You can also use the link editor pane to edit the relationships between lights and objects using a graphical interface.

The object:refractmask property in Mantra is a computed property containing the results of combining refraction categories and refraction masks.

Light mask

A list of patterns. Lights matching these patterns will illuminate this object. You can use wildcards (for example, key_*) and bundle references to specify lights.

You can also use the link editor pane to edit the relationships between lights and objects using a graphical interface.

The object:lightmask property in Mantra is a computed property containing the results of combining light categories and light masks.

Light selection

A space-separated list of categories. Lights in these categories will illuminate this object.

Volume filter

Some volume primitives (Geometry Volumes, Image3D) can use a filter during evaluation of volume channels. This specifies the filter. The default box filter is fast to evaluate and produces sharp renders for most smooth fluid simulations. If your voxel data contains aliasing (stairstepping along edges), you may need to use a larger filter width or smoother filter to produce acceptable results. For aliased volume data, gauss is a good filter with a filter width of 1.5.

  • point

  • box

  • gauss

  • bartlett

  • blackman

  • catrom

  • hanning

  • mitchell

Volume filter width

This specifies the filter width for the object:filter property. The filter width is specified in number of voxels. Larger filter widths take longer to render and produce blurrier renders, but may be necessary to combat aliasing in some kinds of voxel data.

Matte shading

When enabled, the object’s surface shader will be replaced with a matte shader for primary rays. The default matte shader causes the object to render as fully opaque but with an alpha of 0 - effectively cutting a hole in the image where the object would have appeared. This setting is useful when manually splitting an image into passes, so that the background elements can be rendered separately from a foreground object. The default matte shader is the “Matte” VEX shader, though it is possible to set a different matte shader by adding the vm_matteshader render property and assigning another shader. Secondary rays will still use the object’s assigned surface shader, allowing it to appear in reflections and indirect lighting even though it will not render directly.

For correct matte shading of volumes:

  1. Add the vm_matteshader property to the object.

  2. Create a Volume Matte shader.

  3. Set the density on this shader to match the density on the geometry shader.

  4. Assign this shader to vm_matteshader.

Then when the Matte Shading toggle is enabled, Mantra uses your custom volume matte shader rather than the default (which just sets the density to 1). If you want a fully opaque matte, you can use the Matte shader rather than Volume Matte.

Raytrace shading

Shade every sample rather than shading micropolygon vertices. This setting enables the raytrace rendering on a per-object basis.

When micropolygon rendering, shading normally occurs at micropolygon vertices at the beginning of the frame. To determine the color of a sample, the corner vertices are interpolated. Turning on object:rayshade invokes the ray tracing shading algorithm, causing each sample to be shaded independently. This can increase the shading cost significantly, but each sample is shaded at the correct time and location.

Currently not supported for per-primitive material assignment (material SOP).

Sampling

Geometry velocity blur

This menu lets you choose what type of geometry velocity blur to do on an object, if any. Separate from transform blur and deformation blur, you can render motion blur based on point movement, using attributes stored on the points that record change over time. Use this type of blur if the number of points in the geometry changes over time (for example, a particle simulation where points are born and die).

If your geometry changes topology frame-to-frame, Mantra will not be able to interpolate the geometry to correctly calculate Motion Blur. In these cases, motion blur can use a v and/or accel attribute which is consistent even while the underlying geometry is changing. The surface of a fluid simulation is a good example of this. In this case, and other types of simulation data, the solvers will automatically create the velocity attribute.

No Velocity Blur

Do not render motion blur on this object, even if the renderer is set to allow motion blur.

Velocity Blur

To use velocity blur, you must compute and store point velocities in a point attribute v. The renderer uses this attribute, if it exists, to render velocity motion blur (assuming the renderer is set to allow motion blur). The v attribute may be created automatically by simulation nodes (such as particle DOPs), or you can compute and add it using the Point velocity SOP.

The v attribute value is measured in Houdini units per second.

Acceleration Blur

To use acceleration blur, you must compute and store point acceleration in a point attribute accel (you can change the acceleration attribute name using the geo_accelattribute property). The renderer uses this attribute, if it exists, to render multi-segment acceleration motion blur (assuming the renderer is set to allow motion blur). The accel attribute may be created automatically by simulation nodes, or you can compute and add it using the Point velocity SOP.

When Acceleration Blur is on, if the geometry has an angular velocity attribute (w), rapid rotation will also be blurred. This should be a vector attribute whose components represent rotation speeds in radians per second around X, Y, and Z.

When this is set to “Velocity Blur” or “Acceleration Blur”, deformation blur is not applied to the object. When this is set to “Acceleration Blur”, you can use the geo_motionsamples property to set the number of acceleration samples.

Velocity motion blur uses the velocity attribute (v) to do linear motion blur.
Acceleration motion blur uses the change in velocity to more accurately blur objects turning at high speed.
Angular acceleration blur captures object spin, such as fast-spinning cubes.
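The v and accel attributes described above extrapolate a point's position over the shutter interval with standard kinematics. The attribute names follow the text; the sampling itself is up to the renderer, so this is only a sketch of the relationship. v is in Houdini units per second, t in seconds.

```python
# Kinematic sketch of velocity/acceleration motion blur: position at
# shutter time t is extrapolated from the point position P, the velocity
# attribute v, and the acceleration attribute accel.

def blur_position(P, v, accel, t):
    return tuple(p + vi * t + 0.5 * ai * t * t
                 for p, vi, ai in zip(P, v, accel))

# A point moving at 2 units/s with 4 units/s^2 acceleration, sampled
# 0.5 s into the shutter:
print(blur_position((0.0, 0.0, 0.0), (2.0, 0.0, 0.0), (4.0, 0.0, 0.0), 0.5))
# (1.5, 0.0, 0.0)
```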

Dicing

Shading quality

This parameter controls the geometric subdivision resolution for all rendering engines and additionally controls the shading resolution for micropolygon rendering. With all other parameters at their defaults, a value of 1 means that approximately 1 micropolygon is created per pixel. A higher value generates smaller micropolygons, meaning more shading occurs but the quality is higher.

In ray tracing engines, shading quality only affects the geometric subdivision quality for smooth surfaces (NURBS, render as subdivision) and for displacements - without changing the amount of surface shading. When using ray tracing, pixel samples and ray sampling parameters must be used to improve surface shading quality.

The effect of changing the shading quality is to increase or decrease the amount of shading by a factor of vm_shadingquality squared, so a shading quality of 2 performs 4 times as much shading and a shading quality of 0.5 performs 1/4 as much shading.
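The squared relation stated above works out as follows:

```python
# The amount of shading scales with the square of vm_shadingquality.

def shading_amount(quality, base=1.0):
    return base * quality ** 2

print(shading_amount(2.0))  # 4.0  -> four times as much shading
print(shading_amount(0.5))  # 0.25 -> a quarter as much shading
```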

Dicing flatness

This property controls the tessellation level for nearly flat primitives. Increasing the value causes more primitives to be considered flat and subdivided less. Turn this option down for more accurate (less optimized) nearly flat surfaces.

Ray predicing

This property will cause this object to generate all displaced and subdivided geometry before the render begins. Ray tracing can be significantly faster when this setting is enabled at the cost of potentially huge memory requirements.

Disable Predicing

Geometry is diced when it is hit by a ray.

Full Predicing

Generate and store all diced geometry at once.

Precompute Bounds

Generate all diced geometry just to compute accurate bounding boxes. This setting will discard the diced geometry as soon as the box has been computed, so it is very memory efficient. This can be useful to improve efficiency when using displacements with a large displacement bound without incurring the memory cost of full predicing.

When ray-tracing, if all polygons on the model are visible (either to primary or secondary rays) it can be more efficient to pre-dice all the geometry in that model rather than caching portions of the geometry and re-generating the geometry on the fly. This is especially true when global illumination is being computed (since there is less coherency among rays).

Currently not supported for per-primitive material assignment (material SOP).

Shade curves as surfaces

When rendering a curve, turns the curve into a surface and dices the surface, running the surface shader on multiple points across the surface. This may be useful when the curves become curved surfaces, but is less efficient. The default is to simply run the shader on the points of the curve and duplicate those shaded points across the created surface.

Geometry

Backface removal (Mantra)

If enabled, geometry facing away from the camera is not rendered.

Procedural shader

Geometry SHOP used by the renderer to generate render geometry for this object.

Force procedural geometry output

Enables output of geometry when a procedural shader is assigned. If you know that the procedural you have assigned does not rely on geometry being present for the procedural to operate correctly, you can disable this toggle.


Render as points (Mantra)

Controls how points from geometry are rendered. At the default setting, No Point Rendering, only points from particle systems are rendered. Setting this value to Render Only Points renders the geometry using only the point attributes, ignoring all vertex and primitive information. Render Unconnected Points works in a similar way, but only for points not used by any of the geometry’s primitives.

Two attributes control the point primitives if they exist.

orient

A vector which determines the normal of the point geometry. If the attribute doesn’t exist, points are oriented to face the incoming ray (the VEX I variable).

width

Determines the 3D size of the points (defaults to 0.05).

Use N for point rendering

Mantra will initialize the N global from the N attribute when rendering point primitives. When disabled (the default), point normals will be initialized to face the camera.

Metaballs as volume

Render metaballs as volumes as opposed to surfaces. The volume quality for metaballs will be set based on the average size of all metaballs in the geometry, so increasing or decreasing the metaball size will automatically adjust the render quality to match.

Coving

Whether Mantra will try to prevent cracks.

Coving is the process of filling cracks in diced geometry at render time, where different levels of dicing side-by-side create gaps at T-junctions.

The default setting, Coving for displacement/sub-d, only does coving for surfaces with a displacement shader and for subdivision surfaces, where the displacement of points can potentially create large cracks. This is sufficient for most rendering; however, you may want to use Coving for all primitives if you are using a very low shading rate or see cracks in the alpha of the rendered image.

Do not use Disable coving. It has no performance benefit, and may actually harm performance since Houdini has to render any geometry visible through the crack.

0

No coving.

1

Only displaced surfaces and sub-division surfaces will be coved.

2

All primitives will be coved.
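The three values above map directly onto the named modes. As a small lookup sketch (the render property name vm_coving is an assumption here; check your Houdini build):

```python
# Assumed mapping for the coving render property (vm_coving).
COVING_MODES = {
    0: "No coving",
    1: "Coving for displacement/sub-d (default)",
    2: "Coving for all primitives",
}

def describe_coving(value):
    """Return a human-readable description of a coving mode value."""
    return COVING_MODES.get(value, "unknown")
```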

Material Override

Controls how material overrides are evaluated and output to the IFD.

When set to Evaluate Once, any parameter on the material that uses channels or expressions is evaluated only once for the entire detail. This results in significantly faster IFD generation, because material parameter assignment is handled entirely by Mantra rather than Houdini. Setting the value to Evaluate for Each Primitive/Point evaluates those parameters for each primitive and/or point. You can also skip material overrides entirely by setting the value to Disabled.
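The cost difference between the modes comes down to how many times a channel- or expression-driven parameter is evaluated during IFD generation. A conceptual sketch (not Houdini’s actual bookkeeping):

```python
def override_evaluations(mode, num_points, num_prims):
    """Roughly how many times one channel/expression parameter is
    evaluated during IFD generation, per material override mode.
    Conceptual sketch only; mode names are paraphrased.
    """
    if mode == "disabled":
        return 0                       # overrides skipped entirely
    if mode == "evaluate_once":
        return 1                       # once for the whole detail
    if mode == "per_prim_point":
        return num_points + num_prims  # once per point and per primitive
    raise ValueError("unknown mode: %s" % mode)
```

For a mesh with thousands of points and primitives, Evaluate Once keeps the per-parameter cost constant, which is why it generates IFDs significantly faster.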

Automatically Compute Normals (Old)

Whether Mantra should compute the N attribute automatically. If the N attribute exists, its value remains unchanged; if it does not, it is created. This allows polygon geometry that doesn’t already have the N attribute computed to be smooth shaded.

Not supported for per-primitive material assignment (material SOP).
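Computing a missing N attribute typically means averaging the normals of the faces adjacent to each point. A minimal sketch of that idea (conceptual; Mantra’s internal method may differ):

```python
def compute_point_normals(points, triangles):
    """Derive per-point normals by averaging adjacent face normals,
    the usual way a missing N attribute is filled in for smooth shading.
    points: list of (x, y, z); triangles: list of (i, j, k) indices.
    """
    def sub(a, b):
        return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

    def cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])

    normals = [(0.0, 0.0, 0.0)] * len(points)
    for i, j, k in triangles:
        # Unnormalized face normal; larger faces contribute more weight.
        fn = cross(sub(points[j], points[i]), sub(points[k], points[i]))
        for idx in (i, j, k):
            n = normals[idx]
            normals[idx] = (n[0] + fn[0], n[1] + fn[1], n[2] + fn[2])

    out = []
    for n in normals:
        length = (n[0] ** 2 + n[1] ** 2 + n[2] ** 2) ** 0.5 or 1.0
        out.append((n[0] / length, n[1] / length, n[2] / length))
    return out
```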

Ignore geometry attribute shaders

When geometry has shaders defined on a per-primitive basis, this parameter will override these shaders and use only the object’s shader. This is useful when performing matte shading on objects.

Not supported for per-primitive material assignment (material SOP).

Locals

See also

Object nodes

  • Agent Cam

    Create and attach camera to a crowd agent.

  • Alembic Archive

    Loads the objects from an Alembic scene archive (.abc) file into the object level.

  • Alembic Xform

    Loads only the transform from an object or objects in an Alembic scene archive (.abc).

  • Ambient Light

    Adds a constant level of light to every surface in the scene (or in the light’s mask), coming from no specific direction.

  • Auto Bone Chain Interface

    The Auto Bone Chain Interface is created by the IK from Objects and IK from Bones tools on the Rigging shelf.

  • Blend

    Switches or blends between the transformations of several input objects.

  • Blend Sticky

    Computes its transform by blending between the transforms of two or more sticky objects, allowing you to blend a position across a polygonal surface.

  • Bone

    The Bone Object is used to create hierarchies of limb-like objects that form part of a hierarchy …

  • Camera

    You can view your scene through a camera, and render from its point of view.

  • Common object parameters

  • Dop Network

    The DOP Network Object contains a dynamic simulation.

  • Environment Light

    Environment Lights provide background illumination from outside the scene.

  • Extract Transform

    The Extract Transform Object gets its transform by comparing the points of two pieces of geometry.

  • Fetch

    The Fetch Object gets its transform by copying the transform of another object.

  • Formation Crowd Example

    Crowd example showing a changing formation setup

  • Fuzzy Logic Obstacle Avoidance Example

  • Fuzzy Logic State Transition Example

  • Geometry

    Container for the geometry operators (SOPs) that define a modeled object.

  • Groom Merge

    Merges groom data from multiple objects into one.

  • Guide Deform

    Moves the curves of a groom with animated skin.

  • Guide Groom

    Generates guide curves from a skin geometry and does further processing on these using an editable SOP network contained within the node.

  • Guide Simulate

    Runs a physics simulation on the input guides.

  • Hair Card Generate

    Converts dense hair curves to a polygon card, keeping the style and shape of the groom.

  • Hair Card Texture Example

    An example of how to create a texture for hair cards.

  • Hair Generate

    Generates hair from a skin geometry and guide curves.

  • Handle

    The Handle Object is an IK tool for manipulating bones.

  • Indirect Light

    Indirect lights produce illumination that has reflected from other objects in the scene.

  • Instance

    Instance Objects can instance other geometry, light, or even subnetworks of objects.

  • LOP Import

    Imports transform data from a USD primitive in a LOP node.

  • LOP Import Camera

    Imports a USD camera primitive from a LOP node.

  • Labs Fire Presets

    Quickly generate and render fire simulations using presets for size varying from torch to small to 1m high and low

  • Light

    Light Objects cast light on other objects in a scene.

  • Light template

    A very limited light object without any built-in render properties. Use this only if you want to build completely custom light with your choice of properties.

  • Microphone

    The Microphone object specifies a listening point for the SpatialAudio CHOP.

  • Mocap Acclaim

    Import Acclaim motion capture.

  • Mocap Biped 1

    A male character with motion captured animations.

  • Mocap Biped 2

    A male character with motion captured animations.

  • Mocap Biped 3

    A male character with motion captured animations.

  • Null

    Serves as a place-holder in the scene, usually for parenting. This object does not render.

  • Path

    The Path object creates an oriented curve (path).

  • PathCV

    The PathCV object creates control vertices used by the Path object.

  • Python Script

    The Python Script object is a container for the geometry operators (SOPs) that define a modeled object.

  • Ragdoll Run Example

    Crowd example showing a simple ragdoll setup.

  • Reference Image

    Container for the Compositing operators (COP2) that define a picture.

  • Rivet

    Creates a rivet on an object’s surface, usually for parenting.

  • Simple Biped

    A simple and efficient animation rig with full controls.

  • Simple Female

    A simple and efficient female character animation rig with full controls.

  • Simple Male

    A simple and efficient male character animation rig with full controls.

  • Sound

    The Sound object defines a sound emission point for the SpatialAudio CHOP.

  • Stadium Crowds Example

    Crowd example showing a stadium setup

  • Stereo Camera Rig

    Provides parameters to manipulate the interaxial lens distance as well as the zero parallax setting plane in the scene.

  • Stereo Camera Template

    Serves as a basis for constructing a more functional stereo camera rig as a digital asset.

  • Sticky

    Creates a sticky object based on the UVs of a surface, usually for parenting.

  • Street Crowd Example

    Crowd example showing a street setup with two agent groups

  • Subnet

    Container for objects.

  • Switcher

    Acts as a camera but switches between the views from other cameras.

  • TOP Network

    The TOP Network operator contains object-level nodes for running tasks.

  • VR Camera

    Camera supporting VR image rendering.

  • Viewport Isolator

    A Python Script HDA providing per viewport isolation controls from selection.

  • glTF