Posted May 16, 2013

Halo 4 Spartan Ops is an immersive animated series brought out in support of the Halo 4 in-game storyline. Designed to connect you with some of the game’s key characters, this series becomes an extension of the gaming experience. 

To bring this series to life, Axis created all 1,200 beautifully rendered CG shots. At Axis, Houdini is a core production tool used for a wide variety of lighting and VFX work, and Houdini’s Mantra renderer has become a key tool for the studio. Halo 4 Spartan Ops is just one of their many projects rendered in Mantra where unbelievable realism was the goal.

In this interview with Axis’s CG Supervisor Sergio Caires and Pipeline Supervisor Nicholas Pliatsikas we discuss how Houdini and Mantra were used to help generate the final look, creative customizations, and Axis’ facial rigging system for Halo 4 Spartan Ops.

Interview With Sergio Caires And Nicholas Pliatsikas

SideFX: Is it true that Mantra has become your ‘go-to’ renderer for all of your projects?

Sergio Caires: Yes it is and has been for a long time. You would now struggle to find a project on our website that wasn't rendered in Mantra. 

SideFX: Halo 4 Spartan Ops was surely a data intensive project with its incredible realism. How well did Mantra perform when handling large volumes of data?

Sergio Caires: We have not come across any renderer (and we have been through a few big ones) that can handle anything like the amount of data that we push through Mantra so effortlessly - and then have the audacity to raytrace against it. Its reliability and robustness give us supreme confidence in its ability.

In Halo 4, there is a model of a spacecraft called the Infinity that was created by Ansel Hsiao, a spaceship modeling god. He asked us "what is your polygon budget?" and I replied with words to the effect of "go nuts on it". He built us an amazingly detailed model of around 26 million polygons, which we rendered in Mantra with an amazing success rate!

Nicholas Pliatsikas: The flexibility that Houdini gave us was paramount because we had a huge number of assets in place that generated large volumes of file I/O. For Spartan Ops, we rebuilt our asset system in Houdini to make it as automated as possible.

Our asset system was able to generate the Houdini assets automatically, along with the shader templates, all hooked together with Houdini’s Digital Assets. These assets were built with user-editable artist areas allowing for procedural modification both pre- and post-deformation.

The integration of our own custom render-time delayed load procedural, which loads cache data directly from our animation software, Maya, was completely seamless. Houdini is open enough that we could even use CVEX to modify our geometry cache data through the delayed load at render time. The latest geo core rebuild in H12 really was perfect timing for us to be able to handle such massive amounts of data. The new instancing system was of the utmost importance to us when rendering a highly detailed armoury in Spartan Ops Episode 1, in the shot where the Spartans get suited up - those Gyro Bays were very complex objects.
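
As a rough, hypothetical sketch of the kind of render-time edit a CVEX shader can apply through a delayed load procedural (the binding names and the "push" parameter here are illustrative assumptions, not Axis’ actual setup):

```vex
// Minimal CVEX sketch: nudge cached points along their normals at render time.
// Binding names (P, N) and the "push" parameter are illustrative assumptions.
cvex push_along_normal(
        export vector P    = 0;          // point position, read and written back
        vector        N    = {0, 1, 0};  // point normal taken from the cache
        float         push = 0.0)        // artist-controlled offset amount
{
    P += normalize(N) * push;
}
```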

SideFX: The hair on the Spartan characters looks fantastic, and like everything else, incredibly real. Can you tell us a little more about the process involved in generating this hair?

Sergio Caires: The buzz cuts were a creative choice motivated by the military theme of the series. For longer hair, we have continued to use and develop the system that we previously used in our Dead Island trailer. This is an extensively modified version of the already flexible system provided out of the box in Houdini, with many enhancements such as curling, noises and, most importantly, a new guide-following method which is much better suited to controlling long, styled human hair. This was all done by editing the existing CVEX node-based shaders!
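
As a hedged illustration of the sort of curling enhancement described here (not the actual Axis fur code), a point wrangle run over the generated hair curves might look something like this, assuming each hair carries a 0..1 "curveu" attribute and a "tangentu" direction:

```vex
// Hypothetical Point Wrangle sketch: add a simple curl to generated hair curves.
// The "curveu" and "tangentu" attributes are assumed to exist on the curves.
float TWO_PI = 6.283185;
float freq = chf("curl_frequency");     // number of curls along the hair
float amp  = chf("curl_amplitude");     // curl radius

vector tangent = normalize(v@tangentu);
vector side    = normalize(cross(tangent, {0, 1, 0}));
vector bend    = normalize(cross(tangent, side));

float u     = f@curveu;                 // 0 at the root, 1 at the tip
float phase = u * freq * TWO_PI;
float fade  = smooth(0.0, 0.3, u);      // keep the root planted, curl towards the tip

@P += (side * cos(phase) + bend * sin(phase)) * amp * fade;
```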

As in Dead Island, we again lit the hair using volumetric shadow-casting proxies, which provided a good representation of the hair shape and density. First, we resampled the hair preview curves with a segment length tied to the volume voxel size; then, using a Volume VEX Operator (VOP) and point cloud VOPs, we simply counted the number of points within each voxel, giving us a reasonably accurate representation of the hair density as well as its shape. The resulting volume is then used to cast density-attenuated shadows onto the hair and the rest of the scene. We developed tools to automatically generate these volumes and fully integrated them into the Axis Fur UI, freeing up artist resources for other tasks.
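
As a minimal, hypothetical sketch of that density-counting step (the volume name, the input path and the search settings are all assumptions for illustration):

```vex
// Hypothetical Volume Wrangle sketch running over a "density" volume: count
// resampled hair points near each voxel to approximate hair density and shape.
float radius = volumevoxeldiameter(0, "density") * 0.5;   // tie the search to the voxel size
int   handle = pcopen("op:../resample_hair", "P", @P, radius, 1000);
f@density = pcnumfound(handle);        // number of hair points found near this voxel centre
```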

To shade the hair, make it look more realistic, and dramatically speed up render times, we used volumes. This gave our characters’ hair a natural softness and permeability to light, which goes some way toward mimicking how light scatters and attenuates as it bounces around and through the hair.

One of the most production-friendly modifications we have made is the ability to import hair guides from anywhere, or from any software that can output NURBS curves or polygon lines. This is absolutely awesome because we can then outsource the hair styling (in our case to a specialist working in Max) and procedurally deal with it in whatever shape the delivery comes to us.

SideFX: Can you tell us a bit more about your rendering and lighting pipeline?

Sergio Caires: We assetize everything in Houdini, so everything gets easily distributed and updated across the project. Each asset is accompanied by a shader asset, which contains surface, displacement, and property shading operators wired up to "output shader" nodes which, in turn, get assigned to the material slots. These output shaders are automatically generated based on our asset definition; this speeds up the shader assignment workflow since all the conventions are already defined.

We use output shader nodes partly because they allow us to share shaders across materials. This method gives us the most flexibility in dealing with production issues; for example, it may be that a model is split differently for the surface shader than it is for displacement, so we often plug just one displacement and property SHOP into multiple output shader nodes. We don't see an advantage to boxing UberShader-based surface and displacement shaders within another "material" subnet. That only comes into play for custom shaders which may benefit from sharing VOPs etc., but since 99.99% of our work is done with our custom UberShader this is not a frequent scenario for us.

SideFX: Could you tell us more about any customizations you may have made to Mantra in order to create the look achieved in this spot?

Sergio Caires: I wouldn't say we have customized Mantra, but we have customized almost everything that feeds into it. This is one of the great things about Houdini. Side Effects has done the really hard work of designing software so impressively flexible and well thought out that it can easily be customized by anyone to suit their particular needs.

There are a couple of things of note that we did to help optimise our renders. We have our own irradiance caching methodology, so that we can do indirect light lookups in a similar way to how Houdini’s photon maps work, but with an evenly distributed point cloud instead.

These caches were a pain to generate, but we have now figured out how to create them using the same mechanisms and hooks that Mantra uses to generate nice, evenly distributed point clouds with baked irradiance for its sub-surface scattering shader. You have to try really hard to find a black box in Houdini that you can't open to tweak or re-purpose!
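
A hypothetical sketch of what such a shade-time lookup could look like, with the cache file name, channel name and search settings all assumed for illustration rather than taken from Axis’ pipeline:

```vex
// Hypothetical shader-side sketch: filter baked irradiance from an evenly
// distributed point cloud at shade time.
vector lookup_irradiance(vector P)
{
    vector irr = 0;
    int handle = pcopen("irradiance.pc", "P", P, 0.25, 20);
    if (pcnumfound(handle) > 0)
        irr = pcfilter(handle, "irr");  // distance-weighted average of the cached samples
    pcclose(handle);
    return irr;
}
```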

We also created an invaluable little surface operator (SOP) that lets you assign property nodes to polygons outside the camera frustum so that we can very aggressively take down dicing quality in unseen parts of the scene.
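
A minimal sketch of the idea, assuming a Primitive Wrangle feeding a group that a Properties node is bound to (the camera path and padding value are illustrative assumptions):

```vex
// Hypothetical Primitive Wrangle sketch: group primitives whose centroid lies
// outside the camera frustum so dicing quality can be lowered for them.
vector ndc = toNDC("/obj/RENDER_CAM", @P);   // @P is the primitive centroid here
float  pad = 0.1;                            // safety margin around the frame edges
int outside = ndc.x < -pad || ndc.x > 1 + pad ||
              ndc.y < -pad || ndc.y > 1 + pad ||
              ndc.z > 0;                     // points behind the camera have positive NDC z
i@group_low_dicing = outside;
```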

We have a really neat procedural shader that lets you add things like dirt in crevices, drips from overhangs, and wear and tear on edges like chipped paint and scratches. This was used on pretty much everything from the UNSC Infinity ship to the Spartans’ armour.
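
A hedged sketch of how such crevice and edge masks can be driven, assuming a signed curvature point attribute (for example from a Measure SOP) rather than Axis’ actual shader:

```vex
// Hypothetical Point Wrangle sketch of a crevice-dirt / edge-wear mask.
// Assumes a signed "curvature" point attribute already exists on the geometry.
float curv = f@curvature;
float n    = noise(@P * chf("noise_freq"));                  // break up the masks
f@wear_mask = smooth(0.2, 0.8,  curv) * fit01(n, 0.5, 1.0);  // convex edges: chips and scratches
f@dirt_mask = smooth(0.2, 0.8, -curv) * fit01(n, 0.5, 1.0);  // concave crevices: dirt build-up
```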

Nicholas Pliatsikas: The main customization we did for Mantra was the introduction of our custom Delayed Load Procedural. This gave us quite a few features that really helped, mainly to remove the bottleneck of another geometry caching stage.

I suppose the setup we had with the automatic Houdini Digital Asset system is similar to the current Alembic workflow in Houdini, but on steroids. We wanted to push this further and have the animation data from Maya rendered directly in Mantra. This comes with a few requirements, such as the ability to make certain edits to the source geometry, post-deformation, on the delayed load itself.

With the ability to embed the point order as an attribute within the Houdini GEO format, we could alter the point order, delete faces, and so on, and our delayed load would still apply the correct deformation to the geometry rather than scrambling the mesh as it would in other applications.
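
A minimal sketch of matching deformation by a stored id rather than point order (the "id" attribute name and the two-input wiring are illustrative assumptions, not the actual delayed load internals):

```vex
// Hypothetical Point Wrangle sketch: input 0 is the edited model, input 1 the
// deforming cache; both are assumed to carry an integer "id" with the original order.
int src = idtopoint(1, i@id);          // find the cache point with the same original id
if (src >= 0)
    @P = point(1, "P", src);           // copy the deformed position across
```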

We also found that, with the scales and distances we had to work at, we hit certain floating-point precision problems. We solved this by creating an offset system where we moved all data to the origin, centralised around the camera itself. This had to happen in component space because world-space caches were being utilised, so again we were able to use the custom delayed load with the same offset data as the rest of the scene, based off the new camera position at the origin.
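
A tiny sketch of that re-centring idea, with the camera path assumed for illustration; the camera itself would receive the same offset so it ends up at the origin:

```vex
// Hypothetical Point Wrangle sketch: re-centre geometry around the render camera
// to avoid large-coordinate floating-point precision problems.
vector campos = {0, 0, 0} * optransform("/obj/RENDER_CAM");  // camera position in world space
@P -= campos;
```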

SideFX: We understand that you implemented similar facial rigging techniques as you did when you were producing the Dead Island trailer. Did you use the exact same system as you used before or did you make any advancements?

Nicholas Pliatsikas: We made some improvements to the organic feel of the motions and the overall shapes.

The overall facial pipeline worked really well to be honest, from the scan data for the base mesh to the 4D scans for the blend shape extraction. Again we utilized Houdini to extract the blend shapes used to drive our proprietary joint-based face rigs. We had a mixed pipeline of 4D-captured head scans for our own assets and existing 343 Industries asset data for certain characters, which required a different treatment due to differing topologies. Houdini was used to retarget these to our standard topological structure to allow for blend shape extraction as well.
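
As a hypothetical sketch of the blend shape extraction step once an expression has been retargeted onto the shared topology (matching point counts and the "delta" attribute name are assumptions for illustration):

```vex
// Hypothetical Point Wrangle sketch: input 0 is the neutral head, input 1 the
// matching expression scan; store the per-point delta the face rig can blend in.
vector target = point(1, "P", @ptnum);
v@delta = target - @P;
```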

Related to this, we also attempted to push the quality further with displacement maps extracted per blend shape from the raw 4D scan data, but the resolution wasn't quite there and there was an inherent issue with alignment between displacements, so the results were too messy to use.

This is a shame, as it would have made for a nice automated set-up that could have increased our overall quality on a large-scale production. We did fall back to our trusty stress-detection assets and wrinkle map method, applied to all hero characters, which really does add to that organic skin effect that the shading does so well to highlight.
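
A hedged sketch of a rest-length style stress measure of the kind a wrinkle-map setup can key off (the "rest" attribute and the remap range are illustrative assumptions, not Axis’ stress assets):

```vex
// Hypothetical Point Wrangle sketch: compare current edge lengths to rest-pose
// edge lengths; compression drives the wrinkle weight.
float cur_len  = 0;
float rest_len = 0;
foreach (int nb; neighbours(0, @ptnum))
{
    cur_len  += length(point(0, "P",    nb) - @P);
    rest_len += length(point(0, "rest", nb) - v@rest);
}
// A ratio below 1 means the skin is compressed; remap that into a 0..1 wrinkle weight.
f@wrinkle = rest_len > 0 ? clamp(fit(cur_len / rest_len, 0.7, 1.0, 1.0, 0.0), 0, 1) : 0;
```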

SideFX: Axis also uses Houdini for everyday VFX work such as volumetrics, smoke, fire, and explosions. How is the experience when you integrate these kinds of effects into a shot?

Sergio Caires: Integrating these sorts of VFX elements into a shot is very easy with Houdini because everything, both the VFX and the rendering, is handled within the same package. When working with volumes, for instance, because they are now rendered so quickly with Mantra, we can just render them out using the very same light rig as we used for the rest of the scene. In other words, there is no integration step because it’s all integrated from the outset.

