HDK
A deep camera image stores values for each color channel at multiple depths for every pixel. A deep shadow image is a special case of a deep camera image that contains a single channel for opacity.
The HDK provides classes to read, write, and evaluate deep camera maps (and deep shadow maps):
The IMG_DeepShadow class can be used in conjunction with the IMG_DeepPixelReader class to read raw pixel values from deep shadow images. See standalone/dsmprint.C for a simple example.
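The read path can be sketched roughly as follows, in the spirit of dsmprint.C. This is an illustrative sketch, not a drop-in tool: the method names (getChannelCount(), getChannel(), open(), getDepth(), getData()) are taken from IMG_DeepShadow.h as the author understands it, and should be verified against the header in your HDK version.

```cpp
// Sketch: print the opacity records of one pixel of a deep shadow map.
// Verify all IMG_Deep* method names against IMG_DeepShadow.h.
#include <IMG/IMG_DeepShadow.h>
#include <string.h>
#include <stdio.h>

static void
printPixel(IMG_DeepShadow &dsm, int x, int y)
{
    IMG_DeepPixelReader          pixel(dsm);
    const IMG_DeepShadowChannel *of = nullptr;

    // Locate the opacity channel ("Of" by convention)
    for (int i = 0; i < dsm.getChannelCount(); i++)
        if (!strcmp(dsm.getChannel(i)->getName(), "Of"))
            of = dsm.getChannel(i);

    if (of && pixel.open(x, y))
    {
        // One iteration per z record stored for this pixel
        for (int i = 0; i < pixel.getDepth(); i++)
        {
            const float *data = pixel.getData(*of, i);
            printf("  record %d: Of = (%g, %g, %g)\n",
                   i, data[0], data[1], data[2]);
        }
        pixel.close();
    }
}
```

dsmprint.C in the HDK samples shows the complete version, including opening the file and iterating over the full resolution.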
To evaluate a deep image at a given z-depth (with interpolation and filtering), please see TIL_TextureMap::deepLookup().
The example file standalone/i3ddsmgen.C reads a 3D texture file and converts it to a deep shadow image.
The writing process is fairly straightforward:
When writing pixels, pass a single interleaved array of floats for each z-depth record. The first 3 floats of the buffer must be the opacity (red, green, blue) values for that depth, followed by the values for each extra channel referenced, in channel order. For example:
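A hedged sketch of such a write follows. The class and method names here (IMG_DeepPixelWriter, create(), write()) and in particular the argument lists are assumptions to be checked against the declarations in IMG_DeepShadow.h for your HDK version:

```cpp
// Sketch only: write one z record into one pixel of a new deep shadow map.
// Check create()/write() signatures in IMG_DeepShadow.h before use.
#include <IMG/IMG_DeepShadow.h>

void
writeOnePixel()
{
    IMG_DeepShadow dsm;

    // 512x512 map; the trailing arguments (assumed to be samples per
    // pixel) are a guess -- consult the create() declaration.
    dsm.create("deep.rat", 512, 512, 1, 1);

    IMG_DeepPixelWriter writer(dsm);
    if (writer.open(256, 256))
    {
        // Interleaved buffer for one z record: the first 3 floats are
        // the opacity (r, g, b); any extra channels referenced would
        // follow them in this same buffer.
        float data[3] = { 0.25f, 0.25f, 0.25f };
        writer.write(/*z*/ 10.0f, data, /*vsize*/ 3, /*flags*/ 0);
        writer.close();
    }
    dsm.close();
}
```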
When writing options for camera transforms, please see IMG_DeepShadow.h for details on the options expected.
By default, mantra will store the information required to reconstruct the world-to-view and world-to-NDC transformation matrices in the TBFOptions stored on the map.
There are convenience functions (new in H12) which allow you to extract these matrices if they are available.
The world-to-NDC transform returns points in homogeneous coordinates. To get the values in unit space (i.e. the range 0 to 1), you will need to divide by the w component after transforming the point.
To convert the z-coordinate, you will have to import the "camera:clip" option from the TBFOptions. This is a 2-tuple storing the near and far clipping planes.
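Putting the last few paragraphs together, a world-to-unit-space conversion might look like the sketch below. getWorldToNDC() stands in for one of the H12 convenience functions mentioned above (verify the exact name and signature in IMG_DeepShadow.h), the near/far fetch from the TBFOptions is left as a placeholder, and treating w as camera-space depth assumes a standard perspective transform:

```cpp
// Sketch: convert a world-space point into unit space using the
// matrices stored on the map. Names marked below are assumptions.
#include <IMG/IMG_DeepShadow.h>
#include <UT/UT_Matrix4.h>
#include <UT/UT_Vector3.h>
#include <UT/UT_Vector4.h>

bool
worldToUnit(IMG_DeepShadow &dsm, const UT_Vector3 &world, UT_Vector3 &unit)
{
    UT_Matrix4 ndc;
    // Assumed convenience function; check IMG_DeepShadow.h
    if (!dsm.getWorldToNDC(ndc, /*fetch from file*/ true))
        return false;

    // Transform into homogeneous space (row-vector convention)
    UT_Vector4 p(world.x(), world.y(), world.z(), 1.0f);
    p *= ndc;
    if (p.w() == 0)
        return false;

    // x/y: the homogeneous divide brings the values into unit space
    float ux = p.x() / p.w();
    float uy = p.y() / p.w();

    // z: remap depth using the near/far planes from the "camera:clip"
    // 2-tuple in the TBFOptions. The values below are placeholders for
    // that import; w is used as camera-space depth, which assumes a
    // standard perspective transform.
    float nearz = 0.01f, farz = 1000.0f;   // stand-ins for camera:clip
    float uz = (p.w() - nearz) / (farz - nearz);

    unit = UT_Vector3(ux, uy, uz);
    return true;
}
```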