NanoVDB.h
1 // Copyright Contributors to the OpenVDB Project
2 // SPDX-License-Identifier: MPL-2.0
3 
4 /*!
5  \file NanoVDB.h
6 
7  \author Ken Museth
8 
9  \date January 8, 2020
10 
11  \brief Implements a light-weight, self-contained VDB data structure in a
12  single file! In other words, this is a significantly watered-down
13  version of the OpenVDB implementation, with few dependencies - so
14  a one-stop shop for a minimalistic VDB data structure that runs on
15  most platforms!
16 
17  \note It is important to note that NanoVDB (by design) is a read-only
18  sparse GPU (and CPU) friendly data structure intended for applications
19  like rendering and collision detection. As such it obviously lacks
20  a lot of the functionality and features of OpenVDB grids. NanoVDB
21  is essentially a compact linearized (or serialized) representation of
22  an OpenVDB tree with getValue methods only. For best performance use
23  the ReadAccessor::getValue method as opposed to the Tree::getValue
24  method. Note that since a ReadAccessor caches previous access patterns
25  it is by design not thread-safe, so use one instantiation per thread
26  (it is very light-weight). Also, it is not safe to copy accessors between
27  the GPU and CPU! In fact, client code should only interface
28  with the API of the Grid class (all other nodes of the NanoVDB data
29  structure can safely be ignored by most client code)!
30 
31 
32  \warning NanoVDB grids can only be constructed via tools like createNanoGrid
33  or the GridBuilder. This explains why none of the grid nodes defined below
34  have public constructors or destructors.
35 
36  \details Please see the following paper for more details on the data structure:
37  K. Museth, “VDB: High-Resolution Sparse Volumes with Dynamic Topology”,
38  ACM Transactions on Graphics 32(3), 2013, which can be found here:
39  http://www.museth.org/Ken/Publications_files/Museth_TOG13.pdf
40 
41  NanoVDB was first published here: https://dl.acm.org/doi/fullHtml/10.1145/3450623.3464653
42 
43 
44  Overview: This file implements the following fundamental classes that, when combined,
45  form the backbone of the VDB tree data structure:
46 
47  Coord- a signed integer coordinate
48  Vec3 - a 3D vector
49  Vec4 - a 4D vector
50  BBox - a bounding box
51  Mask - a bitmask essential to the non-root tree nodes
52  Map - an affine coordinate transformation
53  Grid - contains a Tree and a map for world<->index transformations. Use
54  this class as the main API with client code!
55  Tree - contains a RootNode and getValue methods that should only be used for debugging
56  RootNode - the top-level node of the VDB data structure
57  InternalNode - the internal nodes of the VDB data structure
58  LeafNode - the lowest level tree nodes that encode voxel values and state
59  ReadAccessor - implements accelerated random access operations
60 
61  Semantics: A VDB data structure encodes values and (binary) states associated with
62  signed integer coordinates. Values encoded at the leaf node level are
63  denoted voxel values, and values associated with other tree nodes are referred
64  to as tile values, which by design cover a larger coordinate index domain.
65 
66 
67  Memory layout:
68 
69  It's important to emphasize that all the grid data (defined below) are explicitly 32 byte
70  aligned, which implies that any memory buffer that contains a NanoVDB grid must also be
71  32 byte aligned. That is, the memory address of the beginning of a buffer (see ascii diagram below)
72  must be divisible by 32, i.e. uintptr_t(&buffer)%32 == 0! If this is not the case, the C++ standard
73  says the behaviour is undefined! Normally this is not a concern on GPUs, because they use 256 byte
74  aligned allocations, but the same cannot be said about the CPU.
75 
76  GridData is always at the very beginning of the buffer immediately followed by TreeData!
77  The remaining nodes and blind-data are allowed to be scattered throughout the buffer,
78  though in practice they are arranged as:
79 
80  GridData: 672 bytes (e.g. magic, checksum, major, flags, index, count, size, name, map, world bbox, voxel size, class, type, offset, count)
81 
82  TreeData: 64 bytes (node counts and byte offsets)
83 
84  ... optional padding ...
85 
86  RootData: size depends on ValueType (index bbox, voxel count, tile count, min/max/avg/standard deviation)
87 
88  Array of: RootData::Tile
89 
90  ... optional padding ...
91 
92  Array of: Upper InternalNodes of size 32^3: bbox, two bit masks, 32768 tile values, and min/max/avg/standard deviation values
93 
94  ... optional padding ...
95 
96  Array of: Lower InternalNodes of size 16^3: bbox, two bit masks, 4096 tile values, and min/max/avg/standard deviation values
97 
98  ... optional padding ...
99 
100  Array of: LeafNodes of size 8^3: bbox, bit masks, 512 voxel values, and min/max/avg/standard deviation values
101 
102 
103  Notation: "]---[" implies it has optional padding, and "][" implies zero padding
104 
105  [GridData(672B)][TreeData(64B)]---[RootData][N x Root::Tile]---[InternalData<5>]---[InternalData<4>]---[LeafData<3>]---[BLINDMETA...]---[BLIND0]---[BLIND1]---etc.
106  ^                                 ^         ^                  ^                   ^                   ^
107  |                                 |         |                  |                   |                   |
108  +-- Start of 32B aligned buffer   |         |                  |                   |                   +-- Node0::DataType* leafData
109      GridType::DataType* gridData  |         |                  |                   |
110                                    |         |                  |                   +-- Node1::DataType* lowerData
111     RootType::DataType* rootData --+         |                  |
112                                              |                  +-- Node2::DataType* upperData
113                                              |
114                                              +-- RootType::DataType::Tile* tile
115 
116 */
117 
118 #ifndef NANOVDB_NANOVDB_H_HAS_BEEN_INCLUDED
119 #define NANOVDB_NANOVDB_H_HAS_BEEN_INCLUDED
120 
121 // NANOVDB_MAGIC_NUMBER is currently used for both grids and files (starting with v32.6.0)
122 // NANOVDB_MAGIC_GRID will soon be used exclusively for grids
123 // NANOVDB_MAGIC_FILE will soon be used exclusively for files
124 // NANOVDB_MAGIC_NODE will soon be used exclusively for NodeManager
125 // note: the trailing byte 0x30 is the ASCII code of the '0' in "NanoVDB0"
126 #define NANOVDB_MAGIC_NUMBER 0x304244566f6e614eUL // "NanoVDB0" in hex - little endian (uint64_t)
127 #define NANOVDB_MAGIC_GRID 0x314244566f6e614eUL // "NanoVDB1" in hex - little endian (uint64_t)
128 #define NANOVDB_MAGIC_FILE 0x324244566f6e614eUL // "NanoVDB2" in hex - little endian (uint64_t)
129 #define NANOVDB_MAGIC_NODE 0x334244566f6e614eUL // "NanoVDB3" in hex - little endian (uint64_t)
130 #define NANOVDB_MAGIC_MASK 0x00FFFFFFFFFFFFFFUL // use this mask to remove the number
131 //#define NANOVDB_USE_NEW_MAGIC_NUMBERS// used to enable use of the new magic numbers described above
132 
133 #define NANOVDB_MAJOR_VERSION_NUMBER 32 // reflects changes to the ABI and hence also the file format
134 #define NANOVDB_MINOR_VERSION_NUMBER 6 // reflects changes to the API but not ABI
135 #define NANOVDB_PATCH_VERSION_NUMBER 0 // reflects changes that do not affect the ABI or API
136 
137 #define TBB_SUPPRESS_DEPRECATED_MESSAGES 1
138 
139 // This replaces a Coord key at the root level with a single uint64_t
140 #define NANOVDB_USE_SINGLE_ROOT_KEY
141 
142 // This replaces three levels of Coord keys in the ReadAccessor with one Coord
143 //#define NANOVDB_USE_SINGLE_ACCESSOR_KEY
144 
145 // Use this to switch between std::ofstream or FILE implementations
146 //#define NANOVDB_USE_IOSTREAMS
147 
148 // Use this to switch between old and new accessor methods
149 #define NANOVDB_NEW_ACCESSOR_METHODS
150 
151 #define NANOVDB_FPN_BRANCHLESS
152 
153 // Do not change this value! 32 byte alignment is fixed in NanoVDB
154 #define NANOVDB_DATA_ALIGNMENT 32
155 
156 #if !defined(NANOVDB_ALIGN)
157 #define NANOVDB_ALIGN(n) alignas(n)
158 #endif // !defined(NANOVDB_ALIGN)
159 
160 #ifdef __CUDACC_RTC__
161 
162 typedef signed char int8_t;
163 typedef short int16_t;
164 typedef int int32_t;
165 typedef long long int64_t;
166 typedef unsigned char uint8_t;
167 typedef unsigned int uint32_t;
168 typedef unsigned short uint16_t;
169 typedef unsigned long long uint64_t;
170 
171 #define NANOVDB_ASSERT(x)
172 
173 #define UINT64_C(x) (x ## ULL)
174 
175 #else // !__CUDACC_RTC__
176 
177 #include <stdlib.h> // for abs in clang7
178 #include <stdint.h> // for types like int32_t etc
179 #include <stddef.h> // for size_t type
180 #include <cassert> // for assert
181 #include <cstdio> // for snprintf
182 #include <cmath> // for sqrt and fma
183 #include <limits> // for numeric_limits
184 #include <utility>// for std::move
185 #ifdef NANOVDB_USE_IOSTREAMS
186 #include <fstream>// for read/writeUncompressedGrids
187 #endif
188 // All asserts can be disabled here, even for debug builds
189 #if 1
190 #define NANOVDB_ASSERT(x) assert(x)
191 #else
192 #define NANOVDB_ASSERT(x)
193 #endif
194 
195 #if defined(NANOVDB_USE_INTRINSICS) && defined(_MSC_VER)
196 #include <intrin.h>
197 #pragma intrinsic(_BitScanReverse)
198 #pragma intrinsic(_BitScanForward)
199 #pragma intrinsic(_BitScanReverse64)
200 #pragma intrinsic(_BitScanForward64)
201 #endif
202 
203 #endif // __CUDACC_RTC__
204 
205 #if defined(__CUDACC__) || defined(__HIP__)
206 // Only define __hostdev__ when using NVIDIA CUDA or HIP compilers
207 #ifndef __hostdev__
208 #define __hostdev__ __host__ __device__ // Runs on the CPU and GPU, called from the CPU or the GPU
209 #endif
210 #else
211 // Dummy definitions of macros only defined by CUDA and HIP compilers
212 #ifndef __hostdev__
213 #define __hostdev__ // Runs on the CPU and GPU, called from the CPU or the GPU
214 #endif
215 #ifndef __global__
216 #define __global__ // Runs on the GPU, called from the CPU or the GPU
217 #endif
218 #ifndef __device__
219 #define __device__ // Runs on the GPU, called from the GPU
220 #endif
221 #ifndef __host__
222 #define __host__ // Runs on the CPU, called from the CPU
223 #endif
224 
225 #endif // if defined(__CUDACC__) || defined(__HIP__)
226 
227 // The following macro will suppress annoying warnings when nvcc
228 // compiles functions that call (host) intrinsics (which is perfectly valid)
229 #if defined(_MSC_VER) && defined(__CUDACC__)
230 #define NANOVDB_HOSTDEV_DISABLE_WARNING __pragma("hd_warning_disable")
231 #elif defined(__GNUC__) && defined(__CUDACC__)
232 #define NANOVDB_HOSTDEV_DISABLE_WARNING _Pragma("hd_warning_disable")
233 #else
234 #define NANOVDB_HOSTDEV_DISABLE_WARNING
235 #endif
236 
237 // Define compiler warnings that work with all compilers
238 //#if defined(_MSC_VER)
239 //#define NANO_WARNING(msg) _pragma("message" #msg)
240 //#else
241 //#define NANO_WARNING(msg) _Pragma("message" #msg)
242 //#endif
243 
244 // A portable implementation of offsetof - unfortunately it doesn't work with static_assert
245 #define NANOVDB_OFFSETOF(CLASS, MEMBER) ((int)(size_t)((char*)&((CLASS*)0)->MEMBER - (char*)0))
246 
247 namespace nanovdb {
248 
249 // --------------------------> Build types <------------------------------------
250 
251 /// @brief Dummy type for a voxel whose value equals an offset into an external value array
252 class ValueIndex{};
253 
254 /// @brief Dummy type for a voxel whose value equals an offset into an external value array of active values
255 class ValueOnIndex{};
256 
257 /// @brief Like @c ValueIndex but with a mutable mask
258 class ValueIndexMask{};
259 
260 /// @brief Like @c ValueOnIndex but with a mutable mask
261 class ValueOnIndexMask{};
262 
263 /// @brief Dummy type for a voxel whose value equals its binary active state
264 class ValueMask{};
265 
266 /// @brief Dummy type for a 16 bit floating point values (placeholder for IEEE 754 Half)
267 class Half{};
268 
269 /// @brief Dummy type for a 4bit quantization of float point values
270 class Fp4{};
271 
272 /// @brief Dummy type for a 8bit quantization of float point values
273 class Fp8{};
274 
275 /// @brief Dummy type for a 16bit quantization of float point values
276 class Fp16{};
277 
278 /// @brief Dummy type for a variable bit quantization of floating point values
279 class FpN{};
280 
281 /// @brief Dummy type for indexing points into voxels
282 class Point{};
283 
284 // --------------------------> GridType <------------------------------------
285 
286 /// @brief List of types that are currently supported by NanoVDB
287 ///
288 /// @note To expand on this list do:
289 /// 1) Add the new type between Unknown and End in the enum below
290 /// 2) Add the new type to OpenToNanoVDB::processGrid that maps OpenVDB types to GridType
291 /// 3) Verify that the ConvertTrait in NanoToOpenVDB.h works correctly with the new type
292 /// 4) Add the new type to mapToGridType (defined below) that maps NanoVDB types to GridType
293 /// 5) Add the new type to toStr (defined below)
294 enum class GridType : uint32_t { Unknown = 0, // unknown value type - should rarely be used
295  Float = 1, // single precision floating point value
296  Double = 2, // double precision floating point value
297  Int16 = 3, // half precision signed integer value
298  Int32 = 4, // single precision signed integer value
299  Int64 = 5, // double precision signed integer value
300  Vec3f = 6, // single precision floating 3D vector
301  Vec3d = 7, // double precision floating 3D vector
302  Mask = 8, // no value, just the active state
303  Half = 9, // half precision floating point value (placeholder for IEEE 754 Half)
304  UInt32 = 10, // single precision unsigned integer value
305  Boolean = 11, // boolean value, encoded in bit array
306  RGBA8 = 12, // RGBA packed into 32bit word in reverse-order, i.e. R is lowest byte.
307  Fp4 = 13, // 4bit quantization of floating point value
308  Fp8 = 14, // 8bit quantization of floating point value
309  Fp16 = 15, // 16bit quantization of floating point value
310  FpN = 16, // variable bit quantization of floating point value
311  Vec4f = 17, // single precision floating 4D vector
312  Vec4d = 18, // double precision floating 4D vector
313  Index = 19, // index into an external array of active and inactive values
314  OnIndex = 20, // index into an external array of active values
315  IndexMask = 21, // like Index but with a mutable mask
316  OnIndexMask = 22, // like OnIndex but with a mutable mask
317  PointIndex = 23, // voxels encode indices to co-located points
318  Vec3u8 = 24, // 8bit quantization of floating point 3D vector (only as blind data)
319  Vec3u16 = 25, // 16bit quantization of floating point 3D vector (only as blind data)
320  End = 26 }; // should never be used
321 
322 #ifndef __CUDACC_RTC__
323 /// @brief Maps a GridType to a c-string
324 /// @param gridType GridType to be mapped to a string
325 /// @return Returns a c-string used to describe a GridType
326 inline const char* toStr(GridType gridType)
327 {
328  static const char* LUT[] = {"?", "float", "double", "int16", "int32", "int64", "Vec3f", "Vec3d", "Mask", "Half",
329  "uint32", "bool", "RGBA8", "Float4", "Float8", "Float16", "FloatN", "Vec4f", "Vec4d",
330  "Index", "OnIndex", "IndexMask", "OnIndexMask", "PointIndex", "Vec3u8", "Vec3u16", "End"};
331  static_assert(sizeof(LUT) / sizeof(char*) - 1 == int(GridType::End), "Unexpected size of LUT");
332  return LUT[static_cast<int>(gridType)];
333 }
334 #endif
335 
336 // --------------------------> GridClass <------------------------------------
337 
338 /// @brief Classes (superset of OpenVDB) that are currently supported by NanoVDB
339 enum class GridClass : uint32_t { Unknown = 0,
340  LevelSet = 1, // narrow band level set, e.g. SDF
341  FogVolume = 2, // fog volume, e.g. density
342  Staggered = 3, // staggered MAC grid, e.g. velocity
343  PointIndex = 4, // point index grid
344  PointData = 5, // point data grid
345  Topology = 6, // grid with active states only (no values)
346  VoxelVolume = 7, // volume of geometric cubes, e.g. colors cubes in Minecraft
347  IndexGrid = 8, // grid whose values are offsets, e.g. into an external array
348  TensorGrid = 9, // Index grid for indexing learnable tensor features
349  End = 10 };
350 
351 #ifndef __CUDACC_RTC__
352 /// @brief Returns a c-string used to describe a GridClass
353 inline const char* toStr(GridClass gridClass)
354 {
355  static const char* LUT[] = {"?", "SDF", "FOG", "MAC", "PNTIDX", "PNTDAT", "TOPO", "VOX", "INDEX", "TENSOR", "END"};
356  static_assert(sizeof(LUT) / sizeof(char*) - 1 == int(GridClass::End), "Unexpected size of LUT");
357  return LUT[static_cast<int>(gridClass)];
358 }
359 #endif
360 
361 // --------------------------> GridFlags <------------------------------------
362 
363 /// @brief Grid flags which indicate what extra information is present in the grid buffer.
364 enum class GridFlags : uint32_t {
365  HasLongGridName = 1 << 0, // grid name is longer than 256 characters
366  HasBBox = 1 << 1, // nodes contain bounding-boxes of active values
367  HasMinMax = 1 << 2, // nodes contain min/max of active values
368  HasAverage = 1 << 3, // nodes contain averages of active values
369  HasStdDeviation = 1 << 4, // nodes contain standard deviations of active values
370  IsBreadthFirst = 1 << 5, // nodes are typically arranged breadth-first in memory
371  End = 1 << 6, // use End - 1 as a mask for the 5 lower bit flags
372 };
373 
374 #ifndef __CUDACC_RTC__
375 /// @brief Returns a c-string used to describe a GridFlags
376 inline const char* toStr(GridFlags gridFlags)
377 {
378  static const char* LUT[] = {"has long grid name",
379  "has bbox",
380  "has min/max",
381  "has average",
382  "has standard deviation",
383  "is breadth-first",
384  "end"};
385  static_assert(1 << (sizeof(LUT) / sizeof(char*) - 1) == int(GridFlags::End), "Unexpected size of LUT");
386  return LUT[static_cast<int>(gridFlags)];
387 }
388 #endif
389 
390 // --------------------------> GridBlindData enums <------------------------------------
391 
392 /// @brief Blind-data Classes that are currently supported by NanoVDB
393 enum class GridBlindDataClass : uint32_t { Unknown = 0,
394  IndexArray = 1,
395  AttributeArray = 2,
396  GridName = 3,
397  ChannelArray = 4,
398  End = 5 };
399 
400 /// @brief Blind-data Semantics that are currently understood by NanoVDB
401 enum class GridBlindDataSemantic : uint32_t { Unknown = 0,
402  PointPosition = 1, // 3D coordinates in an unknown space
403  PointColor = 2,
404  PointNormal = 3,
405  PointRadius = 4,
406  PointVelocity = 5,
407  PointId = 6,
408  WorldCoords = 7, // 3D coordinates in world space, e.g. (0.056, 0.8, 1.8)
409  GridCoords = 8, // 3D coordinates in grid space, e.g. (1.2, 4.0, 5.7), aka index-space
410  VoxelCoords = 9, // 3D coordinates in voxel space, e.g. (0.2, 0.0, 0.7)
411  End = 10 };
412 
413 // --------------------------> is_same <------------------------------------
414 
415 /// @brief C++11 implementation of std::is_same
416 /// @note When more than two arguments are provided value = T0==T1 || T0==T2 || ...
417 template<typename T0, typename T1, typename ...T>
418 struct is_same
419 {
420  static constexpr bool value = is_same<T0, T1>::value || is_same<T0, T...>::value;
421 };
422 
423 template<typename T0, typename T1>
424 struct is_same<T0, T1>
425 {
426  static constexpr bool value = false;
427 };
428 
429 template<typename T>
430 struct is_same<T, T>
431 {
432  static constexpr bool value = true;
433 };
434 
435 // --------------------------> is_floating_point <------------------------------------
436 
437 /// @brief C++11 implementation of std::is_floating_point
438 template<typename T>
439 struct is_floating_point
440 {
441  static constexpr bool value = is_same<T, float, double>::value;
442 };
443 
444 // --------------------------> BuildTraits <------------------------------------
445 
446 /// @brief Define static boolean tests for template build types
447 template<typename T>
448 struct BuildTraits
449 {
450  // check if T is an index type
451  static constexpr bool is_index = is_same<T, ValueIndex, ValueIndexMask, ValueOnIndex, ValueOnIndexMask>::value;
452  static constexpr bool is_onindex = is_same<T, ValueOnIndex, ValueOnIndexMask>::value;
453  static constexpr bool is_offindex = is_same<T, ValueIndex, ValueIndexMask>::value;
454  static constexpr bool is_indexmask = is_same<T, ValueIndexMask, ValueOnIndexMask>::value;
455  // check if T is a compressed float type with fixed bit precision
456  static constexpr bool is_FpX = is_same<T, Fp4, Fp8, Fp16>::value;
457  // check if T is a compressed float type with fixed or variable bit precision
458  static constexpr bool is_Fp = is_same<T, Fp4, Fp8, Fp16, FpN>::value;
459  // check if T is a POD float type, i.e. float or double
460  static constexpr bool is_float = is_floating_point<T>::value;
461  // check if T is a template specialization of LeafData<T>, i.e. has T mValues[512]
462  static constexpr bool is_special = is_index || is_Fp || is_same<T, Point, bool, ValueMask>::value;
463 }; // BuildTraits
464 
465 // --------------------------> enable_if <------------------------------------
466 
467 /// @brief C++11 implementation of std::enable_if
468 template <bool, typename T = void>
469 struct enable_if
470 {
471 };
472 
473 template <typename T>
474 struct enable_if<true, T>
475 {
476  using type = T;
477 };
478 
479 // --------------------------> disable_if <------------------------------------
480 
481 template<bool, typename T = void>
482 struct disable_if
483 {
484  typedef T type;
485 };
486 
487 template<typename T>
488 struct disable_if<true, T>
489 {
490 };
491 
492 // --------------------------> is_const <------------------------------------
493 
494 template<typename T>
495 struct is_const
496 {
497  static constexpr bool value = false;
498 };
499 
500 template<typename T>
501 struct is_const<const T>
502 {
503  static constexpr bool value = true;
504 };
505 
506 // --------------------------> is_pointer <------------------------------------
507 
508 /// @brief Trait used to identify template parameters that are pointers
509 /// @tparam T Template parameter to be tested
510 template<class T>
511 struct is_pointer
512 {
513  static constexpr bool value = false;
514 };
515 
516 /// @brief Template specialization of non-const pointers
517 /// @tparam T Template parameter to be tested
518 template<class T>
519 struct is_pointer<T*>
520 {
521  static constexpr bool value = true;
522 };
523 
524 /// @brief Template specialization of const pointers
525 /// @tparam T Template parameter to be tested
526 template<class T>
527 struct is_pointer<const T*>
528 {
529  static constexpr bool value = true;
530 };
531 
532 // --------------------------> remove_const <------------------------------------
533 
534 /// @brief Trait used to remove the const qualifier from a type. Default implementation is just a pass-through
535 /// @tparam T Type
536 /// @details remove_const<float>::type = float
537 template<typename T>
538 struct remove_const
539 {
540  using type = T;
541 };
542 
543 /// @brief Template specialization of trait class used to remove the const qualifier from a type
544 /// @tparam T Type of the const type
545 /// @details remove_const<const float>::type = float
546 template<typename T>
547 struct remove_const<const T>
548 {
549  using type = T;
550 };
551 
552 // --------------------------> remove_reference <------------------------------------
553 
554 /// @brief Trait used to remove the reference qualifier, i.e. "&", from a type. Default implementation is just a pass-through
555 /// @tparam T Type
556 /// @details remove_reference<float>::type = float
557 template <typename T>
558 struct remove_reference {using type = T;};
559 
560 /// @brief Template specialization of trait class used to remove the reference, i.e. "&", qualifier from a type
561 /// @tparam T Type of the reference
562 /// @details remove_reference<float&>::type = float
563 template <typename T>
564 struct remove_reference<T&> {using type = T;};
565 
566 // --------------------------> remove_pointer <------------------------------------
567 
568 /// @brief Trait used to remove the pointer qualifier, i.e. "*", from a type. Default implementation is just a pass-through
569 /// @tparam T Type
570 /// @details remove_pointer<float>::type = float
571 template <typename T>
572 struct remove_pointer {using type = T;};
573 
574 /// @brief Template specialization of trait class used to remove the pointer, i.e. "*", qualifier from a type
575 /// @tparam T Type of the pointer
576 /// @details remove_pointer<float*>::type = float
577 template <typename T>
578 struct remove_pointer<T*> {using type = T;};
579 
580 // --------------------------> match_const <------------------------------------
581 
582 /// @brief Trait used to transfer the const-ness of a reference type to another type
583 /// @tparam T Type whose const-ness needs to match the reference type
584 /// @tparam ReferenceT Reference type that is not const
585 /// @details match_const<const int, float>::type = int
586 /// match_const<int, float>::type = int
587 template<typename T, typename ReferenceT>
588 struct match_const
589 {
590  using type = typename remove_const<T>::type;
591 };
592 
593 /// @brief Template specialization used to transfer the const-ness of a reference type to another type
594 /// @tparam T Type that will adopt the const-ness of the reference type
595 /// @tparam ReferenceT Reference type that is const
596 /// @details match_const<const int, const float>::type = const int
597 /// match_const<int, const float>::type = const int
598 template<typename T, typename ReferenceT>
599 struct match_const<T, const ReferenceT>
600 {
601  using type = const typename remove_const<T>::type;
602 };
603 
604 // --------------------------> is_specialization <------------------------------------
605 
606 /// @brief Metafunction used to determine if the first template
607 /// parameter is a specialization of the class template
608 /// given in the second template parameter.
609 ///
610 /// @details is_specialization<Vec3<float>, Vec3>::value == true;
611 /// is_specialization<Vec3f, Vec3>::value == true;
612 /// is_specialization<std::vector<float>, std::vector>::value == true;
613 template<typename AnyType, template<typename...> class TemplateType>
614 struct is_specialization
615 {
616  static const bool value = false;
617 };
618 template<typename... Args, template<typename...> class TemplateType>
619 struct is_specialization<TemplateType<Args...>, TemplateType>
620 {
621  static const bool value = true;
622 };
623 
624 // --------------------------> BuildToValueMap <------------------------------------
625 
626 /// @brief Maps one type (e.g. the build types above) to other (actual) types
627 template<typename T>
628 struct BuildToValueMap
629 {
630  using Type = T;
631  using type = T;
632 };
633 
634 template<>
635 struct BuildToValueMap<ValueIndex>
636 {
637  using Type = uint64_t;
638  using type = uint64_t;
639 };
640 
641 template<>
642 struct BuildToValueMap<ValueOnIndex>
643 {
644  using Type = uint64_t;
645  using type = uint64_t;
646 };
647 
648 template<>
649 struct BuildToValueMap<ValueIndexMask>
650 {
651  using Type = uint64_t;
652  using type = uint64_t;
653 };
654 
655 template<>
656 struct BuildToValueMap<ValueOnIndexMask>
657 {
658  using Type = uint64_t;
659  using type = uint64_t;
660 };
661 
662 template<>
663 struct BuildToValueMap<ValueMask>
664 {
665  using Type = bool;
666  using type = bool;
667 };
668 
669 template<>
670 struct BuildToValueMap<Half>
671 {
672  using Type = float;
673  using type = float;
674 };
675 
676 template<>
677 struct BuildToValueMap<Fp4>
678 {
679  using Type = float;
680  using type = float;
681 };
682 
683 template<>
684 struct BuildToValueMap<Fp8>
685 {
686  using Type = float;
687  using type = float;
688 };
689 
690 template<>
691 struct BuildToValueMap<Fp16>
692 {
693  using Type = float;
694  using type = float;
695 };
696 
697 template<>
698 struct BuildToValueMap<FpN>
699 {
700  using Type = float;
701  using type = float;
702 };
703 
704 template<>
705 struct BuildToValueMap<Point>
706 {
707  using Type = uint64_t;
708  using type = uint64_t;
709 };
710 
711 // --------------------------> utility functions related to alignment <------------------------------------
712 
713 /// @brief return true if the specified pointer is aligned
714 __hostdev__ inline static bool isAligned(const void* p)
715 {
716  return uint64_t(p) % NANOVDB_DATA_ALIGNMENT == 0;
717 }
718 
719 /// @brief return true if the specified pointer is aligned and not NULL
720 __hostdev__ inline static bool isValid(const void* p)
721 {
722  return p != nullptr && uint64_t(p) % NANOVDB_DATA_ALIGNMENT == 0;
723 }
724 
725 /// @brief return the smallest number of bytes that when added to the specified pointer results in an aligned pointer
726 __hostdev__ inline static uint64_t alignmentPadding(const void* p)
727 {
728  NANOVDB_ASSERT(p);
729  return (NANOVDB_DATA_ALIGNMENT - (uint64_t(p) % NANOVDB_DATA_ALIGNMENT)) % NANOVDB_DATA_ALIGNMENT;
730 }
731 
732 /// @brief offset the specified pointer so it is aligned.
733 template <typename T>
734 __hostdev__ inline static T* alignPtr(T* p)
735 {
736  NANOVDB_ASSERT(p);
737  return reinterpret_cast<T*>( (uint8_t*)p + alignmentPadding(p) );
738 }
739 
740 /// @brief offset the specified const pointer so it is aligned.
741 template <typename T>
742 __hostdev__ inline static const T* alignPtr(const T* p)
743 {
744  NANOVDB_ASSERT(p);
745  return reinterpret_cast<const T*>( (const uint8_t*)p + alignmentPadding(p) );
746 }
747 
748 // --------------------------> PtrDiff <------------------------------------
749 
750 /// @brief Compute the distance, in bytes, between two pointers
751 /// @tparam T1 Type of the first pointer
752 /// @tparam T2 Type of the second pointer
753 /// @param p first pointer, assumed to NOT be NULL
754 /// @param q second pointer, assumed to NOT be NULL
755 /// @return signed distance between pointer addresses in units of bytes
756 template<typename T1, typename T2>
757 __hostdev__ inline static int64_t PtrDiff(const T1* p, const T2* q)
758 {
759  NANOVDB_ASSERT(p && q);
760  return reinterpret_cast<const char*>(p) - reinterpret_cast<const char*>(q);
761 }
762 
763 // --------------------------> PtrAdd <------------------------------------
764 
765 /// @brief Adds a byte offset of a non-const pointer to produce another non-const pointer
766 /// @tparam DstT Type of the return pointer
767 /// @tparam SrcT Type of the input pointer
768 /// @param p non-const input pointer, assumed to NOT be NULL
769 /// @param offset signed byte offset
770 /// @return a non-const pointer defined as the offset of an input pointer
771 template<typename DstT, typename SrcT>
772 __hostdev__ inline static DstT* PtrAdd(SrcT* p, int64_t offset)
773 {
774  NANOVDB_ASSERT(p);
775  return reinterpret_cast<DstT*>(reinterpret_cast<char*>(p) + offset);
776 }
777 
778 /// @brief Adds a byte offset of a const pointer to produce another const pointer
779 /// @tparam DstT Type of the return pointer
780 /// @tparam SrcT Type of the input pointer
781 /// @param p const input pointer, assumed to NOT be NULL
782 /// @param offset signed byte offset
783 /// @return a const pointer defined as the offset of a const input pointer
784 template<typename DstT, typename SrcT>
785 __hostdev__ inline static const DstT* PtrAdd(const SrcT* p, int64_t offset)
786 {
787  NANOVDB_ASSERT(p);
788  return reinterpret_cast<const DstT*>(reinterpret_cast<const char*>(p) + offset);
789 }
790 
791 // --------------------------> isFloatingPoint(GridType) <------------------------------------
792 
793 /// @brief return true if the GridType maps to a floating point type
794 __hostdev__ inline bool isFloatingPoint(GridType gridType)
795 {
796  return gridType == GridType::Float ||
797  gridType == GridType::Double ||
798  gridType == GridType::Half ||
799  gridType == GridType::Fp4 ||
800  gridType == GridType::Fp8 ||
801  gridType == GridType::Fp16 ||
802  gridType == GridType::FpN;
803 }
804 
805 // --------------------------> isFloatingPointVector(GridType) <------------------------------------
806 
807 /// @brief return true if the GridType maps to a floating point vector (vec3 or vec4).
808 __hostdev__ inline bool isFloatingPointVector(GridType gridType)
809 {
810  return gridType == GridType::Vec3f ||
811  gridType == GridType::Vec3d ||
812  gridType == GridType::Vec4f ||
813  gridType == GridType::Vec4d;
814 }
815 
816 // --------------------------> isInteger(GridType) <------------------------------------
817 
818 /// @brief Return true if the GridType maps to a POD integer type.
819 /// @details These types are used to associate a voxel with a POD integer type
820 __hostdev__ inline bool isInteger(GridType gridType)
821 {
822  return gridType == GridType::Int16 ||
823  gridType == GridType::Int32 ||
824  gridType == GridType::Int64 ||
825  gridType == GridType::UInt32;
826 }
827 
828 // --------------------------> isIndex(GridType) <------------------------------------
829 
830 /// @brief Return true if the GridType maps to a special index type (not a POD integer type).
831 /// @details These types are used to index from a voxel into an external array of values, e.g. sidecar or blind data.
832 __hostdev__ inline bool isIndex(GridType gridType)
833 {
834  return gridType == GridType::Index ||// index both active and inactive values
835  gridType == GridType::OnIndex ||// index active values only
836  gridType == GridType::IndexMask ||// as Index, but with an additional mask
837  gridType == GridType::OnIndexMask;// as OnIndex, but with an additional mask
838 }
839 
840 // --------------------------> memcpy64 <------------------------------------
841 
842 /// @brief copy 64 bit words from @c src to @c dst
843 /// @param dst 64 bit aligned pointer to destination
844 /// @param src 64 bit aligned pointer to source
845 /// @param word_count number of 64 bit words to be copied
846 /// @return destination pointer @c dst
847 /// @warning @c src and @c dst cannot overlap and should both be 64 bit aligned
848 __hostdev__ inline static void* memcpy64(void *dst, const void *src, size_t word_count)
849 {
850  NANOVDB_ASSERT(uint64_t(dst) % 8 == 0 && uint64_t(src) % 8 == 0);
851  auto *d = reinterpret_cast<uint64_t*>(dst), *e = d + word_count;
852  auto *s = reinterpret_cast<const uint64_t*>(src);
853  while (d != e) *d++ = *s++;
854  return dst;
855 }
856 
857 // --------------------------> isValid(GridType, GridClass) <------------------------------------
858 
859 /// @brief return true if the combination of GridType and GridClass is valid.
860 __hostdev__ inline bool isValid(GridType gridType, GridClass gridClass)
861 {
862  if (gridClass == GridClass::LevelSet || gridClass == GridClass::FogVolume) {
863  return isFloatingPoint(gridType);
864  } else if (gridClass == GridClass::Staggered) {
865  return isFloatingPointVector(gridType);
866  } else if (gridClass == GridClass::PointIndex || gridClass == GridClass::PointData) {
867  return gridType == GridType::PointIndex || gridType == GridType::UInt32;
868  } else if (gridClass == GridClass::Topology) {
869  return gridType == GridType::Mask;
870  } else if (gridClass == GridClass::IndexGrid) {
871  return isIndex(gridType);
872  } else if (gridClass == GridClass::VoxelVolume) {
873  return gridType == GridType::RGBA8 || gridType == GridType::Float ||
874  gridType == GridType::Double || gridType == GridType::Vec3f ||
875  gridType == GridType::Vec3d || gridType == GridType::UInt32;
876  }
877  return gridClass < GridClass::End && gridType < GridType::End; // any valid combination
878 }
879 
880 // --------------------------> validation of blind data meta data <------------------------------------
881 
882 /// @brief return true if the combination of GridBlindDataClass, GridBlindDataSemantic and GridType is valid.
883 __hostdev__ inline bool isValid(const GridBlindDataClass& blindClass,
884  const GridBlindDataSemantic& blindSemantics,
885  const GridType& blindType)
886 {
887  bool test = false;
888  switch (blindClass) {
889  case GridBlindDataClass::IndexArray:
890  test = (blindSemantics == GridBlindDataSemantic::Unknown ||
891  blindSemantics == GridBlindDataSemantic::PointId) &&
892  isInteger(blindType);
893  break;
894  case GridBlindDataClass::AttributeArray:
895  if (blindSemantics == GridBlindDataSemantic::PointPosition ||
896  blindSemantics == GridBlindDataSemantic::WorldCoords) {
897  test = blindType == GridType::Vec3f || blindType == GridType::Vec3d;
898  } else if (blindSemantics == GridBlindDataSemantic::GridCoords) {
899  test = blindType == GridType::Vec3f;
900  } else if (blindSemantics == GridBlindDataSemantic::VoxelCoords) {
901  test = blindType == GridType::Vec3f || blindType == GridType::Vec3u8 || blindType == GridType::Vec3u16;
902  } else {
903  test = blindSemantics != GridBlindDataSemantic::PointId;
904  }
905  break;
906  case GridBlindDataClass::GridName:
907  test = blindSemantics == GridBlindDataSemantic::Unknown && blindType == GridType::Unknown;
908  break;
909  default: // captures blindClass == Unknown and ChannelArray
910  test = blindClass < GridBlindDataClass::End &&
911  blindSemantics < GridBlindDataSemantic::End &&
912  blindType < GridType::End; // any valid combination
913  break;
914  }
915  //if (!test) printf("Invalid combination: GridBlindDataClass=%u, GridBlindDataSemantic=%u, GridType=%u\n",(uint32_t)blindClass, (uint32_t)blindSemantics, (uint32_t)blindType);
916  return test;
917 }
918 
919 // ----------------------------> Version class <-------------------------------------
920 
921 /// @brief Bit-compacted representation of all three version numbers
922 ///
923 /// @details major is the top 11 bits, minor is the 11 middle bits and patch is the lower 10 bits
924 class Version
925 {
926  uint32_t mData; // 11 + 11 + 10 bit packing of major + minor + patch
927 public:
928  /// @brief Default constructor
929  __hostdev__ Version()
930  : mData(uint32_t(NANOVDB_MAJOR_VERSION_NUMBER) << 21 |
931  uint32_t(NANOVDB_MINOR_VERSION_NUMBER) << 10 |
932  uint32_t(NANOVDB_PATCH_VERSION_NUMBER))
933  {
934  }
935  /// @brief Constructor from a raw uint32_t data representation
936  __hostdev__ Version(uint32_t data) : mData(data) {}
937  /// @brief Constructor from major.minor.patch version numbers
938  __hostdev__ Version(uint32_t major, uint32_t minor, uint32_t patch)
939  : mData(major << 21 | minor << 10 | patch)
940  {
941  NANOVDB_ASSERT(major < (1u << 11)); // max value of major is 2047
942  NANOVDB_ASSERT(minor < (1u << 11)); // max value of minor is 2047
943  NANOVDB_ASSERT(patch < (1u << 10)); // max value of patch is 1023
944  }
945  __hostdev__ bool operator==(const Version& rhs) const { return mData == rhs.mData; }
946  __hostdev__ bool operator<( const Version& rhs) const { return mData < rhs.mData; }
947  __hostdev__ bool operator<=(const Version& rhs) const { return mData <= rhs.mData; }
948  __hostdev__ bool operator>( const Version& rhs) const { return mData > rhs.mData; }
949  __hostdev__ bool operator>=(const Version& rhs) const { return mData >= rhs.mData; }
950  __hostdev__ uint32_t id() const { return mData; }
951  __hostdev__ uint32_t getMajor() const { return (mData >> 21) & ((1u << 11) - 1); }
952  __hostdev__ uint32_t getMinor() const { return (mData >> 10) & ((1u << 11) - 1); }
953  __hostdev__ uint32_t getPatch() const { return mData & ((1u << 10) - 1); }
954  __hostdev__ bool isCompatible() const { return this->getMajor() == uint32_t(NANOVDB_MAJOR_VERSION_NUMBER); }
955  /// @brief Returns the difference between major version of this instance and NANOVDB_MAJOR_VERSION_NUMBER
956  /// @return return 0 if the major version equals NANOVDB_MAJOR_VERSION_NUMBER, else a negative age if this
957  /// instance has a smaller major version (is older), and a positive age if it is newer, i.e. larger.
958  __hostdev__ int age() const {return int(this->getMajor()) - int(NANOVDB_MAJOR_VERSION_NUMBER);}
959 
960 #ifndef __CUDACC_RTC__
961  /// @brief returns a c-string of the semantic version, i.e. major.minor.patch
962  const char* c_str() const
963  {
964  char* buffer = (char*)malloc(4 + 1 + 4 + 1 + 4 + 1); // xxxx.xxxx.xxxx\0
965  snprintf(buffer, 4 + 1 + 4 + 1 + 4 + 1, "%u.%u.%u", this->getMajor(), this->getMinor(), this->getPatch()); // Prevents overflows by enforcing a fixed size of buffer
966  return buffer;
967  }
968 #endif
969 }; // Version
970 
971 // ----------------------------> Various math functions <-------------------------------------
972 
973 //@{
974 /// @brief Pi constant taken from Boost to match old behaviour
975 template<typename T>
976 inline __hostdev__ constexpr T pi()
977 {
978  return 3.141592653589793238462643383279502884e+00;
979 }
980 template<>
981 inline __hostdev__ constexpr float pi()
982 {
983  return 3.141592653589793238462643383279502884e+00F;
984 }
985 template<>
986 inline __hostdev__ constexpr double pi()
987 {
988  return 3.141592653589793238462643383279502884e+00;
989 }
990 template<>
991 inline __hostdev__ constexpr long double pi()
992 {
993  return 3.141592653589793238462643383279502884e+00L;
994 }
995 //@}
996 
997 //@{
998 /// Tolerance for floating-point comparison
999 template<typename T>
1000 struct Tolerance;
1001 template<>
1002 struct Tolerance<float>
1003 {
1004  __hostdev__ static float value() { return 1e-8f; }
1005 };
1006 template<>
1007 struct Tolerance<double>
1008 {
1009  __hostdev__ static double value() { return 1e-15; }
1010 };
1011 //@}
1012 
1013 //@{
1014 /// Delta for small floating-point offsets
1015 template<typename T>
1016 struct Delta;
1017 template<>
1018 struct Delta<float>
1019 {
1020  __hostdev__ static float value() { return 1e-5f; }
1021 };
1022 template<>
1023 struct Delta<double>
1024 {
1025  __hostdev__ static double value() { return 1e-9; }
1026 };
1027 //@}
1028 
1029 //@{
1030 /// Maximum floating-point values
1031 template<typename T>
1032 struct Maximum;
1033 #if defined(__CUDA_ARCH__) || defined(__HIP__)
1034 template<>
1035 struct Maximum<int>
1036 {
1037  __hostdev__ static int value() { return 2147483647; }
1038 };
1039 template<>
1040 struct Maximum<uint32_t>
1041 {
1042  __hostdev__ static uint32_t value() { return 4294967295u; }
1043 };
1044 template<>
1045 struct Maximum<float>
1046 {
1047  __hostdev__ static float value() { return 1e+38f; }
1048 };
1049 template<>
1050 struct Maximum<double>
1051 {
1052  __hostdev__ static double value() { return 1e+308; }
1053 };
1054 #else
1055 template<typename T>
1056 struct Maximum
1057 {
1058  static T value() { return std::numeric_limits<T>::max(); }
1059 };
1060 #endif
1061 //@}
1062 
1063 template<typename Type>
1064 __hostdev__ inline bool isApproxZero(const Type& x)
1065 {
1066  return !(x > Tolerance<Type>::value()) && !(x < -Tolerance<Type>::value());
1067 }
1068 
1069 template<typename Type>
1070 __hostdev__ inline Type Min(Type a, Type b)
1071 {
1072  return (a < b) ? a : b;
1073 }
1074 __hostdev__ inline int32_t Min(int32_t a, int32_t b)
1075 {
1076  return int32_t(fminf(float(a), float(b)));
1077 }
1078 __hostdev__ inline uint32_t Min(uint32_t a, uint32_t b)
1079 {
1080  return uint32_t(fminf(float(a), float(b)));
1081 }
1082 __hostdev__ inline float Min(float a, float b)
1083 {
1084  return fminf(a, b);
1085 }
1086 __hostdev__ inline double Min(double a, double b)
1087 {
1088  return fmin(a, b);
1089 }
1090 template<typename Type>
1091 __hostdev__ inline Type Max(Type a, Type b)
1092 {
1093  return (a > b) ? a : b;
1094 }
1095 
1096 __hostdev__ inline int32_t Max(int32_t a, int32_t b)
1097 {
1098  return int32_t(fmaxf(float(a), float(b)));
1099 }
1100 __hostdev__ inline uint32_t Max(uint32_t a, uint32_t b)
1101 {
1102  return uint32_t(fmaxf(float(a), float(b)));
1103 }
1104 __hostdev__ inline float Max(float a, float b)
1105 {
1106  return fmaxf(a, b);
1107 }
1108 __hostdev__ inline double Max(double a, double b)
1109 {
1110  return fmax(a, b);
1111 }
1112 __hostdev__ inline float Clamp(float x, float a, float b)
1113 {
1114  return Max(Min(x, b), a);
1115 }
1116 __hostdev__ inline double Clamp(double x, double a, double b)
1117 {
1118  return Max(Min(x, b), a);
1119 }
1120 
1121 __hostdev__ inline float Fract(float x)
1122 {
1123  return x - floorf(x);
1124 }
1125 __hostdev__ inline double Fract(double x)
1126 {
1127  return x - floor(x);
1128 }
1129 
1130 __hostdev__ inline int32_t Floor(float x)
1131 {
1132  return int32_t(floorf(x));
1133 }
1134 __hostdev__ inline int32_t Floor(double x)
1135 {
1136  return int32_t(floor(x));
1137 }
1138 
1139 __hostdev__ inline int32_t Ceil(float x)
1140 {
1141  return int32_t(ceilf(x));
1142 }
1143 __hostdev__ inline int32_t Ceil(double x)
1144 {
1145  return int32_t(ceil(x));
1146 }
1147 
1148 template<typename T>
1149 __hostdev__ inline T Pow2(T x)
1150 {
1151  return x * x;
1152 }
1153 
1154 template<typename T>
1155 __hostdev__ inline T Pow3(T x)
1156 {
1157  return x * x * x;
1158 }
1159 
1160 template<typename T>
1161 __hostdev__ inline T Pow4(T x)
1162 {
1163  return Pow2(x * x);
1164 }
1165 template<typename T>
1166 __hostdev__ inline T Abs(T x)
1167 {
1168  return x < 0 ? -x : x;
1169 }
1170 
1171 template<>
1172 __hostdev__ inline float Abs(float x)
1173 {
1174  return fabsf(x);
1175 }
1176 
1177 template<>
1178 __hostdev__ inline double Abs(double x)
1179 {
1180  return fabs(x);
1181 }
1182 
1183 template<>
1184 __hostdev__ inline int Abs(int x)
1185 {
1186  return abs(x);
1187 }
1188 
1189 template<typename CoordT, typename RealT, template<typename> class Vec3T>
1190 __hostdev__ inline CoordT Round(const Vec3T<RealT>& xyz);
1191 
1192 template<typename CoordT, template<typename> class Vec3T>
1193 __hostdev__ inline CoordT Round(const Vec3T<float>& xyz)
1194 {
1195  return CoordT(int32_t(rintf(xyz[0])), int32_t(rintf(xyz[1])), int32_t(rintf(xyz[2])));
1196  //return CoordT(int32_t(roundf(xyz[0])), int32_t(roundf(xyz[1])), int32_t(roundf(xyz[2])) );
1197  //return CoordT(int32_t(floorf(xyz[0] + 0.5f)), int32_t(floorf(xyz[1] + 0.5f)), int32_t(floorf(xyz[2] + 0.5f)));
1198 }
1199 
1200 template<typename CoordT, template<typename> class Vec3T>
1201 __hostdev__ inline CoordT Round(const Vec3T<double>& xyz)
1202 {
1203  return CoordT(int32_t(floor(xyz[0] + 0.5)), int32_t(floor(xyz[1] + 0.5)), int32_t(floor(xyz[2] + 0.5)));
1204 }
1205 
1206 template<typename CoordT, typename RealT, template<typename> class Vec3T>
1207 __hostdev__ inline CoordT RoundDown(const Vec3T<RealT>& xyz)
1208 {
1209  return CoordT(Floor(xyz[0]), Floor(xyz[1]), Floor(xyz[2]));
1210 }
1211 
1212 //@{
1213 /// Return the square root of a floating-point value.
1214 __hostdev__ inline float Sqrt(float x)
1215 {
1216  return sqrtf(x);
1217 }
1218 __hostdev__ inline double Sqrt(double x)
1219 {
1220  return sqrt(x);
1221 }
1222 //@}
1223 
1224 /// Return the sign of the given value as an integer (either -1, 0 or 1).
1225 template<typename T>
1226 __hostdev__ inline T Sign(const T& x)
1227 {
1228  return ((T(0) < x) ? T(1) : T(0)) - ((x < T(0)) ? T(1) : T(0));
1229 }
1230 
1231 template<typename Vec3T>
1232 __hostdev__ inline int MinIndex(const Vec3T& v)
1233 {
1234 #if 0
1235  static const int hashTable[8] = {2, 1, 9, 1, 2, 9, 0, 0}; //9 are dummy values
1236  const int hashKey = ((v[0] < v[1]) << 2) + ((v[0] < v[2]) << 1) + (v[1] < v[2]); // ?*4+?*2+?*1
1237  return hashTable[hashKey];
1238 #else
1239  if (v[0] < v[1] && v[0] < v[2])
1240  return 0;
1241  if (v[1] < v[2])
1242  return 1;
1243  else
1244  return 2;
1245 #endif
1246 }
1247 
1248 template<typename Vec3T>
1249 __hostdev__ inline int MaxIndex(const Vec3T& v)
1250 {
1251 #if 0
1252  static const int hashTable[8] = {2, 1, 9, 1, 2, 9, 0, 0}; //9 are dummy values
1253  const int hashKey = ((v[0] > v[1]) << 2) + ((v[0] > v[2]) << 1) + (v[1] > v[2]); // ?*4+?*2+?*1
1254  return hashTable[hashKey];
1255 #else
1256  if (v[0] > v[1] && v[0] > v[2])
1257  return 0;
1258  if (v[1] > v[2])
1259  return 1;
1260  else
1261  return 2;
1262 #endif
1263 }
1264 
1265 /// @brief round up byteCount to the nearest multiple of wordSize, e.g. to align to a machine word: AlignUp<sizeof(size_t)>(n)
1266 ///
1267 /// @details both wordSize and byteSize are in byte units
1268 template<uint64_t wordSize>
1269 __hostdev__ inline uint64_t AlignUp(uint64_t byteCount)
1270 {
1271  const uint64_t r = byteCount % wordSize;
1272  return r ? byteCount - r + wordSize : byteCount;
1273 }
1274 
1275 // ------------------------------> Coord <--------------------------------------
1276 
1277 // forward declaration so we can define Coord::asVec3s and Coord::asVec3d
1278 template<typename>
1279 class Vec3;
1280 
1281 /// @brief Signed (i, j, k) 32-bit integer coordinate class, similar to openvdb::math::Coord
1282 class Coord
1283 {
1284  int32_t mVec[3]; // private member data - three signed index coordinates
1285 public:
1286  using ValueType = int32_t;
1287  using IndexType = uint32_t;
1288 
1289  /// @brief Initialize all coordinates to zero.
1290  __hostdev__ Coord()
1291  : mVec{0, 0, 0}
1292  {
1293  }
1294 
1295  /// @brief Initializes all coordinates to the given signed integer.
1296  __hostdev__ explicit Coord(ValueType n)
1297  : mVec{n, n, n}
1298  {
1299  }
1300 
1301  /// @brief Initializes coordinate to the given signed integers.
1302  __hostdev__ Coord(ValueType i, ValueType j, ValueType k)
1303  : mVec{i, j, k}
1304  {
1305  }
1306 
1307  __hostdev__ Coord(ValueType* ptr)
1308  : mVec{ptr[0], ptr[1], ptr[2]}
1309  {
1310  }
1311 
1312  __hostdev__ int32_t x() const { return mVec[0]; }
1313  __hostdev__ int32_t y() const { return mVec[1]; }
1314  __hostdev__ int32_t z() const { return mVec[2]; }
1315 
1316  __hostdev__ int32_t& x() { return mVec[0]; }
1317  __hostdev__ int32_t& y() { return mVec[1]; }
1318  __hostdev__ int32_t& z() { return mVec[2]; }
1319 
1320  __hostdev__ static Coord max() { return Coord(int32_t((1u << 31) - 1)); }
1321 
1322  __hostdev__ static Coord min() { return Coord(-int32_t((1u << 31) - 1) - 1); }
1323 
1324  __hostdev__ static size_t memUsage() { return sizeof(Coord); }
1325 
1326  /// @brief Return a const reference to the given Coord component.
1327  /// @warning The argument is assumed to be 0, 1, or 2.
1328  __hostdev__ const ValueType& operator[](IndexType i) const { return mVec[i]; }
1329 
1330  /// @brief Return a non-const reference to the given Coord component.
1331  /// @warning The argument is assumed to be 0, 1, or 2.
1332  __hostdev__ ValueType& operator[](IndexType i) { return mVec[i]; }
1333 
1334  /// @brief Assignment operator that works with openvdb::Coord
1335  template<typename CoordT>
1336  __hostdev__ Coord& operator=(const CoordT& other)
1337  {
1338  static_assert(sizeof(Coord) == sizeof(CoordT), "Mis-matched sizeof");
1339  mVec[0] = other[0];
1340  mVec[1] = other[1];
1341  mVec[2] = other[2];
1342  return *this;
1343  }
1344 
1345  /// @brief Return a new instance with coordinates masked by the given unsigned integer.
1346  __hostdev__ Coord operator&(IndexType n) const { return Coord(mVec[0] & n, mVec[1] & n, mVec[2] & n); }
1347 
1348  /// @brief Return a new instance with coordinates left-shifted by the given unsigned integer.
1349  __hostdev__ Coord operator<<(IndexType n) const { return Coord(mVec[0] << n, mVec[1] << n, mVec[2] << n); }
1350 
1351  /// @brief Return a new instance with coordinates right-shifted by the given unsigned integer.
1352  __hostdev__ Coord operator>>(IndexType n) const { return Coord(mVec[0] >> n, mVec[1] >> n, mVec[2] >> n); }
1353 
1354  /// @brief Return true if this Coord is lexicographically less than the given Coord.
1355  __hostdev__ bool operator<(const Coord& rhs) const
1356  {
1357  return mVec[0] < rhs[0] ? true
1358  : mVec[0] > rhs[0] ? false
1359  : mVec[1] < rhs[1] ? true
1360  : mVec[1] > rhs[1] ? false
1361  : mVec[2] < rhs[2] ? true : false;
1362  }
1363 
1364  /// @brief Return true if this Coord is lexicographically less or equal to the given Coord.
1365  __hostdev__ bool operator<=(const Coord& rhs) const
1366  {
1367  return mVec[0] < rhs[0] ? true
1368  : mVec[0] > rhs[0] ? false
1369  : mVec[1] < rhs[1] ? true
1370  : mVec[1] > rhs[1] ? false
1371  : mVec[2] <=rhs[2] ? true : false;
1372  }
1373 
1374  /// @brief Return true if the Coord components are identical.
1375  __hostdev__ bool operator==(const Coord& rhs) const { return mVec[0] == rhs[0] && mVec[1] == rhs[1] && mVec[2] == rhs[2]; }
1376  __hostdev__ bool operator!=(const Coord& rhs) const { return mVec[0] != rhs[0] || mVec[1] != rhs[1] || mVec[2] != rhs[2]; }
1377  __hostdev__ Coord& operator&=(int n)
1378  {
1379  mVec[0] &= n;
1380  mVec[1] &= n;
1381  mVec[2] &= n;
1382  return *this;
1383  }
1384  __hostdev__ Coord& operator<<=(uint32_t n)
1385  {
1386  mVec[0] <<= n;
1387  mVec[1] <<= n;
1388  mVec[2] <<= n;
1389  return *this;
1390  }
1391  __hostdev__ Coord& operator>>=(uint32_t n)
1392  {
1393  mVec[0] >>= n;
1394  mVec[1] >>= n;
1395  mVec[2] >>= n;
1396  return *this;
1397  }
1398  __hostdev__ Coord& operator+=(int32_t n)
1399  {
1400  mVec[0] += n;
1401  mVec[1] += n;
1402  mVec[2] += n;
1403  return *this;
1404  }
1405  __hostdev__ Coord operator+(const Coord& rhs) const { return Coord(mVec[0] + rhs[0], mVec[1] + rhs[1], mVec[2] + rhs[2]); }
1406  __hostdev__ Coord operator-(const Coord& rhs) const { return Coord(mVec[0] - rhs[0], mVec[1] - rhs[1], mVec[2] - rhs[2]); }
1407  __hostdev__ Coord operator-() const { return Coord(-mVec[0], -mVec[1], -mVec[2]); }
1408  __hostdev__ Coord& operator+=(const Coord& rhs)
1409  {
1410  mVec[0] += rhs[0];
1411  mVec[1] += rhs[1];
1412  mVec[2] += rhs[2];
1413  return *this;
1414  }
1415  __hostdev__ Coord& operator-=(const Coord& rhs)
1416  {
1417  mVec[0] -= rhs[0];
1418  mVec[1] -= rhs[1];
1419  mVec[2] -= rhs[2];
1420  return *this;
1421  }
1422 
1423  /// @brief Perform a component-wise minimum with the other Coord.
1424  __hostdev__ Coord& minComponent(const Coord& other)
1425  {
1426  if (other[0] < mVec[0])
1427  mVec[0] = other[0];
1428  if (other[1] < mVec[1])
1429  mVec[1] = other[1];
1430  if (other[2] < mVec[2])
1431  mVec[2] = other[2];
1432  return *this;
1433  }
1434 
1435  /// @brief Perform a component-wise maximum with the other Coord.
1436  __hostdev__ Coord& maxComponent(const Coord& other)
1437  {
1438  if (other[0] > mVec[0])
1439  mVec[0] = other[0];
1440  if (other[1] > mVec[1])
1441  mVec[1] = other[1];
1442  if (other[2] > mVec[2])
1443  mVec[2] = other[2];
1444  return *this;
1445  }
1446 #if defined(__CUDACC__) // the following functions only run on the GPU!
1447  __device__ inline Coord& minComponentAtomic(const Coord& other)
1448  {
1449  atomicMin(&mVec[0], other[0]);
1450  atomicMin(&mVec[1], other[1]);
1451  atomicMin(&mVec[2], other[2]);
1452  return *this;
1453  }
1454  __device__ inline Coord& maxComponentAtomic(const Coord& other)
1455  {
1456  atomicMax(&mVec[0], other[0]);
1457  atomicMax(&mVec[1], other[1]);
1458  atomicMax(&mVec[2], other[2]);
1459  return *this;
1460  }
1461 #endif
1462 
1463  __hostdev__ Coord offsetBy(ValueType dx, ValueType dy, ValueType dz) const
1464  {
1465  return Coord(mVec[0] + dx, mVec[1] + dy, mVec[2] + dz);
1466  }
1467 
1468  __hostdev__ Coord offsetBy(ValueType n) const { return this->offsetBy(n, n, n); }
1469 
1470  /// Return true if any of the components of @a a are smaller than the
1471  /// corresponding components of @a b.
1472  __hostdev__ static inline bool lessThan(const Coord& a, const Coord& b)
1473  {
1474  return (a[0] < b[0] || a[1] < b[1] || a[2] < b[2]);
1475  }
1476 
1477  /// @brief Return the largest integer coordinates that are not greater
1478  /// than @a xyz (node centered conversion).
1479  template<typename Vec3T>
1480  __hostdev__ static Coord Floor(const Vec3T& xyz) { return Coord(nanovdb::Floor(xyz[0]), nanovdb::Floor(xyz[1]), nanovdb::Floor(xyz[2])); }
1481 
1482  /// @brief Return a hash key derived from the existing coordinates.
1483  /// @details The hash function is originally taken from the SIGGRAPH paper:
1484  /// "VDB: High-resolution sparse volumes with dynamic topology"
1485  /// and the prime numbers are modified based on the ACM Transactions on Graphics paper:
1486  /// "Real-time 3D reconstruction at scale using voxel hashing" (the second number had a typo!)
1487  template<int Log2N = 3 + 4 + 5>
1488  __hostdev__ uint32_t hash() const { return ((1 << Log2N) - 1) & (mVec[0] * 73856093 ^ mVec[1] * 19349669 ^ mVec[2] * 83492791); }
1489 
1490  /// @brief Return the octant of this Coord
1491  //__hostdev__ size_t octant() const { return (uint32_t(mVec[0])>>31) | ((uint32_t(mVec[1])>>31)<<1) | ((uint32_t(mVec[2])>>31)<<2); }
1492  __hostdev__ uint8_t octant() const { return (uint8_t(bool(mVec[0] & (1u << 31)))) |
1493  (uint8_t(bool(mVec[1] & (1u << 31))) << 1) |
1494  (uint8_t(bool(mVec[2] & (1u << 31))) << 2); }
1495 
1496  /// @brief Return a single precision floating-point vector of this coordinate
1497  __hostdev__ inline Vec3<float> asVec3s() const;
1498 
1499  /// @brief Return a double precision floating-point vector of this coordinate
1500  __hostdev__ inline Vec3<double> asVec3d() const;
1501 
1502  // returns a copy of itself, so it mimics the behaviour of Vec3<T>::round()
1503  __hostdev__ inline Coord round() const { return *this; }
1504 }; // Coord class
1505 
1506 // ----------------------------> Vec3 <--------------------------------------
1507 
1508 /// @brief A simple vector class with three components, similar to openvdb::math::Vec3
1509 template<typename T>
1510 class Vec3
1511 {
1512  T mVec[3];
1513 
1514 public:
1515  static const int SIZE = 3;
1516  static const int size = 3; // in openvdb::math::Tuple
1517  using ValueType = T;
1518  Vec3() = default;
1519  __hostdev__ explicit Vec3(T x)
1520  : mVec{x, x, x}
1521  {
1522  }
1523  __hostdev__ Vec3(T x, T y, T z)
1524  : mVec{x, y, z}
1525  {
1526  }
1527  template<template<class> class Vec3T, class T2>
1528  __hostdev__ Vec3(const Vec3T<T2>& v)
1529  : mVec{T(v[0]), T(v[1]), T(v[2])}
1530  {
1531  static_assert(Vec3T<T2>::size == size, "expected Vec3T::size==3!");
1532  }
1533  template<typename T2>
1534  __hostdev__ explicit Vec3(const Vec3<T2>& v)
1535  : mVec{T(v[0]), T(v[1]), T(v[2])}
1536  {
1537  }
1538  __hostdev__ explicit Vec3(const Coord& ijk)
1539  : mVec{T(ijk[0]), T(ijk[1]), T(ijk[2])}
1540  {
1541  }
1542  __hostdev__ bool operator==(const Vec3& rhs) const { return mVec[0] == rhs[0] && mVec[1] == rhs[1] && mVec[2] == rhs[2]; }
1543  __hostdev__ bool operator!=(const Vec3& rhs) const { return mVec[0] != rhs[0] || mVec[1] != rhs[1] || mVec[2] != rhs[2]; }
1544  template<template<class> class Vec3T, class T2>
1545  __hostdev__ Vec3& operator=(const Vec3T<T2>& rhs)
1546  {
1547  static_assert(Vec3T<T2>::size == size, "expected Vec3T::size==3!");
1548  mVec[0] = rhs[0];
1549  mVec[1] = rhs[1];
1550  mVec[2] = rhs[2];
1551  return *this;
1552  }
1553  __hostdev__ const T& operator[](int i) const { return mVec[i]; }
1554  __hostdev__ T& operator[](int i) { return mVec[i]; }
1555  template<typename Vec3T>
1556  __hostdev__ T dot(const Vec3T& v) const { return mVec[0] * v[0] + mVec[1] * v[1] + mVec[2] * v[2]; }
1557  template<typename Vec3T>
1558  __hostdev__ Vec3 cross(const Vec3T& v) const
1559  {
1560  return Vec3(mVec[1] * v[2] - mVec[2] * v[1],
1561  mVec[2] * v[0] - mVec[0] * v[2],
1562  mVec[0] * v[1] - mVec[1] * v[0]);
1563  }
1564  __hostdev__ T lengthSqr() const
1565  {
1566  return mVec[0] * mVec[0] + mVec[1] * mVec[1] + mVec[2] * mVec[2]; // 5 flops
1567  }
1568  __hostdev__ T length() const { return Sqrt(this->lengthSqr()); }
1569  __hostdev__ Vec3 operator-() const { return Vec3(-mVec[0], -mVec[1], -mVec[2]); }
1570  __hostdev__ Vec3 operator*(const Vec3& v) const { return Vec3(mVec[0] * v[0], mVec[1] * v[1], mVec[2] * v[2]); }
1571  __hostdev__ Vec3 operator/(const Vec3& v) const { return Vec3(mVec[0] / v[0], mVec[1] / v[1], mVec[2] / v[2]); }
1572  __hostdev__ Vec3 operator+(const Vec3& v) const { return Vec3(mVec[0] + v[0], mVec[1] + v[1], mVec[2] + v[2]); }
1573  __hostdev__ Vec3 operator-(const Vec3& v) const { return Vec3(mVec[0] - v[0], mVec[1] - v[1], mVec[2] - v[2]); }
1574  __hostdev__ Vec3 operator+(const Coord& ijk) const { return Vec3(mVec[0] + ijk[0], mVec[1] + ijk[1], mVec[2] + ijk[2]); }
1575  __hostdev__ Vec3 operator-(const Coord& ijk) const { return Vec3(mVec[0] - ijk[0], mVec[1] - ijk[1], mVec[2] - ijk[2]); }
1576  __hostdev__ Vec3 operator*(const T& s) const { return Vec3(s * mVec[0], s * mVec[1], s * mVec[2]); }
1577  __hostdev__ Vec3 operator/(const T& s) const { return (T(1) / s) * (*this); }
1578  __hostdev__ Vec3& operator+=(const Vec3& v)
1579  {
1580  mVec[0] += v[0];
1581  mVec[1] += v[1];
1582  mVec[2] += v[2];
1583  return *this;
1584  }
1585  __hostdev__ Vec3& operator+=(const Coord& ijk)
1586  {
1587  mVec[0] += T(ijk[0]);
1588  mVec[1] += T(ijk[1]);
1589  mVec[2] += T(ijk[2]);
1590  return *this;
1591  }
1592  __hostdev__ Vec3& operator-=(const Vec3& v)
1593  {
1594  mVec[0] -= v[0];
1595  mVec[1] -= v[1];
1596  mVec[2] -= v[2];
1597  return *this;
1598  }
1599  __hostdev__ Vec3& operator-=(const Coord& ijk)
1600  {
1601  mVec[0] -= T(ijk[0]);
1602  mVec[1] -= T(ijk[1]);
1603  mVec[2] -= T(ijk[2]);
1604  return *this;
1605  }
1606  __hostdev__ Vec3& operator*=(const T& s)
1607  {
1608  mVec[0] *= s;
1609  mVec[1] *= s;
1610  mVec[2] *= s;
1611  return *this;
1612  }
1613  __hostdev__ Vec3& operator/=(const T& s) { return (*this) *= T(1) / s; }
1614  __hostdev__ Vec3& normalize() { return (*this) /= this->length(); }
1615  /// @brief Perform a component-wise minimum with the other Vec3.
1616  __hostdev__ Vec3& minComponent(const Vec3& other)
1617  {
1618  if (other[0] < mVec[0])
1619  mVec[0] = other[0];
1620  if (other[1] < mVec[1])
1621  mVec[1] = other[1];
1622  if (other[2] < mVec[2])
1623  mVec[2] = other[2];
1624  return *this;
1625  }
1626 
1627  /// @brief Perform a component-wise maximum with the other Vec3.
1628  __hostdev__ Vec3& maxComponent(const Vec3& other)
1629  {
1630  if (other[0] > mVec[0])
1631  mVec[0] = other[0];
1632  if (other[1] > mVec[1])
1633  mVec[1] = other[1];
1634  if (other[2] > mVec[2])
1635  mVec[2] = other[2];
1636  return *this;
1637  }
1638  /// @brief Return the smallest vector component
1639  __hostdev__ T min() const
1640  {
1641  return mVec[0] < mVec[1] ? (mVec[0] < mVec[2] ? mVec[0] : mVec[2]) : (mVec[1] < mVec[2] ? mVec[1] : mVec[2]);
1642  }
1643  /// @brief Return the largest vector component
1644  __hostdev__ T max() const
1645  {
1646  return mVec[0] > mVec[1] ? (mVec[0] > mVec[2] ? mVec[0] : mVec[2]) : (mVec[1] > mVec[2] ? mVec[1] : mVec[2]);
1647  }
1648  /// @brief Round each component of this Vec<T> down to the nearest integer
1649  /// @return Return an integer Coord
1650  __hostdev__ Coord floor() const { return Coord(Floor(mVec[0]), Floor(mVec[1]), Floor(mVec[2])); }
1651  /// @brief Round each component of this Vec<T> up to the nearest integer
1652  /// @return Return an integer Coord
1653  __hostdev__ Coord ceil() const { return Coord(Ceil(mVec[0]), Ceil(mVec[1]), Ceil(mVec[2])); }
1654  /// @brief Round each component of this Vec<T> to its closest integer value
1655  /// @return Return an integer Coord
1656  __hostdev__ Coord round() const
1657  {
1658  if constexpr(is_same<T, float>::value) {
1659  return Coord(Floor(mVec[0] + 0.5f), Floor(mVec[1] + 0.5f), Floor(mVec[2] + 0.5f));
1660  } else if constexpr(is_same<T, int>::value) {
1661  return Coord(mVec[0], mVec[1], mVec[2]);
1662  } else {
1663  return Coord(Floor(mVec[0] + 0.5), Floor(mVec[1] + 0.5), Floor(mVec[2] + 0.5));
1664  }
1665  }
1666 
1667  /// @brief return a non-const raw pointer to the array of three vector components
1668  __hostdev__ T* asPointer() { return mVec; }
1669  /// @brief return a const raw pointer to the array of three vector components
1670  __hostdev__ const T* asPointer() const { return mVec; }
1671 }; // Vec3<T>
1672 
1673 template<typename T1, typename T2>
1674 __hostdev__ inline Vec3<T2> operator*(T1 scalar, const Vec3<T2>& vec)
1675 {
1676  return Vec3<T2>(scalar * vec[0], scalar * vec[1], scalar * vec[2]);
1677 }
1678 template<typename T1, typename T2>
1679 __hostdev__ inline Vec3<T2> operator/(T1 scalar, const Vec3<T2>& vec)
1680 {
1681  return Vec3<T2>(scalar / vec[0], scalar / vec[1], scalar / vec[2]);
1682 }
1683 
1684 //using Vec3R = Vec3<double>;// deprecated
1685 using Vec3d = Vec3<double>;
1686 using Vec3f = Vec3<float>;
1687 using Vec3i = Vec3<int32_t>;
1688 using Vec3u = Vec3<uint32_t>;
1689 using Vec3u8 = Vec3<uint8_t>;
1690 using Vec3u16 = Vec3<uint16_t>;
1691 
1692 /// @brief Return a single precision floating-point vector of this coordinate
1693 __hostdev__ inline Vec3f Coord::asVec3s() const
1694 {
1695  return Vec3f(float(mVec[0]), float(mVec[1]), float(mVec[2]));
1696 }
1697 
1698 /// @brief Return a double precision floating-point vector of this coordinate
1699 __hostdev__ inline Vec3d Coord::asVec3d() const
1700 {
1701  return Vec3d(double(mVec[0]), double(mVec[1]), double(mVec[2]));
1702 }
1703 
1704 // ----------------------------> Vec4 <--------------------------------------
1705 
1706 /// @brief A simple vector class with four components, similar to openvdb::math::Vec4
1707 template<typename T>
1708 class Vec4
1709 {
1710  T mVec[4];
1711 
1712 public:
1713  static const int SIZE = 4;
1714  static const int size = 4;
1715  using ValueType = T;
1716  Vec4() = default;
1717  __hostdev__ explicit Vec4(T x)
1718  : mVec{x, x, x, x}
1719  {
1720  }
1721  __hostdev__ Vec4(T x, T y, T z, T w)
1722  : mVec{x, y, z, w}
1723  {
1724  }
1725  template<typename T2>
1726  __hostdev__ explicit Vec4(const Vec4<T2>& v)
1727  : mVec{T(v[0]), T(v[1]), T(v[2]), T(v[3])}
1728  {
1729  }
1730  template<template<class> class Vec4T, class T2>
1731  __hostdev__ Vec4(const Vec4T<T2>& v)
1732  : mVec{T(v[0]), T(v[1]), T(v[2]), T(v[3])}
1733  {
1734  static_assert(Vec4T<T2>::size == size, "expected Vec4T::size==4!");
1735  }
1736  __hostdev__ bool operator==(const Vec4& rhs) const { return mVec[0] == rhs[0] && mVec[1] == rhs[1] && mVec[2] == rhs[2] && mVec[3] == rhs[3]; }
1737  __hostdev__ bool operator!=(const Vec4& rhs) const { return mVec[0] != rhs[0] || mVec[1] != rhs[1] || mVec[2] != rhs[2] || mVec[3] != rhs[3]; }
1738  template<template<class> class Vec4T, class T2>
1739  __hostdev__ Vec4& operator=(const Vec4T<T2>& rhs)
1740  {
1741  static_assert(Vec4T<T2>::size == size, "expected Vec4T::size==4!");
1742  mVec[0] = rhs[0];
1743  mVec[1] = rhs[1];
1744  mVec[2] = rhs[2];
1745  mVec[3] = rhs[3];
1746  return *this;
1747  }
1748 
1749  __hostdev__ const T& operator[](int i) const { return mVec[i]; }
1750  __hostdev__ T& operator[](int i) { return mVec[i]; }
1751  template<typename Vec4T>
1752  __hostdev__ T dot(const Vec4T& v) const { return mVec[0] * v[0] + mVec[1] * v[1] + mVec[2] * v[2] + mVec[3] * v[3]; }
1753  __hostdev__ T lengthSqr() const
1754  {
1755  return mVec[0] * mVec[0] + mVec[1] * mVec[1] + mVec[2] * mVec[2] + mVec[3] * mVec[3]; // 7 flops
1756  }
1757  __hostdev__ T length() const { return Sqrt(this->lengthSqr()); }
1758  __hostdev__ Vec4 operator-() const { return Vec4(-mVec[0], -mVec[1], -mVec[2], -mVec[3]); }
1759  __hostdev__ Vec4 operator*(const Vec4& v) const { return Vec4(mVec[0] * v[0], mVec[1] * v[1], mVec[2] * v[2], mVec[3] * v[3]); }
1760  __hostdev__ Vec4 operator/(const Vec4& v) const { return Vec4(mVec[0] / v[0], mVec[1] / v[1], mVec[2] / v[2], mVec[3] / v[3]); }
1761  __hostdev__ Vec4 operator+(const Vec4& v) const { return Vec4(mVec[0] + v[0], mVec[1] + v[1], mVec[2] + v[2], mVec[3] + v[3]); }
1762  __hostdev__ Vec4 operator-(const Vec4& v) const { return Vec4(mVec[0] - v[0], mVec[1] - v[1], mVec[2] - v[2], mVec[3] - v[3]); }
1763  __hostdev__ Vec4 operator*(const T& s) const { return Vec4(s * mVec[0], s * mVec[1], s * mVec[2], s * mVec[3]); }
1764  __hostdev__ Vec4 operator/(const T& s) const { return (T(1) / s) * (*this); }
1765  __hostdev__ Vec4& operator+=(const Vec4& v)
1766  {
1767  mVec[0] += v[0];
1768  mVec[1] += v[1];
1769  mVec[2] += v[2];
1770  mVec[3] += v[3];
1771  return *this;
1772  }
1773  __hostdev__ Vec4& operator-=(const Vec4& v)
1774  {
1775  mVec[0] -= v[0];
1776  mVec[1] -= v[1];
1777  mVec[2] -= v[2];
1778  mVec[3] -= v[3];
1779  return *this;
1780  }
1781  __hostdev__ Vec4& operator*=(const T& s)
1782  {
1783  mVec[0] *= s;
1784  mVec[1] *= s;
1785  mVec[2] *= s;
1786  mVec[3] *= s;
1787  return *this;
1788  }
1789  __hostdev__ Vec4& operator/=(const T& s) { return (*this) *= T(1) / s; }
1790  __hostdev__ Vec4& normalize() { return (*this) /= this->length(); }
1791  /// @brief Perform a component-wise minimum with the other vector.
1792  __hostdev__ Vec4& minComponent(const Vec4& other)
1793  {
1794  if (other[0] < mVec[0])
1795  mVec[0] = other[0];
1796  if (other[1] < mVec[1])
1797  mVec[1] = other[1];
1798  if (other[2] < mVec[2])
1799  mVec[2] = other[2];
1800  if (other[3] < mVec[3])
1801  mVec[3] = other[3];
1802  return *this;
1803  }
1804 
1805  /// @brief Perform a component-wise maximum with the other vector.
1806  __hostdev__ Vec4& maxComponent(const Vec4& other)
1807  {
1808  if (other[0] > mVec[0])
1809  mVec[0] = other[0];
1810  if (other[1] > mVec[1])
1811  mVec[1] = other[1];
1812  if (other[2] > mVec[2])
1813  mVec[2] = other[2];
1814  if (other[3] > mVec[3])
1815  mVec[3] = other[3];
1816  return *this;
1817  }
1818 }; // Vec4<T>
1819 
1820 template<typename T1, typename T2>
1821 __hostdev__ inline Vec4<T2> operator*(T1 scalar, const Vec4<T2>& vec)
1822 {
1823  return Vec4<T2>(scalar * vec[0], scalar * vec[1], scalar * vec[2], scalar * vec[3]);
1824 }
1825 template<typename T1, typename T2>
1826 __hostdev__ inline Vec4<T2> operator/(T1 scalar, const Vec4<T2>& vec)
1827 {
1828  return Vec4<T2>(scalar / vec[0], scalar / vec[1], scalar / vec[2], scalar / vec[3]);
1829 }
1830 
1831 using Vec4R = Vec4<double>;
1832 using Vec4d = Vec4<double>;
1833 using Vec4f = Vec4<float>;
1834 using Vec4i = Vec4<int>;
1835 
1836 
1837 // --------------------------> Rgba8 <------------------------------------
1838 
1839 /// @brief 8-bit red, green, blue, alpha packed into 32 bit unsigned int
1840 class Rgba8
1841 {
1842  union
1843  {
1844  uint8_t c[4]; // 4 integer color channels of red, green, blue and alpha components.
1845  uint32_t packed; // 32 bit packed representation
1846  } mData;
1847 
1848 public:
1849  static const int SIZE = 4;
1850  using ValueType = uint8_t;
1851 
1852  /// @brief Default copy constructor
1853  Rgba8(const Rgba8&) = default;
1854 
1855  /// @brief Default move constructor
1856  Rgba8(Rgba8&&) = default;
1857 
1858  /// @brief Default move assignment operator
1859  /// @return non-const reference to this instance
1860  Rgba8& operator=(Rgba8&&) = default;
1861 
1862  /// @brief Default copy assignment operator
1863  /// @return non-const reference to this instance
1864  Rgba8& operator=(const Rgba8&) = default;
1865 
1866  /// @brief Default ctor initializes all channels to zero
1867  __hostdev__ Rgba8()
1868  : mData{{0, 0, 0, 0}}
1869  {
1870  static_assert(sizeof(uint32_t) == sizeof(Rgba8), "Unexpected sizeof");
1871  }
1872 
1873  /// @brief integer r,g,b,a ctor where alpha channel defaults to opaque
1874  /// @note all values should be in the range 0u to 255u
1875  __hostdev__ Rgba8(uint8_t r, uint8_t g, uint8_t b, uint8_t a = 255u)
1876  : mData{{r, g, b, a}}
1877  {
1878  }
1879 
1880  /// @brief ctor where all channels are initialized to the same value
1881  /// @note value should be in the range 0u to 255u
1882  explicit __hostdev__ Rgba8(uint8_t v)
1883  : mData{{v, v, v, v}}
1884  {
1885  }
1886 
1887  /// @brief floating-point r,g,b,a ctor where alpha channel defaults to opaque
1888  /// @note all values should be in the range 0.0f to 1.0f
1889  __hostdev__ Rgba8(float r, float g, float b, float a = 1.0f)
1890  : mData{{static_cast<uint8_t>(0.5f + r * 255.0f), // round floats to nearest integers
1891  static_cast<uint8_t>(0.5f + g * 255.0f), // double {{}} is needed due to union
1892  static_cast<uint8_t>(0.5f + b * 255.0f),
1893  static_cast<uint8_t>(0.5f + a * 255.0f)}}
1894  {
1895  }
1896 
1897  /// @brief Vec3f r,g,b ctor (alpha channel is set to 1)
1898  /// @note all values should be in the range 0.0f to 1.0f
1899  __hostdev__ Rgba8(const Vec3f& rgb)
1900  : Rgba8(rgb[0], rgb[1], rgb[2])
1901  {
1902  }
1903 
1904  /// @brief Vec4f r,g,b,a ctor
1905  /// @note all values should be in the range 0.0f to 1.0f
1906  __hostdev__ Rgba8(const Vec4f& rgba)
1907  : Rgba8(rgba[0], rgba[1], rgba[2], rgba[3])
1908  {
1909  }
1910 
1911  __hostdev__ bool operator< (const Rgba8& rhs) const { return mData.packed < rhs.mData.packed; }
1912  __hostdev__ bool operator==(const Rgba8& rhs) const { return mData.packed == rhs.mData.packed; }
1913  __hostdev__ float lengthSqr() const
1914  {
1915  return 0.0000153787005f * (float(mData.c[0]) * mData.c[0] +
1916  float(mData.c[1]) * mData.c[1] +
1917  float(mData.c[2]) * mData.c[2]); //1/255^2
1918  }
1919  __hostdev__ float length() const { return sqrtf(this->lengthSqr()); }
1920  /// @brief return n'th color channel as a float in the range 0 to 1
1921  __hostdev__ float asFloat(int n) const { return 0.003921569f*float(mData.c[n]); }// divide by 255
1922  __hostdev__ const uint8_t& operator[](int n) const { return mData.c[n]; }
1923  __hostdev__ uint8_t& operator[](int n) { return mData.c[n]; }
1924  __hostdev__ const uint32_t& packed() const { return mData.packed; }
1925  __hostdev__ uint32_t& packed() { return mData.packed; }
1926  __hostdev__ const uint8_t& r() const { return mData.c[0]; }
1927  __hostdev__ const uint8_t& g() const { return mData.c[1]; }
1928  __hostdev__ const uint8_t& b() const { return mData.c[2]; }
1929  __hostdev__ const uint8_t& a() const { return mData.c[3]; }
1930  __hostdev__ uint8_t& r() { return mData.c[0]; }
1931  __hostdev__ uint8_t& g() { return mData.c[1]; }
1932  __hostdev__ uint8_t& b() { return mData.c[2]; }
1933  __hostdev__ uint8_t& a() { return mData.c[3]; }
1934  __hostdev__ operator Vec3f() const {
1935  return Vec3f(this->asFloat(0), this->asFloat(1), this->asFloat(2));
1936  }
1937  __hostdev__ operator Vec4f() const {
1938  return Vec4f(this->asFloat(0), this->asFloat(1), this->asFloat(2), this->asFloat(3));
1939  }
1940 }; // Rgba8
1941 
1942 using PackedRGBA8 = Rgba8; // for backwards compatibility
1943 
1944 // ----------------------------> TensorTraits <--------------------------------------
1945 
1946 template<typename T, int Rank = (is_specialization<T, Vec3>::value || is_specialization<T, Vec4>::value || is_same<T, Rgba8>::value) ? 1 : 0>
1947 struct TensorTraits;
1948 
1949 template<typename T>
1950 struct TensorTraits<T, 0>
1951 {
1952  static const int Rank = 0; // i.e. scalar
1953  static const bool IsScalar = true;
1954  static const bool IsVector = false;
1955  static const int Size = 1;
1956  using ElementType = T;
1957  static T scalar(const T& s) { return s; }
1958 };
1959 
1960 template<typename T>
1961 struct TensorTraits<T, 1>
1962 {
1963  static const int Rank = 1; // i.e. vector
1964  static const bool IsScalar = false;
1965  static const bool IsVector = true;
1966  static const int Size = T::SIZE;
1967  using ElementType = typename T::ValueType;
1968  static ElementType scalar(const T& v) { return v.length(); }
1969 };
1970 
1971 // ----------------------------> FloatTraits <--------------------------------------
1972 
1973 template<typename T, int = sizeof(typename TensorTraits<T>::ElementType)>
1974 struct FloatTraits
1975 {
1976  using FloatType = float;
1977 };
1978 
1979 template<typename T>
1980 struct FloatTraits<T, 8>
1981 {
1982  using FloatType = double;
1983 };
1984 
1985 template<>
1986 struct FloatTraits<bool, 1>
1987 {
1988  using FloatType = bool;
1989 };
1990 
1991 template<>
1992 struct FloatTraits<ValueIndex, 1> // size of empty class in C++ is 1 byte and not 0 byte
1993 {
1994  using FloatType = uint64_t;
1995 };
1996 
1997 template<>
1998 struct FloatTraits<ValueIndexMask, 1> // size of empty class in C++ is 1 byte and not 0 byte
1999 {
2000  using FloatType = uint64_t;
2001 };
2002 
2003 template<>
2004 struct FloatTraits<ValueOnIndex, 1> // size of empty class in C++ is 1 byte and not 0 byte
2005 {
2006  using FloatType = uint64_t;
2007 };
2008 
2009 template<>
2010 struct FloatTraits<ValueOnIndexMask, 1> // size of empty class in C++ is 1 byte and not 0 byte
2011 {
2012  using FloatType = uint64_t;
2013 };
2014 
2015 template<>
2016 struct FloatTraits<ValueMask, 1> // size of empty class in C++ is 1 byte and not 0 byte
2017 {
2018  using FloatType = bool;
2019 };
2020 
2021 template<>
2022 struct FloatTraits<Point, 1> // size of empty class in C++ is 1 byte and not 0 byte
2023 {
2024  using FloatType = double;
2025 };
2026 
2027 // ----------------------------> mapping BuildType -> GridType <--------------------------------------
2028 
2029 /// @brief Maps from a templated build type to a GridType enum
2030 template<typename BuildT>
2031 __hostdev__ inline GridType mapToGridType()
2032 {
2033  if constexpr(is_same<BuildT, float>::value) { // resolved at compile-time
2034  return GridType::Float;
2035  } else if constexpr(is_same<BuildT, double>::value) {
2036  return GridType::Double;
2037  } else if constexpr(is_same<BuildT, int16_t>::value) {
2038  return GridType::Int16;
2039  } else if constexpr(is_same<BuildT, int32_t>::value) {
2040  return GridType::Int32;
2041  } else if constexpr(is_same<BuildT, int64_t>::value) {
2042  return GridType::Int64;
2043  } else if constexpr(is_same<BuildT, Vec3f>::value) {
2044  return GridType::Vec3f;
2045  } else if constexpr(is_same<BuildT, Vec3d>::value) {
2046  return GridType::Vec3d;
2047  } else if constexpr(is_same<BuildT, uint32_t>::value) {
2048  return GridType::UInt32;
2049  } else if constexpr(is_same<BuildT, ValueMask>::value) {
2050  return GridType::Mask;
2051  } else if constexpr(is_same<BuildT, Half>::value) {
2052  return GridType::Half;
2053  } else if constexpr(is_same<BuildT, ValueIndex>::value) {
2054  return GridType::Index;
2055  } else if constexpr(is_same<BuildT, ValueOnIndex>::value) {
2056  return GridType::OnIndex;
2057  } else if constexpr(is_same<BuildT, ValueIndexMask>::value) {
2058  return GridType::IndexMask;
2059  } else if constexpr(is_same<BuildT, ValueOnIndexMask>::value) {
2060  return GridType::OnIndexMask;
2061  } else if constexpr(is_same<BuildT, bool>::value) {
2062  return GridType::Boolean;
2063  } else if constexpr(is_same<BuildT, Rgba8>::value) {
2064  return GridType::RGBA8;
2065  } else if constexpr(is_same<BuildT, Fp4>::value) {
2066  return GridType::Fp4;
2067  } else if constexpr(is_same<BuildT, Fp8>::value) {
2068  return GridType::Fp8;
2069  } else if constexpr(is_same<BuildT, Fp16>::value) {
2070  return GridType::Fp16;
2071  } else if constexpr(is_same<BuildT, FpN>::value) {
2072  return GridType::FpN;
2073  } else if constexpr(is_same<BuildT, Vec4f>::value) {
2074  return GridType::Vec4f;
2075  } else if constexpr(is_same<BuildT, Vec4d>::value) {
2076  return GridType::Vec4d;
2077  } else if constexpr(is_same<BuildT, Point>::value) {
2078  return GridType::PointIndex;
2079  } else if constexpr(is_same<BuildT, Vec3u8>::value) {
2080  return GridType::Vec3u8;
2081  } else if constexpr(is_same<BuildT, Vec3u16>::value) {
2082  return GridType::Vec3u16;
2083  }
2084  return GridType::Unknown;
2085 }
2086 
2087 // ----------------------------> mapping BuildType -> GridClass <--------------------------------------
2088 
2089 /// @brief Maps from a templated build type to a GridClass enum
2090 template<typename BuildT>
2091 __hostdev__ inline GridClass mapToGridClass(GridClass defaultClass = GridClass::Unknown)
2092 {
2093  if (is_same<BuildT, ValueMask>::value) {
2094  return GridClass::Topology;
2095  } else if (BuildTraits<BuildT>::is_index) {
2096  return GridClass::IndexGrid;
2097  } else if (is_same<BuildT, Rgba8>::value) {
2098  return GridClass::VoxelVolume;
2099  } else if (is_same<BuildT, Point>::value) {
2100  return GridClass::PointIndex;
2101  }
2102  return defaultClass;
2103 }
2104 
2105 // ----------------------------> matMult <--------------------------------------
2106 
2107 /// @brief Multiply a 3x3 matrix and a 3d vector using 32bit floating point arithmetics
2108 /// @note This corresponds to a linear mapping, e.g. scaling, rotation etc.
2109 /// @tparam Vec3T Template type of the input and output 3d vectors
2110 /// @param mat pointer to an array of floats with the 3x3 matrix
2111 /// @param xyz input vector to be multiplied by the matrix
2112 /// @return result of matrix-vector multiplication, i.e. mat x xyz
2113 template<typename Vec3T>
2114 __hostdev__ inline Vec3T matMult(const float* mat, const Vec3T& xyz)
2115 {
2116  return Vec3T(fmaf(static_cast<float>(xyz[0]), mat[0], fmaf(static_cast<float>(xyz[1]), mat[1], static_cast<float>(xyz[2]) * mat[2])),
2117  fmaf(static_cast<float>(xyz[0]), mat[3], fmaf(static_cast<float>(xyz[1]), mat[4], static_cast<float>(xyz[2]) * mat[5])),
2118  fmaf(static_cast<float>(xyz[0]), mat[6], fmaf(static_cast<float>(xyz[1]), mat[7], static_cast<float>(xyz[2]) * mat[8]))); // 6 fmaf + 3 mult = 9 flops
2119 }
2120 
2121 /// @brief Multiply a 3x3 matrix and a 3d vector using 64bit floating point arithmetics
2122 /// @note This corresponds to a linear mapping, e.g. scaling, rotation etc.
2123 /// @tparam Vec3T Template type of the input and output 3d vectors
2124 /// @param mat pointer to an array of floats with the 3x3 matrix
2125 /// @param xyz input vector to be multiplied by the matrix
2126 /// @return result of matrix-vector multiplication, i.e. mat x xyz
2127 template<typename Vec3T>
2128 __hostdev__ inline Vec3T matMult(const double* mat, const Vec3T& xyz)
2129 {
2130  return Vec3T(fma(static_cast<double>(xyz[0]), mat[0], fma(static_cast<double>(xyz[1]), mat[1], static_cast<double>(xyz[2]) * mat[2])),
2131  fma(static_cast<double>(xyz[0]), mat[3], fma(static_cast<double>(xyz[1]), mat[4], static_cast<double>(xyz[2]) * mat[5])),
2132  fma(static_cast<double>(xyz[0]), mat[6], fma(static_cast<double>(xyz[1]), mat[7], static_cast<double>(xyz[2]) * mat[8]))); // 6 fma + 3 mult = 9 flops
2133 }
2134 
2135 /// @brief Multiply a 3x3 matrix to a 3d vector and add another 3d vector using 32bit floating point arithmetics
2136 /// @note This corresponds to an affine transformation, i.e a linear mapping followed by a translation. e.g. scale/rotation and translation
2137 /// @tparam Vec3T Template type of the input and output 3d vectors
2138 /// @param mat pointer to an array of floats with the 3x3 matrix
2139 /// @param vec 3d vector to be added AFTER the matrix multiplication
2140 /// @param xyz input vector to be multiplied by the matrix and a translated by @c vec
2141 /// @return result of affine transformation, i.e. (mat x xyz) + vec
2142 template<typename Vec3T>
2143 __hostdev__ inline Vec3T matMult(const float* mat, const float* vec, const Vec3T& xyz)
2144 {
2145  return Vec3T(fmaf(static_cast<float>(xyz[0]), mat[0], fmaf(static_cast<float>(xyz[1]), mat[1], fmaf(static_cast<float>(xyz[2]), mat[2], vec[0]))),
2146  fmaf(static_cast<float>(xyz[0]), mat[3], fmaf(static_cast<float>(xyz[1]), mat[4], fmaf(static_cast<float>(xyz[2]), mat[5], vec[1]))),
2147  fmaf(static_cast<float>(xyz[0]), mat[6], fmaf(static_cast<float>(xyz[1]), mat[7], fmaf(static_cast<float>(xyz[2]), mat[8], vec[2])))); // 9 fmaf = 9 flops
2148 }
2149 
2150 /// @brief Multiply a 3x3 matrix to a 3d vector and add another 3d vector using 64bit floating point arithmetics
2151 /// @note This corresponds to an affine transformation, i.e a linear mapping followed by a translation. e.g. scale/rotation and translation
2152 /// @tparam Vec3T Template type of the input and output 3d vectors
2153 /// @param mat pointer to an array of floats with the 3x3 matrix
2154 /// @param vec 3d vector to be added AFTER the matrix multiplication
2155 /// @param xyz input vector to be multiplied by the matrix and a translated by @c vec
2156 /// @return result of affine transformation, i.e. (mat x xyz) + vec
2157 template<typename Vec3T>
2158 __hostdev__ inline Vec3T matMult(const double* mat, const double* vec, const Vec3T& xyz)
2159 {
2160  return Vec3T(fma(static_cast<double>(xyz[0]), mat[0], fma(static_cast<double>(xyz[1]), mat[1], fma(static_cast<double>(xyz[2]), mat[2], vec[0]))),
2161  fma(static_cast<double>(xyz[0]), mat[3], fma(static_cast<double>(xyz[1]), mat[4], fma(static_cast<double>(xyz[2]), mat[5], vec[1]))),
2162  fma(static_cast<double>(xyz[0]), mat[6], fma(static_cast<double>(xyz[1]), mat[7], fma(static_cast<double>(xyz[2]), mat[8], vec[2])))); // 9 fma = 9 flops
2163 }
2164 
2165 /// @brief Multiply the transposed of a 3x3 matrix and a 3d vector using 32bit floating point arithmetics
2166 /// @note This corresponds to an inverse linear mapping, e.g. inverse scaling, inverse rotation etc.
2167 /// @tparam Vec3T Template type of the input and output 3d vectors
2168 /// @param mat pointer to an array of floats with the 3x3 matrix
2169 /// @param xyz input vector to be multiplied by the transposed matrix
2170 /// @return result of matrix-vector multiplication, i.e. mat^T x xyz
2171 template<typename Vec3T>
2172 __hostdev__ inline Vec3T matMultT(const float* mat, const Vec3T& xyz)
2173 {
2174  return Vec3T(fmaf(static_cast<float>(xyz[0]), mat[0], fmaf(static_cast<float>(xyz[1]), mat[3], static_cast<float>(xyz[2]) * mat[6])),
2175  fmaf(static_cast<float>(xyz[0]), mat[1], fmaf(static_cast<float>(xyz[1]), mat[4], static_cast<float>(xyz[2]) * mat[7])),
2176  fmaf(static_cast<float>(xyz[0]), mat[2], fmaf(static_cast<float>(xyz[1]), mat[5], static_cast<float>(xyz[2]) * mat[8]))); // 6 fmaf + 3 mult = 9 flops
2177 }
2178 
2179 /// @brief Multiply the transposed of a 3x3 matrix and a 3d vector using 64bit floating point arithmetics
2180 /// @note This corresponds to an inverse linear mapping, e.g. inverse scaling, inverse rotation etc.
2181 /// @tparam Vec3T Template type of the input and output 3d vectors
2182 /// @param mat pointer to an array of floats with the 3x3 matrix
2183 /// @param xyz input vector to be multiplied by the transposed matrix
2184 /// @return result of matrix-vector multiplication, i.e. mat^T x xyz
2185 template<typename Vec3T>
2186 __hostdev__ inline Vec3T matMultT(const double* mat, const Vec3T& xyz)
2187 {
2188  return Vec3T(fma(static_cast<double>(xyz[0]), mat[0], fma(static_cast<double>(xyz[1]), mat[3], static_cast<double>(xyz[2]) * mat[6])),
2189  fma(static_cast<double>(xyz[0]), mat[1], fma(static_cast<double>(xyz[1]), mat[4], static_cast<double>(xyz[2]) * mat[7])),
2190  fma(static_cast<double>(xyz[0]), mat[2], fma(static_cast<double>(xyz[1]), mat[5], static_cast<double>(xyz[2]) * mat[8]))); // 6 fma + 3 mult = 9 flops
2191 }
2192 
2193 template<typename Vec3T>
2194 __hostdev__ inline Vec3T matMultT(const float* mat, const float* vec, const Vec3T& xyz)
2195 {
2196  return Vec3T(fmaf(static_cast<float>(xyz[0]), mat[0], fmaf(static_cast<float>(xyz[1]), mat[3], fmaf(static_cast<float>(xyz[2]), mat[6], vec[0]))),
2197  fmaf(static_cast<float>(xyz[0]), mat[1], fmaf(static_cast<float>(xyz[1]), mat[4], fmaf(static_cast<float>(xyz[2]), mat[7], vec[1]))),
2198  fmaf(static_cast<float>(xyz[0]), mat[2], fmaf(static_cast<float>(xyz[1]), mat[5], fmaf(static_cast<float>(xyz[2]), mat[8], vec[2])))); // 9 fmaf = 9 flops
2199 }
2200 
2201 template<typename Vec3T>
2202 __hostdev__ inline Vec3T matMultT(const double* mat, const double* vec, const Vec3T& xyz)
2203 {
2204  return Vec3T(fma(static_cast<double>(xyz[0]), mat[0], fma(static_cast<double>(xyz[1]), mat[3], fma(static_cast<double>(xyz[2]), mat[6], vec[0]))),
2205  fma(static_cast<double>(xyz[0]), mat[1], fma(static_cast<double>(xyz[1]), mat[4], fma(static_cast<double>(xyz[2]), mat[7], vec[1]))),
2206  fma(static_cast<double>(xyz[0]), mat[2], fma(static_cast<double>(xyz[1]), mat[5], fma(static_cast<double>(xyz[2]), mat[8], vec[2])))); // 9 fma = 9 flops
2207 }
2208 
2209 // ----------------------------> BBox <-------------------------------------
2210 
2211 // Base-class for static polymorphism (cannot be constructed directly)
2212 template<typename Vec3T>
2213 struct BaseBBox
2214 {
2215  Vec3T mCoord[2];
2216  __hostdev__ bool operator==(const BaseBBox& rhs) const { return mCoord[0] == rhs.mCoord[0] && mCoord[1] == rhs.mCoord[1]; };
2217  __hostdev__ bool operator!=(const BaseBBox& rhs) const { return mCoord[0] != rhs.mCoord[0] || mCoord[1] != rhs.mCoord[1]; };
2218  __hostdev__ const Vec3T& operator[](int i) const { return mCoord[i]; }
2219  __hostdev__ Vec3T& operator[](int i) { return mCoord[i]; }
2220  __hostdev__ Vec3T& min() { return mCoord[0]; }
2221  __hostdev__ Vec3T& max() { return mCoord[1]; }
2222  __hostdev__ const Vec3T& min() const { return mCoord[0]; }
2223  __hostdev__ const Vec3T& max() const { return mCoord[1]; }
2224  __hostdev__ BaseBBox& translate(const Vec3T& xyz)
2225  {
2226  mCoord[0] += xyz;
2227  mCoord[1] += xyz;
2228  return *this;
2229  }
2230  /// @brief Expand this bounding box to enclose point @c xyz.
2231  __hostdev__ BaseBBox& expand(const Vec3T& xyz)
2232  {
2233  mCoord[0].minComponent(xyz);
2234  mCoord[1].maxComponent(xyz);
2235  return *this;
2236  }
2237 
2238  /// @brief Expand this bounding box to enclose the given bounding box.
2239  __hostdev__ BaseBBox& expand(const BaseBBox& bbox)
2240  {
2241  mCoord[0].minComponent(bbox[0]);
2242  mCoord[1].maxComponent(bbox[1]);
2243  return *this;
2244  }
2245 
2246  /// @brief Intersect this bounding box with the given bounding box.
2247  __hostdev__ BaseBBox& intersect(const BaseBBox& bbox)
2248  {
2249  mCoord[0].maxComponent(bbox[0]);
2250  mCoord[1].minComponent(bbox[1]);
2251  return *this;
2252  }
2253 
2254  //__hostdev__ BaseBBox expandBy(typename Vec3T::ValueType padding) const
2255  //{
2256  // return BaseBBox(mCoord[0].offsetBy(-padding),mCoord[1].offsetBy(padding));
2257  //}
2258  __hostdev__ bool isInside(const Vec3T& xyz)
2259  {
2260  if (xyz[0] < mCoord[0][0] || xyz[1] < mCoord[0][1] || xyz[2] < mCoord[0][2])
2261  return false;
2262  if (xyz[0] > mCoord[1][0] || xyz[1] > mCoord[1][1] || xyz[2] > mCoord[1][2])
2263  return false;
2264  return true;
2265  }
2266 
2267 protected:
2268  __hostdev__ BaseBBox() {}
2269  __hostdev__ BaseBBox(const Vec3T& min, const Vec3T& max)
2270  : mCoord{min, max}
2271  {
2272  }
2273 }; // BaseBBox
2274 
2275 template<typename Vec3T, bool = is_floating_point<typename Vec3T::ValueType>::value>
2276 struct BBox;
2277 
2278 /// @brief Partial template specialization for floating point coordinate types.
2279 ///
2280 /// @note Min is inclusive and max is exclusive. If min = max the dimension of
2281 /// the bounding box is zero and therefore it is also empty.
2282 template<typename Vec3T>
2283 struct BBox<Vec3T, true> : public BaseBBox<Vec3T>
2284 {
2285  using Vec3Type = Vec3T;
2286  using ValueType = typename Vec3T::ValueType;
2287  static_assert(is_floating_point<ValueType>::value, "Expected a floating point coordinate type");
2288  using BaseT = BaseBBox<Vec3T>;
2289  using BaseT::mCoord;
2290  /// @brief Default construction sets BBox to an empty bbox
2291  __hostdev__ BBox()
2292  : BaseT(Vec3T( Maximum<typename Vec3T::ValueType>::value()),
2293  Vec3T(-Maximum<typename Vec3T::ValueType>::value()))
2294  {
2295  }
2296  __hostdev__ BBox(const Vec3T& min, const Vec3T& max)
2297  : BaseT(min, max)
2298  {
2299  }
2300  __hostdev__ BBox(const Coord& min, const Coord& max)
2301  : BaseT(Vec3T(ValueType(min[0]), ValueType(min[1]), ValueType(min[2])),
2302  Vec3T(ValueType(max[0] + 1), ValueType(max[1] + 1), ValueType(max[2] + 1)))
2303  {
2304  }
2305  __hostdev__ static BBox createCube(const Coord& min, typename Coord::ValueType dim)
2306  {
2307  return BBox(min, min.offsetBy(dim));
2308  }
2309 
2310  __hostdev__ BBox(const BaseBBox<Coord>& bbox)
2311  : BBox(bbox[0], bbox[1])
2312  {
2313  }
2314  __hostdev__ bool empty() const { return mCoord[0][0] >= mCoord[1][0] ||
2315  mCoord[0][1] >= mCoord[1][1] ||
2316  mCoord[0][2] >= mCoord[1][2]; }
2317  __hostdev__ operator bool() const { return mCoord[0][0] < mCoord[1][0] &&
2318  mCoord[0][1] < mCoord[1][1] &&
2319  mCoord[0][2] < mCoord[1][2]; }
2320  __hostdev__ Vec3T dim() const { return *this ? this->max() - this->min() : Vec3T(0); }
2321  __hostdev__ bool isInside(const Vec3T& p) const
2322  {
2323  return p[0] > mCoord[0][0] && p[1] > mCoord[0][1] && p[2] > mCoord[0][2] &&
2324  p[0] < mCoord[1][0] && p[1] < mCoord[1][1] && p[2] < mCoord[1][2];
2325  }
2326 
2327 }; // BBox<Vec3T, true>
2328 
2329 /// @brief Partial template specialization for integer coordinate types
2330 ///
2331 /// @note Both min and max are INCLUDED in the bbox so dim = max - min + 1. So,
2332 /// if min = max the bounding box contains exactly one point and dim = 1!
2333 template<typename CoordT>
2334 struct BBox<CoordT, false> : public BaseBBox<CoordT>
2335 {
2336  static_assert(is_same<int, typename CoordT::ValueType>::value, "Expected \"int\" coordinate type");
2337  using BaseT = BaseBBox<CoordT>;
2338  using BaseT::mCoord;
2339  /// @brief Iterator over the domain covered by a BBox
2340  /// @details z is the fastest-moving coordinate.
2341  class Iterator
2342  {
2343  const BBox& mBBox;
2344  CoordT mPos;
2345 
2346  public:
2347  __hostdev__ Iterator(const BBox& b)
2348  : mBBox(b)
2349  , mPos(b.min())
2350  {
2351  }
2352  __hostdev__ Iterator(const BBox& b, const CoordT& p)
2353  : mBBox(b)
2354  , mPos(p)
2355  {
2356  }
2357  __hostdev__ Iterator& operator++()
2358  {
2359  if (mPos[2] < mBBox[1][2]) { // this is the most common case
2360  ++mPos[2];// increment z
2361  } else if (mPos[1] < mBBox[1][1]) {
2362  mPos[2] = mBBox[0][2];// reset z
2363  ++mPos[1];// increment y
2364  } else if (mPos[0] <= mBBox[1][0]) {
2365  mPos[2] = mBBox[0][2];// reset z
2366  mPos[1] = mBBox[0][1];// reset y
2367  ++mPos[0];// increment x
2368  }
2369  return *this;
2370  }
2371  __hostdev__ Iterator operator++(int)
2372  {
2373  auto tmp = *this;
2374  ++(*this);
2375  return tmp;
2376  }
2377  __hostdev__ bool operator==(const Iterator& rhs) const
2378  {
2379  NANOVDB_ASSERT(mBBox == rhs.mBBox);
2380  return mPos == rhs.mPos;
2381  }
2382  __hostdev__ bool operator!=(const Iterator& rhs) const
2383  {
2384  NANOVDB_ASSERT(mBBox == rhs.mBBox);
2385  return mPos != rhs.mPos;
2386  }
2387  __hostdev__ bool operator<(const Iterator& rhs) const
2388  {
2389  NANOVDB_ASSERT(mBBox == rhs.mBBox);
2390  return mPos < rhs.mPos;
2391  }
2392  __hostdev__ bool operator<=(const Iterator& rhs) const
2393  {
2394  NANOVDB_ASSERT(mBBox == rhs.mBBox);
2395  return mPos <= rhs.mPos;
2396  }
2397  /// @brief Return @c true if the iterator still points to a valid coordinate.
2398  __hostdev__ operator bool() const { return mPos <= mBBox[1]; }
2399  __hostdev__ const CoordT& operator*() const { return mPos; }
2400  }; // Iterator
2401  __hostdev__ Iterator begin() const { return Iterator{*this}; }
2402  __hostdev__ Iterator end() const { return Iterator{*this, CoordT(mCoord[1][0]+1, mCoord[0][1], mCoord[0][2])}; }
2403  __hostdev__ BBox()
2404  : BaseT(CoordT::max(), CoordT::min())
2405  {
2406  }
2407  __hostdev__ BBox(const CoordT& min, const CoordT& max)
2408  : BaseT(min, max)
2409  {
2410  }
2411 
2412  template<typename SplitT>
2413  __hostdev__ BBox(BBox& other, const SplitT&)
2414  : BaseT(other.mCoord[0], other.mCoord[1])
2415  {
2416  NANOVDB_ASSERT(this->is_divisible());
2417  const int n = MaxIndex(this->dim());
2418  mCoord[1][n] = (mCoord[0][n] + mCoord[1][n]) >> 1;
2419  other.mCoord[0][n] = mCoord[1][n] + 1;
2420  }
2421 
2422  __hostdev__ static BBox createCube(const CoordT& min, typename CoordT::ValueType dim)
2423  {
2424  return BBox(min, min.offsetBy(dim - 1));
2425  }
2426 
2427  __hostdev__ static BBox createCube(typename CoordT::ValueType min, typename CoordT::ValueType max)
2428  {
2429  return BBox(CoordT(min), CoordT(max));
2430  }
2431 
2432  __hostdev__ bool is_divisible() const { return mCoord[0][0] < mCoord[1][0] &&
2433  mCoord[0][1] < mCoord[1][1] &&
2434  mCoord[0][2] < mCoord[1][2]; }
2435  /// @brief Return true if this bounding box is empty, e.g. uninitialized
2436  __hostdev__ bool empty() const { return mCoord[0][0] > mCoord[1][0] ||
2437  mCoord[0][1] > mCoord[1][1] ||
2438  mCoord[0][2] > mCoord[1][2]; }
2439  /// @brief Convert this BBox to boolean true if it is not empty
2440  __hostdev__ operator bool() const { return mCoord[0][0] <= mCoord[1][0] &&
2441  mCoord[0][1] <= mCoord[1][1] &&
2442  mCoord[0][2] <= mCoord[1][2]; }
2443  __hostdev__ CoordT dim() const { return *this ? this->max() - this->min() + Coord(1) : Coord(0); }
2444  __hostdev__ uint64_t volume() const
2445  {
2446  auto d = this->dim();
2447  return uint64_t(d[0]) * uint64_t(d[1]) * uint64_t(d[2]);
2448  }
2449  __hostdev__ bool isInside(const CoordT& p) const { return !(CoordT::lessThan(p, this->min()) || CoordT::lessThan(this->max(), p)); }
2450  /// @brief Return @c true if the given bounding box is inside this bounding box.
2451  __hostdev__ bool isInside(const BBox& b) const
2452  {
2453  return !(CoordT::lessThan(b.min(), this->min()) || CoordT::lessThan(this->max(), b.max()));
2454  }
2455 
2456  /// @brief Return @c true if the given bounding box overlaps with this bounding box.
2457  __hostdev__ bool hasOverlap(const BBox& b) const
2458  {
2459  return !(CoordT::lessThan(this->max(), b.min()) || CoordT::lessThan(b.max(), this->min()));
2460  }
2461 
2462  /// @warning This converts a CoordBBox into a floating-point bounding box which implies that max += 1 !
2463  template<typename RealT = double>
2464  __hostdev__ BBox<Vec3<RealT>> asReal() const
2465  {
2466  static_assert(is_floating_point<RealT>::value, "CoordBBox::asReal: Expected a floating point coordinate");
2467  return BBox<Vec3<RealT>>(Vec3<RealT>(RealT(mCoord[0][0]), RealT(mCoord[0][1]), RealT(mCoord[0][2])),
2468  Vec3<RealT>(RealT(mCoord[1][0] + 1), RealT(mCoord[1][1] + 1), RealT(mCoord[1][2] + 1)));
2469  }
2470  /// @brief Return a new instance that is expanded by the specified padding.
2471  __hostdev__ BBox expandBy(typename CoordT::ValueType padding) const
2472  {
2473  return BBox(mCoord[0].offsetBy(-padding), mCoord[1].offsetBy(padding));
2474  }
2475 
2476  /// @brief transform this coordinate bounding box by the specified map
2477  /// @param map mapping of index to world coordinates
2478  /// @return world bounding box
2479  template<typename Map>
2480  __hostdev__ BBox<Vec3d> transform(const Map& map) const
2481  {
2482  const Vec3d tmp = map.applyMap(Vec3d(mCoord[0][0], mCoord[0][1], mCoord[0][2]));
2483  BBox<Vec3d> bbox(tmp, tmp);
2484  bbox.expand(map.applyMap(Vec3d(mCoord[0][0], mCoord[0][1], mCoord[1][2])));
2485  bbox.expand(map.applyMap(Vec3d(mCoord[0][0], mCoord[1][1], mCoord[0][2])));
2486  bbox.expand(map.applyMap(Vec3d(mCoord[1][0], mCoord[0][1], mCoord[0][2])));
2487  bbox.expand(map.applyMap(Vec3d(mCoord[1][0], mCoord[1][1], mCoord[0][2])));
2488  bbox.expand(map.applyMap(Vec3d(mCoord[1][0], mCoord[0][1], mCoord[1][2])));
2489  bbox.expand(map.applyMap(Vec3d(mCoord[0][0], mCoord[1][1], mCoord[1][2])));
2490  bbox.expand(map.applyMap(Vec3d(mCoord[1][0], mCoord[1][1], mCoord[1][2])));
2491  return bbox;
2492  }
2493 
2494 #if defined(__CUDACC__) // the following functions only run on the GPU!
2495  __device__ inline BBox& expandAtomic(const CoordT& ijk)
2496  {
2497  mCoord[0].minComponentAtomic(ijk);
2498  mCoord[1].maxComponentAtomic(ijk);
2499  return *this;
2500  }
2501  __device__ inline BBox& expandAtomic(const BBox& bbox)
2502  {
2503  mCoord[0].minComponentAtomic(bbox[0]);
2504  mCoord[1].maxComponentAtomic(bbox[1]);
2505  return *this;
2506  }
2507  __device__ inline BBox& intersectAtomic(const BBox& bbox)
2508  {
2509  mCoord[0].maxComponentAtomic(bbox[0]);
2510  mCoord[1].minComponentAtomic(bbox[1]);
2511  return *this;
2512  }
2513 #endif
2514 }; // BBox<CoordT, false>
2515 
2516 using CoordBBox = BBox<Coord>;
2517 using BBox3d = BBox<Vec3d>;
2518 
2519 // -------------------> Find lowest and highest bit in a word <----------------------------
2520 
2521 /// @brief Returns the index of the lowest, i.e. least significant, on bit in the specified 32 bit word
2522 ///
2523 /// @warning Assumes that at least one bit is set in the word, i.e. @a v != uint32_t(0)!
2524 NANOVDB_HOSTDEV_DISABLE_WARNING
2525 __hostdev__ static inline uint32_t FindLowestOn(uint32_t v)
2526 {
2527  NANOVDB_ASSERT(v);
2528 #if (defined(__CUDA_ARCH__) || defined(__HIP__)) && defined(NANOVDB_USE_INTRINSICS)
2529  return __ffs(v) - 1; // one based indexing
2530 #elif defined(_MSC_VER) && defined(NANOVDB_USE_INTRINSICS)
2531  unsigned long index;
2532  _BitScanForward(&index, v);
2533  return static_cast<uint32_t>(index);
2534 #elif (defined(__GNUC__) || defined(__clang__)) && defined(NANOVDB_USE_INTRINSICS)
2535  return static_cast<uint32_t>(__builtin_ctzl(v));
2536 #else
2537  //NANO_WARNING("Using software implementation for FindLowestOn(uint32_t v)")
2538  static const unsigned char DeBruijn[32] = {
2539  0, 1, 28, 2, 29, 14, 24, 3, 30, 22, 20, 15, 25, 17, 4, 8, 31, 27, 13, 23, 21, 19, 16, 7, 26, 12, 18, 6, 11, 5, 10, 9};
2540 // disable unary minus on unsigned warning
2541 #if defined(_MSC_VER) && !defined(__NVCC__)
2542 #pragma warning(push)
2543 #pragma warning(disable : 4146)
2544 #endif
2545  return DeBruijn[uint32_t((v & -v) * 0x077CB531U) >> 27];
2546 #if defined(_MSC_VER) && !defined(__NVCC__)
2547 #pragma warning(pop)
2548 #endif
2549 
2550 #endif
2551 }
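The portable branch above is the classic De Bruijn multiplication: `v & -v` isolates the lowest set bit, and multiplying by the constant makes the top five bits of the product index a 32-entry lookup table. A standalone sketch of the same trick (the name `findLowestOn32` is illustrative, not part of this header):

```cpp
#include <cassert>
#include <cstdint>

// Isolate the lowest set bit via two's complement (~v + 1 == -v), then let a
// De Bruijn multiply map that single-bit value to its bit index.
inline uint32_t findLowestOn32(uint32_t v)
{
    assert(v != 0u); // undefined for zero, just like FindLowestOn
    static const unsigned char DeBruijn[32] = {
        0, 1, 28, 2, 29, 14, 24, 3, 30, 22, 20, 15, 25, 17, 4, 8,
        31, 27, 13, 23, 21, 19, 16, 7, 26, 12, 18, 6, 11, 5, 10, 9};
    return DeBruijn[uint32_t((v & (~v + 1u)) * 0x077CB531U) >> 27];
}
```

Because `v & (~v + 1u)` is always a power of two, the multiply simply selects five consecutive bits of the De Bruijn sequence, which are unique per shift.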
2552 
2553 /// @brief Returns the index of the highest, i.e. most significant, on bit in the specified 32 bit word
2554 ///
2555 /// @warning Assumes that at least one bit is set in the word, i.e. @a v != uint32_t(0)!
2556 NANOVDB_HOSTDEV_DISABLE_WARNING
2557 __hostdev__ static inline uint32_t FindHighestOn(uint32_t v)
2558 {
2559  NANOVDB_ASSERT(v);
2560 #if (defined(__CUDA_ARCH__) || defined(__HIP__)) && defined(NANOVDB_USE_INTRINSICS)
2561  return sizeof(uint32_t) * 8 - 1 - __clz(v); // __clz counts leading zeros, so this yields the index of the highest set bit
2562 #elif defined(_MSC_VER) && defined(NANOVDB_USE_INTRINSICS)
2563  unsigned long index;
2564  _BitScanReverse(&index, v);
2565  return static_cast<uint32_t>(index);
2566 #elif (defined(__GNUC__) || defined(__clang__)) && defined(NANOVDB_USE_INTRINSICS)
2567  return sizeof(unsigned long) * 8 - 1 - __builtin_clzl(v);
2568 #else
2569  //NANO_WARNING("Using software implementation for FindHighestOn(uint32_t)")
2570  static const unsigned char DeBruijn[32] = {
2571  0, 9, 1, 10, 13, 21, 2, 29, 11, 14, 16, 18, 22, 25, 3, 30,
2572  8, 12, 20, 28, 15, 17, 24, 7, 19, 27, 23, 6, 26, 5, 4, 31};
2573  v |= v >> 1; // first round down to one less than a power of 2
2574  v |= v >> 2;
2575  v |= v >> 4;
2576  v |= v >> 8;
2577  v |= v >> 16;
2578  return DeBruijn[uint32_t(v * 0x07C4ACDDU) >> 27];
2579 #endif
2580 }
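The software fallback above first "smears" the highest set bit into every lower position, turning `v` into `2^(k+1) - 1`, and then uses a De Bruijn-style multiply to recover `k`. A standalone sketch (the name `findHighestOn32` is illustrative):

```cpp
#include <cassert>
#include <cstdint>

// Propagate the highest set bit downwards so v becomes 2^(k+1)-1, then a
// De Bruijn-style multiply maps that pattern to k, the highest bit index.
inline uint32_t findHighestOn32(uint32_t v)
{
    assert(v != 0u);
    static const unsigned char DeBruijn[32] = {
        0, 9, 1, 10, 13, 21, 2, 29, 11, 14, 16, 18, 22, 25, 3, 30,
        8, 12, 20, 28, 15, 17, 24, 7, 19, 27, 23, 6, 26, 5, 4, 31};
    v |= v >> 1; // after these five steps, all bits below the highest are set
    v |= v >> 2;
    v |= v >> 4;
    v |= v >> 8;
    v |= v >> 16;
    return DeBruijn[uint32_t(v * 0x07C4ACDDU) >> 27];
}
```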
2581 
2582 /// @brief Returns the index of the lowest, i.e. least significant, on bit in the specified 64 bit word
2583 ///
2584 /// @warning Assumes that at least one bit is set in the word, i.e. @a v != uint64_t(0)!
2585 NANOVDB_HOSTDEV_DISABLE_WARNING
2586 __hostdev__ static inline uint32_t FindLowestOn(uint64_t v)
2587 {
2588  NANOVDB_ASSERT(v);
2589 #if (defined(__CUDA_ARCH__) || defined(__HIP__)) && defined(NANOVDB_USE_INTRINSICS)
2590  return __ffsll(static_cast<unsigned long long int>(v)) - 1; // __ffsll uses one-based indexing, hence the - 1
2591 #elif defined(_MSC_VER) && defined(NANOVDB_USE_INTRINSICS)
2592  unsigned long index;
2593  _BitScanForward64(&index, v);
2594  return static_cast<uint32_t>(index);
2595 #elif (defined(__GNUC__) || defined(__clang__)) && defined(NANOVDB_USE_INTRINSICS)
2596  return static_cast<uint32_t>(__builtin_ctzll(v));
2597 #else
2598  //NANO_WARNING("Using software implementation for FindLowestOn(uint64_t)")
2599  static const unsigned char DeBruijn[64] = {
2600  0, 1, 2, 53, 3, 7, 54, 27, 4, 38, 41, 8, 34, 55, 48, 28,
2601  62, 5, 39, 46, 44, 42, 22, 9, 24, 35, 59, 56, 49, 18, 29, 11,
2602  63, 52, 6, 26, 37, 40, 33, 47, 61, 45, 43, 21, 23, 58, 17, 10,
2603  51, 25, 36, 32, 60, 20, 57, 16, 50, 31, 19, 15, 30, 14, 13, 12,
2604  };
2605 // disable unary minus on unsigned warning
2606 #if defined(_MSC_VER) && !defined(__NVCC__)
2607 #pragma warning(push)
2608 #pragma warning(disable : 4146)
2609 #endif
2610  return DeBruijn[uint64_t((v & -v) * UINT64_C(0x022FDD63CC95386D)) >> 58];
2611 #if defined(_MSC_VER) && !defined(__NVCC__)
2612 #pragma warning(pop)
2613 #endif
2614 
2615 #endif
2616 }
2617 
2618 /// @brief Returns the index of the highest, i.e. most significant, on bit in the specified 64 bit word
2619 ///
2620 /// @warning Assumes that at least one bit is set in the word, i.e. @a v != uint64_t(0)!
2621 NANOVDB_HOSTDEV_DISABLE_WARNING
2622 __hostdev__ static inline uint32_t FindHighestOn(uint64_t v)
2623 {
2624  NANOVDB_ASSERT(v);
2625 #if (defined(__CUDA_ARCH__) || defined(__HIP__)) && defined(NANOVDB_USE_INTRINSICS)
2626  return sizeof(uint64_t) * 8 - 1 - __clzll(static_cast<unsigned long long int>(v));
2627 #elif defined(_MSC_VER) && defined(NANOVDB_USE_INTRINSICS)
2628  unsigned long index;
2629  _BitScanReverse64(&index, v);
2630  return static_cast<uint32_t>(index);
2631 #elif (defined(__GNUC__) || defined(__clang__)) && defined(NANOVDB_USE_INTRINSICS)
2632  return sizeof(unsigned long) * 8 - 1 - __builtin_clzll(v);
2633 #else
2634  const uint32_t* p = reinterpret_cast<const uint32_t*>(&v); // note: assumes little-endian word order
2635  return p[1] ? 32u + FindHighestOn(p[1]) : FindHighestOn(p[0]);
2636 #endif
2637 }
2638 
2639 // ----------------------------> CountOn <--------------------------------------
2640 
2641 /// @return Number of bits that are on in the specified 64-bit word
2642 NANOVDB_HOSTDEV_DISABLE_WARNING
2643 __hostdev__ inline uint32_t CountOn(uint64_t v)
2644 {
2645 #if (defined(__CUDA_ARCH__) || defined(__HIP__)) && defined(NANOVDB_USE_INTRINSICS)
2646  //#warning Using popcll for CountOn
2647  return __popcll(v);
2648 // __popcnt64 intrinsic support was added in VS 2019 16.8
2649 #elif defined(_MSC_VER) && defined(_M_X64) && (_MSC_VER >= 1928) && defined(NANOVDB_USE_INTRINSICS)
2650  //#warning Using popcnt64 for CountOn
2651  return uint32_t(__popcnt64(v));
2652 #elif (defined(__GNUC__) || defined(__clang__)) && defined(NANOVDB_USE_INTRINSICS)
2653  //#warning Using builtin_popcountll for CountOn
2654  return __builtin_popcountll(v);
2655 #else // use software implementation
2656  //NANO_WARNING("Using software implementation for CountOn")
2657  v = v - ((v >> 1) & uint64_t(0x5555555555555555));
2658  v = (v & uint64_t(0x3333333333333333)) + ((v >> 2) & uint64_t(0x3333333333333333));
2659  return (((v + (v >> 4)) & uint64_t(0xF0F0F0F0F0F0F0F)) * uint64_t(0x101010101010101)) >> 56;
2660 #endif
2661 }
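The software branch of CountOn is the standard SWAR ("SIMD within a register") population count: it sums bits in 2-bit fields, then 4-bit fields, then lets a single multiply accumulate all eight byte counts into the most significant byte. A self-contained sketch of that same fallback (the name `countOn64` is illustrative):

```cpp
#include <cassert>
#include <cstdint>

// Software popcount mirroring CountOn's fallback branch: pairwise bit sums,
// then nibble sums, then a multiply that gathers all byte counts into the
// top byte, which the final shift extracts.
inline uint32_t countOn64(uint64_t v)
{
    v = v - ((v >> 1) & UINT64_C(0x5555555555555555));
    v = (v & UINT64_C(0x3333333333333333)) + ((v >> 2) & UINT64_C(0x3333333333333333));
    return uint32_t((((v + (v >> 4)) & UINT64_C(0x0F0F0F0F0F0F0F0F)) * UINT64_C(0x0101010101010101)) >> 56);
}
```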
2662 
2663 // ----------------------------> BitFlags <--------------------------------------
2664 
2665 template<int N>
2666 struct BitArray;
2667 template<>
2668 struct BitArray<8>
2669 {
2670  uint8_t mFlags{0};
2671 };
2672 template<>
2673 struct BitArray<16>
2674 {
2675  uint16_t mFlags{0};
2676 };
2677 template<>
2678 struct BitArray<32>
2679 {
2680  uint32_t mFlags{0};
2681 };
2682 template<>
2683 struct BitArray<64>
2684 {
2685  uint64_t mFlags{0};
2686 };
2687 
2688 template<int N>
2689 class BitFlags : public BitArray<N>
2690 {
2691 protected:
2692  using BitArray<N>::mFlags;
2693 
2694 public:
2695  using Type = decltype(mFlags);
2696  BitFlags() {}
2697  BitFlags(std::initializer_list<uint8_t> list)
2698  {
2699  for (auto bit : list)
2700  mFlags |= static_cast<Type>(1 << bit);
2701  }
2702  template<typename MaskT>
2703  BitFlags(std::initializer_list<MaskT> list)
2704  {
2705  for (auto mask : list)
2706  mFlags |= static_cast<Type>(mask);
2707  }
2708  __hostdev__ Type data() const { return mFlags; }
2709  __hostdev__ Type& data() { return mFlags; }
2710  __hostdev__ void initBit(std::initializer_list<uint8_t> list)
2711  {
2712  mFlags = 0u;
2713  for (auto bit : list)
2714  mFlags |= static_cast<Type>(1 << bit);
2715  }
2716  template<typename MaskT>
2717  __hostdev__ void initMask(std::initializer_list<MaskT> list)
2718  {
2719  mFlags = 0u;
2720  for (auto mask : list)
2721  mFlags |= static_cast<Type>(mask);
2722  }
2723  //__hostdev__ Type& data() { return mFlags; }
2724  //__hostdev__ Type data() const { return mFlags; }
2725  __hostdev__ Type getFlags() const { return mFlags & (static_cast<Type>(GridFlags::End) - 1u); } // mask out everything except relevant bits
2726 
2727  __hostdev__ void setOn() { mFlags = ~Type(0u); }
2728  __hostdev__ void setOff() { mFlags = Type(0u); }
2729 
2730  __hostdev__ void setBitOn(uint8_t bit) { mFlags |= static_cast<Type>(1 << bit); }
2731  __hostdev__ void setBitOff(uint8_t bit) { mFlags &= ~static_cast<Type>(1 << bit); }
2732 
2733  __hostdev__ void setBitOn(std::initializer_list<uint8_t> list)
2734  {
2735  for (auto bit : list)
2736  mFlags |= static_cast<Type>(1 << bit);
2737  }
2738  __hostdev__ void setBitOff(std::initializer_list<uint8_t> list)
2739  {
2740  for (auto bit : list)
2741  mFlags &= ~static_cast<Type>(1 << bit);
2742  }
2743 
2744  template<typename MaskT>
2745  __hostdev__ void setMaskOn(MaskT mask) { mFlags |= static_cast<Type>(mask); }
2746  template<typename MaskT>
2747  __hostdev__ void setMaskOff(MaskT mask) { mFlags &= ~static_cast<Type>(mask); }
2748 
2749  template<typename MaskT>
2750  __hostdev__ void setMaskOn(std::initializer_list<MaskT> list)
2751  {
2752  for (auto mask : list)
2753  mFlags |= static_cast<Type>(mask);
2754  }
2755  template<typename MaskT>
2756  __hostdev__ void setMaskOff(std::initializer_list<MaskT> list)
2757  {
2758  for (auto mask : list)
2759  mFlags &= ~static_cast<Type>(mask);
2760  }
2761 
2762  __hostdev__ void setBit(uint8_t bit, bool on) { on ? this->setBitOn(bit) : this->setBitOff(bit); }
2763  template<typename MaskT>
2764  __hostdev__ void setMask(MaskT mask, bool on) { on ? this->setMaskOn(mask) : this->setMaskOff(mask); }
2765 
2766  __hostdev__ bool isOn() const { return mFlags == ~Type(0u); }
2767  __hostdev__ bool isOff() const { return mFlags == Type(0u); }
2768  __hostdev__ bool isBitOn(uint8_t bit) const { return 0 != (mFlags & static_cast<Type>(1 << bit)); }
2769  __hostdev__ bool isBitOff(uint8_t bit) const { return 0 == (mFlags & static_cast<Type>(1 << bit)); }
2770  template<typename MaskT>
2771  __hostdev__ bool isMaskOn(MaskT mask) const { return 0 != (mFlags & static_cast<Type>(mask)); }
2772  template<typename MaskT>
2773  __hostdev__ bool isMaskOff(MaskT mask) const { return 0 == (mFlags & static_cast<Type>(mask)); }
2774  /// @brief return true if any of the masks in the list are on
2775  template<typename MaskT>
2776  __hostdev__ bool isMaskOn(std::initializer_list<MaskT> list) const
2777  {
2778  for (auto mask : list)
2779  if (0 != (mFlags & static_cast<Type>(mask)))
2780  return true;
2781  return false;
2782  }
2783  /// @brief return true if any of the masks in the list are off
2784  template<typename MaskT>
2785  __hostdev__ bool isMaskOff(std::initializer_list<MaskT> list) const
2786  {
2787  for (auto mask : list)
2788  if (0 == (mFlags & static_cast<Type>(mask)))
2789  return true;
2790  return false;
2791  }
2792  /// @brief required for backwards compatibility
2793  __hostdev__ BitFlags& operator=(Type n)
2794  {
2795  mFlags = n;
2796  return *this;
2797  }
2798 }; // BitFlags<N>
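BitFlags exposes two addressing schemes: `setBitOn`/`isBitOn` take a bit *index*, while `setMaskOn`/`isMaskOn` take a whole bit *pattern* (e.g. a GridFlags enum value) that is OR'ed in directly. A minimal stand-in demonstrating the distinction (MiniFlags is illustrative, not part of this header):

```cpp
#include <cassert>
#include <cstdint>

// Minimal stand-in for BitFlags<8>: bit methods address a single bit by
// index, mask methods operate on an entire bit pattern at once.
struct MiniFlags
{
    uint8_t mFlags = 0;
    void setBitOn(uint8_t bit) { mFlags |= uint8_t(1 << bit); }
    void setBitOff(uint8_t bit) { mFlags &= uint8_t(~(1 << bit)); }
    void setMaskOn(uint8_t mask) { mFlags |= mask; }
    bool isBitOn(uint8_t bit) const { return 0 != (mFlags & uint8_t(1 << bit)); }
    bool isMaskOn(uint8_t mask) const { return 0 != (mFlags & mask); } // true if ANY mask bit is on
};
```

Note that `isMaskOn` returns true if *any* bit of the mask is set, matching the "any of the masks in the list are on" semantics documented above.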
2799 
2800 // ----------------------------> Mask <--------------------------------------
2801 
2802 /// @brief Bit-mask to encode active states and facilitate sequential iterators
2803 /// and a fast codec for I/O compression.
2804 template<uint32_t LOG2DIM>
2805 class Mask
2806 {
2807 public:
2808  static constexpr uint32_t SIZE = 1U << (3 * LOG2DIM); // Number of bits in mask
2809  static constexpr uint32_t WORD_COUNT = SIZE >> 6; // Number of 64 bit words
2810 
2811  /// @brief Return the memory footprint in bytes of this Mask
2812  __hostdev__ static size_t memUsage() { return sizeof(Mask); }
2813 
2814  /// @brief Return the number of bits available in this Mask
2815  __hostdev__ static uint32_t bitCount() { return SIZE; }
2816 
2817  /// @brief Return the number of machine words used by this Mask
2818  __hostdev__ static uint32_t wordCount() { return WORD_COUNT; }
2819 
2820  /// @brief Return the total number of set bits in this Mask
2821  __hostdev__ uint32_t countOn() const
2822  {
2823  uint32_t sum = 0;
2824  for (const uint64_t *w = mWords, *q = w + WORD_COUNT; w != q; ++w)
2825  sum += CountOn(*w);
2826  return sum;
2827  }
2828 
2829  /// @brief Return the number of lower set bits in mask up to but excluding the i'th bit
2830  inline __hostdev__ uint32_t countOn(uint32_t i) const
2831  {
2832  uint32_t n = i >> 6, sum = CountOn(mWords[n] & ((uint64_t(1) << (i & 63u)) - 1u));
2833  for (const uint64_t* w = mWords; n--; ++w)
2834  sum += CountOn(*w);
2835  return sum;
2836  }
2837 
2838  template<bool On>
2839  class Iterator
2840  {
2841  public:
2842  __hostdev__ Iterator()
2843  : mPos(Mask::SIZE)
2844  , mParent(nullptr)
2845  {
2846  }
2847  __hostdev__ Iterator(uint32_t pos, const Mask* parent)
2848  : mPos(pos)
2849  , mParent(parent)
2850  {
2851  }
2852  Iterator(const Iterator&) = default;
2853  Iterator& operator=(const Iterator&) = default;
2854  __hostdev__ uint32_t operator*() const { return mPos; }
2855  __hostdev__ uint32_t pos() const { return mPos; }
2856  __hostdev__ operator bool() const { return mPos != Mask::SIZE; }
2857  __hostdev__ Iterator& operator++()
2858  {
2859  mPos = mParent->findNext<On>(mPos + 1);
2860  return *this;
2861  }
2862  __hostdev__ Iterator operator++(int)
2863  {
2864  auto tmp = *this;
2865  ++(*this);
2866  return tmp;
2867  }
2868 
2869  private:
2870  uint32_t mPos;
2871  const Mask* mParent;
2872  }; // Member class Iterator
2873 
2874  class DenseIterator
2875  {
2876  public:
2877  __hostdev__ DenseIterator(uint32_t pos = Mask::SIZE)
2878  : mPos(pos)
2879  {
2880  }
2881  DenseIterator& operator=(const DenseIterator&) = default;
2882  __hostdev__ uint32_t operator*() const { return mPos; }
2883  __hostdev__ uint32_t pos() const { return mPos; }
2884  __hostdev__ operator bool() const { return mPos != Mask::SIZE; }
2885  __hostdev__ DenseIterator& operator++()
2886  {
2887  ++mPos;
2888  return *this;
2889  }
2890  __hostdev__ DenseIterator operator++(int)
2891  {
2892  auto tmp = *this;
2893  ++mPos;
2894  return tmp;
2895  }
2896 
2897  private:
2898  uint32_t mPos;
2899  }; // Member class DenseIterator
2900 
2901  using OnIterator = Iterator<true>;
2902  using OffIterator = Iterator<false>;
2903 
2904  __hostdev__ OnIterator beginOn() const { return OnIterator(this->findFirst<true>(), this); }
2905 
2906  __hostdev__ OffIterator beginOff() const { return OffIterator(this->findFirst<false>(), this); }
2907 
2908  __hostdev__ DenseIterator beginAll() const { return DenseIterator(0); }
2909 
2910  /// @brief Initialize all bits to zero.
2911  __hostdev__ Mask()
2912  {
2913  for (uint32_t i = 0; i < WORD_COUNT; ++i)
2914  mWords[i] = 0;
2915  }
2916  __hostdev__ Mask(bool on)
2917  {
2918  const uint64_t v = on ? ~uint64_t(0) : uint64_t(0);
2919  for (uint32_t i = 0; i < WORD_COUNT; ++i)
2920  mWords[i] = v;
2921  }
2922 
2923  /// @brief Copy constructor
2924  __hostdev__ Mask(const Mask& other)
2925  {
2926  for (uint32_t i = 0; i < WORD_COUNT; ++i)
2927  mWords[i] = other.mWords[i];
2928  }
2929 
2930  /// @brief Return a pointer to the list of words of the bit mask
2931  __hostdev__ uint64_t* words() { return mWords; }
2932  __hostdev__ const uint64_t* words() const { return mWords; }
2933 
2934  /// @brief Assignment operator that works with openvdb::util::NodeMask
2935  template<typename MaskT = Mask>
2936  __hostdev__ Mask& operator=(const MaskT& other)
2937  {
2938  static_assert(sizeof(Mask) == sizeof(MaskT), "Mismatching sizeof");
2939  static_assert(WORD_COUNT == MaskT::WORD_COUNT, "Mismatching word count");
2940  static_assert(LOG2DIM == MaskT::LOG2DIM, "Mismatching LOG2DIM");
2941  auto* src = reinterpret_cast<const uint64_t*>(&other);
2942  for (uint64_t *dst = mWords, *end = dst + WORD_COUNT; dst != end; ++dst)
2943  *dst = *src++;
2944  return *this;
2945  }
2946 
2947  __hostdev__ Mask& operator=(const Mask& other)
2948  {
2949  memcpy64(mWords, other.mWords, WORD_COUNT);
2950  return *this;
2951  }
2952 
2953  __hostdev__ bool operator==(const Mask& other) const
2954  {
2955  for (uint32_t i = 0; i < WORD_COUNT; ++i) {
2956  if (mWords[i] != other.mWords[i])
2957  return false;
2958  }
2959  return true;
2960  }
2961 
2962  __hostdev__ bool operator!=(const Mask& other) const { return !((*this) == other); }
2963 
2964  /// @brief Return true if the given bit is set.
2965  __hostdev__ bool isOn(uint32_t n) const { return 0 != (mWords[n >> 6] & (uint64_t(1) << (n & 63))); }
2966 
2967  /// @brief Return true if the given bit is NOT set.
2968  __hostdev__ bool isOff(uint32_t n) const { return 0 == (mWords[n >> 6] & (uint64_t(1) << (n & 63))); }
2969 
2970  /// @brief Return true if all the bits are set in this Mask.
2971  __hostdev__ bool isOn() const
2972  {
2973  for (uint32_t i = 0; i < WORD_COUNT; ++i)
2974  if (mWords[i] != ~uint64_t(0))
2975  return false;
2976  return true;
2977  }
2978 
2979  /// @brief Return true if none of the bits are set in this Mask.
2980  __hostdev__ bool isOff() const
2981  {
2982  for (uint32_t i = 0; i < WORD_COUNT; ++i)
2983  if (mWords[i] != uint64_t(0))
2984  return false;
2985  return true;
2986  }
2987 
2988  /// @brief Set the specified bit on.
2989  __hostdev__ void setOn(uint32_t n) { mWords[n >> 6] |= uint64_t(1) << (n & 63); }
2990  /// @brief Set the specified bit off.
2991  __hostdev__ void setOff(uint32_t n) { mWords[n >> 6] &= ~(uint64_t(1) << (n & 63)); }
2992 
2993 #if defined(__CUDACC__) // the following functions only run on the GPU!
2994  __device__ inline void setOnAtomic(uint32_t n)
2995  {
2996  atomicOr(reinterpret_cast<unsigned long long int*>(this) + (n >> 6), 1ull << (n & 63));
2997  }
2998  __device__ inline void setOffAtomic(uint32_t n)
2999  {
3000  atomicAnd(reinterpret_cast<unsigned long long int*>(this) + (n >> 6), ~(1ull << (n & 63)));
3001  }
3002  __device__ inline void setAtomic(uint32_t n, bool on)
3003  {
3004  on ? this->setOnAtomic(n) : this->setOffAtomic(n);
3005  }
3006 #endif
3007  /// @brief Set the specified bit on or off.
3008  __hostdev__ void set(uint32_t n, bool on)
3009  {
3010 #if 1 // switch between branchless (1) and branched (0) implementations
3011  auto& word = mWords[n >> 6];
3012  n &= 63;
3013  word &= ~(uint64_t(1) << n);
3014  word |= uint64_t(on) << n;
3015 #else
3016  on ? this->setOn(n) : this->setOff(n);
3017 #endif
3018  }
3019 
3020  /// @brief Set all bits on
3021  __hostdev__ void setOn()
3022  {
3023  for (uint32_t i = 0; i < WORD_COUNT; ++i)
3024  mWords[i] = ~uint64_t(0);
3025  }
3026 
3027  /// @brief Set all bits off
3028  __hostdev__ void setOff()
3029  {
3030  for (uint32_t i = 0; i < WORD_COUNT; ++i)
3031  mWords[i] = uint64_t(0);
3032  }
3033 
3034  /// @brief Set all bits on or off
3035  __hostdev__ void set(bool on)
3036  {
3037  const uint64_t v = on ? ~uint64_t(0) : uint64_t(0);
3038  for (uint32_t i = 0; i < WORD_COUNT; ++i)
3039  mWords[i] = v;
3040  }
3041  /// @brief Toggle the state of all bits in the mask
3042  __hostdev__ void toggle()
3043  {
3044  uint32_t n = WORD_COUNT;
3045  for (auto* w = mWords; n--; ++w)
3046  *w = ~*w;
3047  }
3048  __hostdev__ void toggle(uint32_t n) { mWords[n >> 6] ^= uint64_t(1) << (n & 63); }
3049 
3050  /// @brief Bitwise intersection
3051  __hostdev__ Mask& operator&=(const Mask& other)
3052  {
3053  uint64_t* w1 = mWords;
3054  const uint64_t* w2 = other.mWords;
3055  for (uint32_t n = WORD_COUNT; n--; ++w1, ++w2)
3056  *w1 &= *w2;
3057  return *this;
3058  }
3059  /// @brief Bitwise union
3060  __hostdev__ Mask& operator|=(const Mask& other)
3061  {
3062  uint64_t* w1 = mWords;
3063  const uint64_t* w2 = other.mWords;
3064  for (uint32_t n = WORD_COUNT; n--; ++w1, ++w2)
3065  *w1 |= *w2;
3066  return *this;
3067  }
3068  /// @brief Bitwise difference
3069  __hostdev__ Mask& operator-=(const Mask& other)
3070  {
3071  uint64_t* w1 = mWords;
3072  const uint64_t* w2 = other.mWords;
3073  for (uint32_t n = WORD_COUNT; n--; ++w1, ++w2)
3074  *w1 &= ~*w2;
3075  return *this;
3076  }
3077  /// @brief Bitwise XOR
3078  __hostdev__ Mask& operator^=(const Mask& other)
3079  {
3080  uint64_t* w1 = mWords;
3081  const uint64_t* w2 = other.mWords;
3082  for (uint32_t n = WORD_COUNT; n--; ++w1, ++w2)
3083  *w1 ^= *w2;
3084  return *this;
3085  }
3086 
3087  NANOVDB_HOSTDEV_DISABLE_WARNING
3088  template<bool ON>
3089  __hostdev__ uint32_t findFirst() const
3090  {
3091  uint32_t n = 0u;
3092  const uint64_t* w = mWords;
3093  for (; n < WORD_COUNT && !(ON ? *w : ~*w); ++w, ++n)
3094  ;
3095  return n < WORD_COUNT ? (n << 6) + FindLowestOn(ON ? *w : ~*w) : SIZE;
3096  }
3097 
3098  NANOVDB_HOSTDEV_DISABLE_WARNING
3099  template<bool ON>
3100  __hostdev__ uint32_t findNext(uint32_t start) const
3101  {
3102  uint32_t n = start >> 6; // initiate
3103  if (n >= WORD_COUNT)
3104  return SIZE; // check for out of bounds
3105  uint32_t m = start & 63u;
3106  uint64_t b = ON ? mWords[n] : ~mWords[n];
3107  if (b & (uint64_t(1u) << m))
3108  return start; // simple case: start is on/off
3109  b &= ~uint64_t(0u) << m; // mask out lower bits
3110  while (!b && ++n < WORD_COUNT)
3111  b = ON ? mWords[n] : ~mWords[n]; // find next non-zero word
3112  return b ? (n << 6) + FindLowestOn(b) : SIZE; // catch last word=0
3113  }
3114 
3115  NANOVDB_HOSTDEV_DISABLE_WARNING
3116  template<bool ON>
3117  __hostdev__ uint32_t findPrev(uint32_t start) const
3118  {
3119  uint32_t n = start >> 6; // initiate
3120  if (n >= WORD_COUNT)
3121  return SIZE; // check for out of bounds
3122  uint32_t m = start & 63u;
3123  uint64_t b = ON ? mWords[n] : ~mWords[n];
3124  if (b & (uint64_t(1u) << m))
3125  return start; // simple case: start is on/off
3126  b &= (uint64_t(1u) << m) - 1u; // mask out higher bits
3127  while (!b && n)
3128  b = ON ? mWords[--n] : ~mWords[--n]; // find previous non-zero word
3129  return b ? (n << 6) + FindHighestOn(b) : SIZE; // catch first word=0
3130  }
3131 
3132 private:
3133  uint64_t mWords[WORD_COUNT];
3134 }; // Mask class
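Mask::countOn(i) is a rank query: it popcounts the partial word containing bit i (masked so only bits below i survive) and adds the popcounts of all whole words beneath it. A compact model of that logic over a tiny two-word mask (the names `popcount64` and `rank` are illustrative):

```cpp
#include <cassert>
#include <cstdint>

// Kernighan-style popcount: each iteration clears the lowest set bit.
inline uint32_t popcount64(uint64_t v)
{
    uint32_t n = 0;
    for (; v; v &= v - 1) ++n;
    return n;
}

// Rank over a 128-bit mask, mirroring Mask::countOn(i): count set bits in
// words below the one containing bit i, plus the masked partial word.
inline uint32_t rank(const uint64_t words[2], uint32_t i)
{
    uint32_t n = i >> 6; // index of the word containing bit i
    uint32_t sum = popcount64(words[n] & ((uint64_t(1) << (i & 63u)) - 1u));
    for (uint32_t w = 0; w < n; ++w)
        sum += popcount64(words[w]);
    return sum;
}
```

Note the boundary case: when `i` is a multiple of 64, `(1 << (i & 63)) - 1` is zero, so the partial-word term correctly contributes nothing.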
3135 
3136 // ----------------------------> Map <--------------------------------------
3137 
3138 /// @brief Defines an affine transform and its inverse represented as a 3x3 matrix and a vec3 translation
3139 struct Map
3140 { // 264B (not 32B aligned!)
3141  float mMatF[9]; // 9*4B <- 3x3 matrix
3142  float mInvMatF[9]; // 9*4B <- 3x3 matrix
3143  float mVecF[3]; // 3*4B <- translation
3144  float mTaperF; // 4B, placeholder for taper value
3145  double mMatD[9]; // 9*8B <- 3x3 matrix
3146  double mInvMatD[9]; // 9*8B <- 3x3 matrix
3147  double mVecD[3]; // 3*8B <- translation
3148  double mTaperD; // 8B, placeholder for taper value
3149 
3150  /// @brief Default constructor for the identity map
3151  __hostdev__ Map()
3152  : mMatF{1.0f, 0.0f, 0.0f, 0.0f, 1.0f, 0.0f, 0.0f, 0.0f, 1.0f}
3153  , mInvMatF{1.0f, 0.0f, 0.0f, 0.0f, 1.0f, 0.0f, 0.0f, 0.0f, 1.0f}
3154  , mVecF{0.0f, 0.0f, 0.0f}
3155  , mTaperF{1.0f}
3156  , mMatD{1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0}
3157  , mInvMatD{1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0}
3158  , mVecD{0.0, 0.0, 0.0}
3159  , mTaperD{1.0}
3160  {
3161  }
3162  __hostdev__ Map(double s, const Vec3d& t = Vec3d(0.0, 0.0, 0.0))
3163  : mMatF{float(s), 0.0f, 0.0f, 0.0f, float(s), 0.0f, 0.0f, 0.0f, float(s)}
3164  , mInvMatF{1.0f / float(s), 0.0f, 0.0f, 0.0f, 1.0f / float(s), 0.0f, 0.0f, 0.0f, 1.0f / float(s)}
3165  , mVecF{float(t[0]), float(t[1]), float(t[2])}
3166  , mTaperF{1.0f}
3167  , mMatD{s, 0.0, 0.0, 0.0, s, 0.0, 0.0, 0.0, s}
3168  , mInvMatD{1.0 / s, 0.0, 0.0, 0.0, 1.0 / s, 0.0, 0.0, 0.0, 1.0 / s}
3169  , mVecD{t[0], t[1], t[2]}
3170  , mTaperD{1.0}
3171  {
3172  }
3173 
3174  /// @brief Initialize the member data from 3x3 or 4x4 matrices
3175  /// @note This is not __hostdev__ since MatT=openvdb::Mat4d would then produce warnings
3176  template<typename MatT, typename Vec3T>
3177  void set(const MatT& mat, const MatT& invMat, const Vec3T& translate, double taper = 1.0);
3178 
3179  /// @brief Initialize the member data from 4x4 matrices
3180  /// @note The last (4th) row of invMat is actually ignored.
3181  /// This is not __hostdev__ since Mat4T=openvdb::Mat4d would then produce warnings
3182  template<typename Mat4T>
3183  void set(const Mat4T& mat, const Mat4T& invMat, double taper = 1.0) { this->set(mat, invMat, mat[3], taper); }
3184 
3185  template<typename Vec3T>
3186  void set(double scale, const Vec3T& translation, double taper = 1.0);
3187 
3188  /// @brief Apply the forward affine transformation to a vector using 64bit floating point arithmetics.
3189  /// @note Typically this operation is used for the scale, rotation and translation of index -> world mapping
3190  /// @tparam Vec3T Template type of the 3D vector to be mapped
3191  /// @param ijk 3D vector to be mapped - typically floating point index coordinates
3192  /// @return Forward mapping for affine transformation, i.e. (mat x ijk) + translation
3193  template<typename Vec3T>
3194  __hostdev__ Vec3T applyMap(const Vec3T& ijk) const { return matMult(mMatD, mVecD, ijk); }
3195 
3196  /// @brief Apply the forward affine transformation to a vector using 32bit floating point arithmetics.
3197  /// @note Typically this operation is used for the scale, rotation and translation of index -> world mapping
3198  /// @tparam Vec3T Template type of the 3D vector to be mapped
3199  /// @param ijk 3D vector to be mapped - typically floating point index coordinates
3200  /// @return Forward mapping for affine transformation, i.e. (mat x ijk) + translation
3201  template<typename Vec3T>
3202  __hostdev__ Vec3T applyMapF(const Vec3T& ijk) const { return matMult(mMatF, mVecF, ijk); }
3203 
3204  /// @brief Apply the linear forward 3x3 transformation to an input 3d vector using 64bit floating point arithmetics,
3205  /// e.g. scale and rotation WITHOUT translation.
3206  /// @note Typically this operation is used for scale and rotation from index -> world mapping
3207  /// @tparam Vec3T Template type of the 3D vector to be mapped
3208  /// @param ijk 3D vector to be mapped - typically floating point index coordinates
3209  /// @return linear forward 3x3 mapping of the input vector
3210  template<typename Vec3T>
3211  __hostdev__ Vec3T applyJacobian(const Vec3T& ijk) const { return matMult(mMatD, ijk); }
3212 
3213  /// @brief Apply the linear forward 3x3 transformation to an input 3d vector using 32bit floating point arithmetics,
3214  /// e.g. scale and rotation WITHOUT translation.
3215  /// @note Typically this operation is used for scale and rotation from index -> world mapping
3216  /// @tparam Vec3T Template type of the 3D vector to be mapped
3217  /// @param ijk 3D vector to be mapped - typically floating point index coordinates
3218  /// @return linear forward 3x3 mapping of the input vector
3219  template<typename Vec3T>
3220  __hostdev__ Vec3T applyJacobianF(const Vec3T& ijk) const { return matMult(mMatF, ijk); }
3221 
3222  /// @brief Apply the inverse affine mapping to a vector using 64bit floating point arithmetics.
3223  /// @note Typically this operation is used for the world -> index mapping
3224  /// @tparam Vec3T Template type of the 3D vector to be mapped
3225  /// @param xyz 3D vector to be mapped - typically floating point world coordinates
3226  /// @return Inverse affine mapping of the input @c xyz i.e. (xyz - translation) x mat^-1
3227  template<typename Vec3T>
3228  __hostdev__ Vec3T applyInverseMap(const Vec3T& xyz) const
3229  {
3230  return matMult(mInvMatD, Vec3T(xyz[0] - mVecD[0], xyz[1] - mVecD[1], xyz[2] - mVecD[2]));
3231  }
3232 
3233  /// @brief Apply the inverse affine mapping to a vector using 32bit floating point arithmetics.
3234  /// @note Typically this operation is used for the world -> index mapping
3235  /// @tparam Vec3T Template type of the 3D vector to be mapped
3236  /// @param xyz 3D vector to be mapped - typically floating point world coordinates
3237  /// @return Inverse affine mapping of the input @c xyz i.e. (xyz - translation) x mat^-1
3238  template<typename Vec3T>
3239  __hostdev__ Vec3T applyInverseMapF(const Vec3T& xyz) const
3240  {
3241  return matMult(mInvMatF, Vec3T(xyz[0] - mVecF[0], xyz[1] - mVecF[1], xyz[2] - mVecF[2]));
3242  }
3243 
3244  /// @brief Apply the linear inverse 3x3 transformation to an input 3d vector using 64bit floating point arithmetics,
3245  /// e.g. inverse scale and inverse rotation WITHOUT translation.
3246  /// @note Typically this operation is used for scale and rotation from world -> index mapping
3247  /// @tparam Vec3T Template type of the 3D vector to be mapped
3248  /// @param xyz 3D vector to be mapped - typically floating point world coordinates
3249  /// @return linear inverse 3x3 mapping of the input vector i.e. xyz x mat^-1
3250  template<typename Vec3T>
3251  __hostdev__ Vec3T applyInverseJacobian(const Vec3T& xyz) const { return matMult(mInvMatD, xyz); }
3252 
3253  /// @brief Apply the linear inverse 3x3 transformation to an input 3d vector using 32bit floating point arithmetics,
3254  /// e.g. inverse scale and inverse rotation WITHOUT translation.
3255  /// @note Typically this operation is used for scale and rotation from world -> index mapping
3256  /// @tparam Vec3T Template type of the 3D vector to be mapped
3257  /// @param xyz 3D vector to be mapped - typically floating point world coordinates
3258  /// @return linear inverse 3x3 mapping of the input vector i.e. xyz x mat^-1
3259  template<typename Vec3T>
3260  __hostdev__ Vec3T applyInverseJacobianF(const Vec3T& xyz) const { return matMult(mInvMatF, xyz); }
3261 
3262  /// @brief Apply the transposed inverse 3x3 transformation to an input 3d vector using 64bit floating point arithmetics,
3263  /// e.g. inverse scale and inverse rotation WITHOUT translation.
3264  /// @note Typically this operation is used for scale and rotation from world -> index mapping
3265  /// @tparam Vec3T Template type of the 3D vector to be mapped
3266  /// @param xyz 3D vector to be mapped - typically floating point world coordinates
3267  /// @return linear inverse 3x3 mapping of the input vector i.e. xyz x mat^-1
3268  template<typename Vec3T>
3269  __hostdev__ Vec3T applyIJT(const Vec3T& xyz) const { return matMultT(mInvMatD, xyz); }
3270  template<typename Vec3T>
3271  __hostdev__ Vec3T applyIJTF(const Vec3T& xyz) const { return matMultT(mInvMatF, xyz); }
3272 
3273  /// @brief Return the voxel size in each coordinate direction, measured at the origin
3274  __hostdev__ Vec3d getVoxelSize() const { return this->applyMap(Vec3d(1)) - this->applyMap(Vec3d(0)); }
3275 }; // Map
3276 
3277 template<typename MatT, typename Vec3T>
3278 inline void Map::set(const MatT& mat, const MatT& invMat, const Vec3T& translate, double taper)
3279 {
3280  float * mf = mMatF, *vf = mVecF, *mif = mInvMatF;
3281  double *md = mMatD, *vd = mVecD, *mid = mInvMatD;
3282  mTaperF = static_cast<float>(taper);
3283  mTaperD = taper;
3284  for (int i = 0; i < 3; ++i) {
3285  *vd++ = translate[i]; //translation
3286  *vf++ = static_cast<float>(translate[i]); //translation
3287  for (int j = 0; j < 3; ++j) {
3288  *md++ = mat[j][i]; //transposed
3289  *mid++ = invMat[j][i];
3290  *mf++ = static_cast<float>(mat[j][i]); //transposed
3291  *mif++ = static_cast<float>(invMat[j][i]);
3292  }
3293  }
3294 }
3295 
3296 template<typename Vec3T>
3297 inline void Map::set(double dx, const Vec3T& trans, double taper)
3298 {
3299  NANOVDB_ASSERT(dx > 0.0);
3300  const double mat[3][3] = { {dx, 0.0, 0.0}, // row 0
3301  {0.0, dx, 0.0}, // row 1
3302  {0.0, 0.0, dx} }; // row 2
3303  const double idx = 1.0 / dx;
3304  const double invMat[3][3] = { {idx, 0.0, 0.0}, // row 0
3305  {0.0, idx, 0.0}, // row 1
3306  {0.0, 0.0, idx} }; // row 2
3307  this->set(mat, invMat, trans, taper);
3308 }
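Map stores both the forward matrix and its precomputed inverse so that index-to-world (applyMap) and world-to-index (applyInverseMap) transforms never invert a matrix at runtime. A plain-double sketch of the round trip for the uniform-scale-plus-translation case set up above (SimpleMap is illustrative, not part of this header):

```cpp
#include <array>
#include <cassert>

// Forward map: world = scale * ijk + translation, as in Map::applyMap for a
// uniform-scale map; the inverse undoes it without any matrix inversion.
struct SimpleMap
{
    double scale;
    std::array<double, 3> trans;
    std::array<double, 3> applyMap(const std::array<double, 3>& ijk) const
    {
        return {scale * ijk[0] + trans[0], scale * ijk[1] + trans[1], scale * ijk[2] + trans[2]};
    }
    std::array<double, 3> applyInverseMap(const std::array<double, 3>& xyz) const
    {
        return {(xyz[0] - trans[0]) / scale, (xyz[1] - trans[1]) / scale, (xyz[2] - trans[2]) / scale};
    }
};
```

With voxel size 0.5 and translation (1, 2, 3), index point (4, 8, 12) maps to world point (3, 6, 9), and applyInverseMap maps it back exactly.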
3309 
3310 // ----------------------------> GridBlindMetaData <--------------------------------------
3311 
3312 struct NANOVDB_ALIGN(NANOVDB_DATA_ALIGNMENT) GridBlindMetaData
3313 { // 288 bytes
3314  static const int MaxNameSize = 256; // due to NULL termination the maximum length is one less!
3315  int64_t mDataOffset; // byte offset to the blind data, relative to this GridBlindMetaData.
3316  uint64_t mValueCount; // number of blind values, e.g. point count
3317  uint32_t mValueSize;// byte size of each value, e.g. 4 if mDataType=Float and 1 if mDataType=Unknown since that amounts to char
3318  GridBlindDataSemantic mSemantic; // semantic meaning of the data.
3319  GridBlindDataClass mDataClass; // 4 bytes
3320  GridType mDataType; // 4 bytes
3321  char mName[MaxNameSize]; // note this includes the NULL termination
3322  // no padding required for 32 byte alignment
3323 
3324  // disallow copy-construction since methods like blindData and getBlindData use the this pointer!
3325  GridBlindMetaData(const GridBlindMetaData&) = delete;
3326 
3327  // disallow copy-assignment since methods like blindData and getBlindData use the this pointer!
3328  const GridBlindMetaData& operator=(const GridBlindMetaData&) = delete;
3329 
3330  __hostdev__ void setBlindData(void* blindData) { mDataOffset = PtrDiff(blindData, this); }
3331 
3332  // unsafe
3333  __hostdev__ const void* blindData() const {return PtrAdd<void>(this, mDataOffset);}
3334 
3335  /// @brief Get a const pointer to the blind data represented by this meta data
3336  /// @tparam BlindDataT Expected value type of the blind data.
3337  /// @return Returns NULL if mDataType!=mapToGridType<BlindDataT>(), else a const pointer of type BlindDataT.
3338  /// @note Use mDataType=Unknown if BlindDataT is a custom data type unknown to NanoVDB.
3339  template<typename BlindDataT>
3340  __hostdev__ const BlindDataT* getBlindData() const
3341  {
3342  //if (mDataType != mapToGridType<BlindDataT>()) printf("getBlindData mismatch\n");
3343  return mDataType == mapToGridType<BlindDataT>() ? PtrAdd<BlindDataT>(this, mDataOffset) : nullptr;
3344  }
3345 
3346  /// @brief return true if this meta data has a valid combination of semantic, class and value tags
3347  __hostdev__ bool isValid() const
3348  {
3349  auto check = [&]()->bool{
3350  switch (mDataType){
3351  case GridType::Unknown: return mValueSize==1u;// i.e. we encode data as mValueCount chars
3352  case GridType::Float: return mValueSize==4u;
3353  case GridType::Double: return mValueSize==8u;
3354  case GridType::Int16: return mValueSize==2u;
3355  case GridType::Int32: return mValueSize==4u;
3356  case GridType::Int64: return mValueSize==8u;
3357  case GridType::Vec3f: return mValueSize==12u;
3358  case GridType::Vec3d: return mValueSize==24u;
3359  case GridType::Half: return mValueSize==2u;
3360  case GridType::RGBA8: return mValueSize==4u;
3361  case GridType::Fp8: return mValueSize==1u;
3362  case GridType::Fp16: return mValueSize==2u;
3363  case GridType::Vec4f: return mValueSize==16u;
3364  case GridType::Vec4d: return mValueSize==32u;
3365  case GridType::Vec3u8: return mValueSize==3u;
3366  case GridType::Vec3u16: return mValueSize==6u;
3367  default: return true;}// all other combinations are valid
3368  };
3369  return nanovdb::isValid(mDataClass, mSemantic, mDataType) && check();
3370  }
3371 
3372  /// @brief return size in bytes of the blind data represented by this blind meta data
3373  /// @note This size includes possible padding for 32 byte alignment. The actual amount
3374  /// of blind data is mValueCount * mValueSize
3375  __hostdev__ uint64_t blindDataSize() const
3376  {
3377  return AlignUp<NANOVDB_DATA_ALIGNMENT>(mValueCount * mValueSize);
3378  }
3379 }; // GridBlindMetaData
3380 
3381 // ----------------------------> NodeTrait <--------------------------------------
3382 
3383 /// @brief Struct to derive node type from its level in a given
3384 /// grid, tree or root while preserving constness
3385 template<typename GridOrTreeOrRootT, int LEVEL>
3386 struct NodeTrait;
3387 
3388 // Partial template specialization of above Node struct
3389 template<typename GridOrTreeOrRootT>
3390 struct NodeTrait<GridOrTreeOrRootT, 0>
3391 {
3392  static_assert(GridOrTreeOrRootT::RootNodeType::LEVEL == 3, "Tree depth is not supported");
3393  using Type = typename GridOrTreeOrRootT::LeafNodeType;
3394  using type = typename GridOrTreeOrRootT::LeafNodeType;
3395 };
3396 template<typename GridOrTreeOrRootT>
3397 struct NodeTrait<const GridOrTreeOrRootT, 0>
3398 {
3399  static_assert(GridOrTreeOrRootT::RootNodeType::LEVEL == 3, "Tree depth is not supported");
3400  using Type = const typename GridOrTreeOrRootT::LeafNodeType;
3401  using type = const typename GridOrTreeOrRootT::LeafNodeType;
3402 };
3403 
3404 template<typename GridOrTreeOrRootT>
3405 struct NodeTrait<GridOrTreeOrRootT, 1>
3406 {
3407  static_assert(GridOrTreeOrRootT::RootNodeType::LEVEL == 3, "Tree depth is not supported");
3408  using Type = typename GridOrTreeOrRootT::RootNodeType::ChildNodeType::ChildNodeType;
3409  using type = typename GridOrTreeOrRootT::RootNodeType::ChildNodeType::ChildNodeType;
3410 };
3411 template<typename GridOrTreeOrRootT>
3412 struct NodeTrait<const GridOrTreeOrRootT, 1>
3413 {
3414  static_assert(GridOrTreeOrRootT::RootNodeType::LEVEL == 3, "Tree depth is not supported");
3415  using Type = const typename GridOrTreeOrRootT::RootNodeType::ChildNodeType::ChildNodeType;
3416  using type = const typename GridOrTreeOrRootT::RootNodeType::ChildNodeType::ChildNodeType;
3417 };
3418 template<typename GridOrTreeOrRootT>
3419 struct NodeTrait<GridOrTreeOrRootT, 2>
3420 {
3421  static_assert(GridOrTreeOrRootT::RootNodeType::LEVEL == 3, "Tree depth is not supported");
3422  using Type = typename GridOrTreeOrRootT::RootNodeType::ChildNodeType;
3423  using type = typename GridOrTreeOrRootT::RootNodeType::ChildNodeType;
3424 };
3425 template<typename GridOrTreeOrRootT>
3426 struct NodeTrait<const GridOrTreeOrRootT, 2>
3427 {
3428  static_assert(GridOrTreeOrRootT::RootNodeType::LEVEL == 3, "Tree depth is not supported");
3429  using Type = const typename GridOrTreeOrRootT::RootNodeType::ChildNodeType;
3430  using type = const typename GridOrTreeOrRootT::RootNodeType::ChildNodeType;
3431 };
3432 template<typename GridOrTreeOrRootT>
3433 struct NodeTrait<GridOrTreeOrRootT, 3>
3434 {
3435  static_assert(GridOrTreeOrRootT::RootNodeType::LEVEL == 3, "Tree depth is not supported");
3436  using Type = typename GridOrTreeOrRootT::RootNodeType;
3437  using type = typename GridOrTreeOrRootT::RootNodeType;
3438 };
3439 
3440 template<typename GridOrTreeOrRootT>
3441 struct NodeTrait<const GridOrTreeOrRootT, 3>
3442 {
3443  static_assert(GridOrTreeOrRootT::RootNodeType::LEVEL == 3, "Tree depth is not supported");
3444  using Type = const typename GridOrTreeOrRootT::RootNodeType;
3445  using type = const typename GridOrTreeOrRootT::RootNodeType;
3446 };
3447 
3448 // ----------------------------&gt; Forward declarations of random access methods &lt;--------------------------------------
3449 
3450 template<typename BuildT>
3451 struct GetValue;
3452 template<typename BuildT>
3453 struct SetValue;
3454 template<typename BuildT>
3455 struct SetVoxel;
3456 template<typename BuildT>
3457 struct GetState;
3458 template<typename BuildT>
3459 struct GetDim;
3460 template<typename BuildT>
3461 struct GetLeaf;
3462 template<typename BuildT>
3463 struct ProbeValue;
3464 template<typename BuildT>
3465 struct GetNodeInfo;
3466 
3467 // ----------------------------> Grid <--------------------------------------
3468 
3469 /*
3470  The following class and comment are for internal use only
3471 
3472  Memory layout:
3473 
3474  Grid -> 39 x double (world bbox and affine transformation)
3475  Tree -> Root 3 x ValueType + int32_t + N x Tiles (background,min,max,tileCount + tileCount x Tiles)
3476 
3477  N2 upper InternalNodes each with 2 bit masks, N2 tiles, and min/max values
3478 
3479  N1 lower InternalNodes each with 2 bit masks, N1 tiles, and min/max values
3480 
3481  N0 LeafNodes each with a bit mask, N0 ValueTypes and min/max
3482 
3483  Example layout: ("---" implies it has a custom offset, "..." implies zero or more)
3484  [GridData][TreeData]---[RootData][ROOT TILES...]---[InternalData<5>]---[InternalData<4>]---[LeafData<3>]---[BLINDMETA...]---[BLIND0]---[BLIND1]---etc.
3485 */
3486 
3487 /// @brief Struct with all the member data of the Grid (useful during serialization of an openvdb grid)
3488 ///
3489 /// @note The transform is assumed to be affine (so linear) and have uniform scale! So frustum transforms
3490 /// and non-uniform scaling are not supported (primarily because they complicate ray-tracing in index space)
3491 ///
3492 /// @note No client code should (or can) interface with this struct so it can safely be ignored!
3493 struct NANOVDB_ALIGN(NANOVDB_DATA_ALIGNMENT) GridData
3494 { // sizeof(GridData) = 672B
3495  static const int MaxNameSize = 256; // due to NULL termination the maximum length is one less
3496  uint64_t mMagic; // 8B (0) magic to validate it is valid grid data.
3497  uint64_t mChecksum; // 8B (8). Checksum of grid buffer.
3498  Version mVersion; // 4B (16) major, minor, and patch version numbers
3499  BitFlags<32> mFlags; // 4B (20). flags for grid.
3500  uint32_t mGridIndex; // 4B (24). Index of this grid in the buffer
3501  uint32_t mGridCount; // 4B (28). Total number of grids in the buffer
3502  uint64_t mGridSize; // 8B (32). byte count of this entire grid occupied in the buffer.
3503  char mGridName[MaxNameSize]; // 256B (40)
3504  Map mMap; // 264B (296). affine transformation between index and world space in both single and double precision
3505  BBox<Vec3d> mWorldBBox; // 48B (560). floating-point AABB of active values in WORLD SPACE (2 x 3 doubles)
3506  Vec3d mVoxelSize; // 24B (608). size of a voxel in world units
3507  GridClass mGridClass; // 4B (632).
3508  GridType mGridType; // 4B (636).
3509  int64_t mBlindMetadataOffset; // 8B (640). offset to beginning of GridBlindMetaData structures that follow this grid.
3510  uint32_t mBlindMetadataCount; // 4B (648). count of GridBlindMetaData structures that follow this grid.
3511  uint32_t mData0; // 4B (652)
3512  uint64_t mData1, mData2; // 2x8B (656) padding to 32 B alignment. mData1 is used for the total number of values indexed by an IndexGrid
3513  /// @brief Use this method to initialize most member data
3514  __hostdev__ GridData& operator=(const GridData& other)
3515  {
3516  static_assert(8 * 84 == sizeof(GridData), "GridData has unexpected size");
3517  memcpy64(this, &other, 84);
3518  return *this;
3519  }
3520  __hostdev__ void init(std::initializer_list<GridFlags> list = {GridFlags::IsBreadthFirst},
3521  uint64_t gridSize = 0u,
3522  const Map& map = Map(),
3523  GridType gridType = GridType::Unknown,
3524  GridClass gridClass = GridClass::Unknown)
3525  {
3526 #ifdef NANOVDB_USE_NEW_MAGIC_NUMBERS
3527  mMagic = NANOVDB_MAGIC_GRID;
3528 #else
3529  mMagic = NANOVDB_MAGIC_NUMBER;
3530 #endif
3531  mChecksum = ~uint64_t(0);// all 64 bits ON means checksum is disabled
3532  mVersion = Version();
3533  mFlags.initMask(list);
3534  mGridIndex = 0u;
3535  mGridCount = 1u;
3536  mGridSize = gridSize;
3537  mGridName[0] = '\0';
3538  mMap = map;
3539  mWorldBBox = BBox<Vec3d>();// invalid bbox
3540  mVoxelSize = map.getVoxelSize();
3541  mGridClass = gridClass;
3542  mGridType = gridType;
3543  mBlindMetadataOffset = mGridSize; // i.e. no blind data
3544  mBlindMetadataCount = 0u; // i.e. no blind data
3545  mData0 = 0u; // zero padding
3546  mData1 = 0u; // only used for index and point grids
3547  mData2 = NANOVDB_MAGIC_GRID; // since version 32.6.0 (might be removed in the future)
3548  }
3549  /// @brief return true if the magic number and the version are both valid
3550  __hostdev__ bool isValid() const {
3551  if (mMagic == NANOVDB_MAGIC_GRID || mData2 == NANOVDB_MAGIC_GRID) return true;
3552  bool test = mMagic == NANOVDB_MAGIC_NUMBER;// could be GridData or io::FileHeader
3553  if (test) test = mVersion.isCompatible();
3554  if (test) test = mGridCount > 0u && mGridIndex < mGridCount;
3555  if (test) test = mGridClass < GridClass::End && mGridType < GridType::End;
3556  return test;
3557  }
3558  // Set and unset various bit flags
3559  __hostdev__ void setMinMaxOn(bool on = true) { mFlags.setMask(GridFlags::HasMinMax, on); }
3560  __hostdev__ void setBBoxOn(bool on = true) { mFlags.setMask(GridFlags::HasBBox, on); }
3561  __hostdev__ void setLongGridNameOn(bool on = true) { mFlags.setMask(GridFlags::HasLongGridName, on); }
3562  __hostdev__ void setAverageOn(bool on = true) { mFlags.setMask(GridFlags::HasAverage, on); }
3563  __hostdev__ void setStdDeviationOn(bool on = true) { mFlags.setMask(GridFlags::HasStdDeviation, on); }
3564  __hostdev__ bool setGridName(const char* src)
3565  {
3566  char *dst = mGridName, *end = dst + MaxNameSize;
3567  while (*src != '\0' && dst < end - 1)
3568  *dst++ = *src++;
3569  while (dst < end)
3570  *dst++ = '\0';
3571  return *src == '\0'; // returns true if input grid name is NOT longer than MaxNameSize characters
3572  }
3573  // Affine transformations based on double precision
3574  template<typename Vec3T>
3575  __hostdev__ Vec3T applyMap(const Vec3T& xyz) const { return mMap.applyMap(xyz); } // Pos: index -> world
3576  template<typename Vec3T>
3577  __hostdev__ Vec3T applyInverseMap(const Vec3T& xyz) const { return mMap.applyInverseMap(xyz); } // Pos: world -> index
3578  template<typename Vec3T>
3579  __hostdev__ Vec3T applyJacobian(const Vec3T& xyz) const { return mMap.applyJacobian(xyz); } // Dir: index -> world
3580  template<typename Vec3T>
3581  __hostdev__ Vec3T applyInverseJacobian(const Vec3T& xyz) const { return mMap.applyInverseJacobian(xyz); } // Dir: world -> index
3582  template<typename Vec3T>
3583  __hostdev__ Vec3T applyIJT(const Vec3T& xyz) const { return mMap.applyIJT(xyz); }
3584  // Affine transformations based on single precision
3585  template<typename Vec3T>
3586  __hostdev__ Vec3T applyMapF(const Vec3T& xyz) const { return mMap.applyMapF(xyz); } // Pos: index -> world
3587  template<typename Vec3T>
3588  __hostdev__ Vec3T applyInverseMapF(const Vec3T& xyz) const { return mMap.applyInverseMapF(xyz); } // Pos: world -> index
3589  template<typename Vec3T>
3590  __hostdev__ Vec3T applyJacobianF(const Vec3T& xyz) const { return mMap.applyJacobianF(xyz); } // Dir: index -> world
3591  template<typename Vec3T>
3592  __hostdev__ Vec3T applyInverseJacobianF(const Vec3T& xyz) const { return mMap.applyInverseJacobianF(xyz); } // Dir: world -> index
3593  template<typename Vec3T>
3594  __hostdev__ Vec3T applyIJTF(const Vec3T& xyz) const { return mMap.applyIJTF(xyz); }
3595 
3596  /// @brief Return a non-const uint8_t pointer to the tree
3597  __hostdev__ uint8_t* treePtr() { return reinterpret_cast<uint8_t*>(this + 1); }// TreeData is always right after GridData
3598  //__hostdev__ TreeData* treePtr() { return reinterpret_cast<TreeData*>(this + 1); }// TreeData is always right after GridData
3599 
3600  /// @brief Return a const uint8_t pointer to the tree
3601  __hostdev__ const uint8_t* treePtr() const { return reinterpret_cast<const uint8_t*>(this + 1); }// TreeData is always right after GridData
3602  //__hostdev__ const TreeData* treePtr() const { return reinterpret_cast<const TreeData*>(this + 1); }// TreeData is always right after GridData
3603 
3604  /// @brief Return a const uint8_t pointer to the first node at @c LEVEL
3605  /// @tparam LEVEL of the node. LEVEL 0 means leaf node and LEVEL 3 means root node
3606  /// @warning If no nodes exist at @c LEVEL, NULL is returned
3607  template <uint32_t LEVEL>
3608  __hostdev__ const uint8_t* nodePtr() const
3609  {
3610  static_assert(LEVEL >= 0 && LEVEL <= 3, "invalid LEVEL template parameter");
3611  auto *treeData = this->treePtr();
3612  auto nodeOffset = *reinterpret_cast<const uint64_t*>(treeData + 8*LEVEL);// skip LEVEL uint64_t
3613  return nodeOffset ? PtrAdd<uint8_t>(treeData, nodeOffset) : nullptr;
3614  }
3615 
3616  /// @brief Return a non-const uint8_t pointer to the first node at @c LEVEL
3617  /// @tparam LEVEL of the node. LEVEL 0 means leaf node and LEVEL 3 means root node
3618  /// @warning If no nodes exist at @c LEVEL, NULL is returned
3619  template <uint32_t LEVEL>
3620  __hostdev__ uint8_t* nodePtr(){return const_cast<uint8_t*>(const_cast<const GridData*>(this)->template nodePtr<LEVEL>());}
3621 
3622  /// @brief Returns a const reference to the blindMetaData at the specified linear offset.
3623  ///
3624  /// @warning The linear offset is assumed to be in the valid range
3625  __hostdev__ const GridBlindMetaData* blindMetaData(uint32_t n) const
3626  {
3627  NANOVDB_ASSERT(n < mBlindMetadataCount);
3628  return PtrAdd<GridBlindMetaData>(this, mBlindMetadataOffset) + n;
3629  }
3630 
3631  __hostdev__ const char* gridName() const
3632  {
3633  if (mFlags.isMaskOn(GridFlags::HasLongGridName)) {// search for first blind meta data that contains a name
3634  NANOVDB_ASSERT(mBlindMetadataCount > 0);
3635  for (uint32_t i = 0; i < mBlindMetadataCount; ++i) {
3636  const auto* metaData = this->blindMetaData(i);// EXTREMELY important to be a pointer
3637  if (metaData->mDataClass == GridBlindDataClass::GridName) {
3638  NANOVDB_ASSERT(metaData->mDataType == GridType::Unknown);
3639  return metaData->template getBlindData<const char>();
3640  }
3641  }
3642  NANOVDB_ASSERT(false); // should never hit this!
3643  }
3644  return mGridName;
3645  }
3646 
3647  /// @brief Return memory usage in bytes for this class only.
3648  __hostdev__ static uint64_t memUsage() { return sizeof(GridData); }
3649 
3650  /// @brief return AABB of active values in world space
3651  __hostdev__ const BBox<Vec3d>& worldBBox() const { return mWorldBBox; }
3652 
3653  /// @brief return AABB of active values in index space
3654  __hostdev__ const CoordBBox& indexBBox() const {return *(const CoordBBox*)(this->nodePtr<3>());}
3655 
3656  /// @brief return the size of the root table
3657  __hostdev__ uint32_t rootTableSize() const {
3658  if (const uint8_t *root = this->nodePtr<3>()) {
3659  return *(const uint32_t*)(root + sizeof(CoordBBox));
3660  }
3661  return 0u;
3662  }
3663 
3664  /// @brief test if the grid is empty, i.e. the root table has size 0
3665  /// @return true if this grid contains no data whatsoever
3666  __hostdev__ bool isEmpty() const {return this->rootTableSize() == 0u;}
3667 
3668  /// @brief return true if RootData follows TreeData in memory without any extra padding
3669  /// @details TreeData is always following right after GridData, but the same might not be true for RootData
3670  __hostdev__ bool isRootConnected() const { return *(const uint64_t*)((const char*)(this + 1) + 24) == 64u;}
3671 }; // GridData
3672 
3673 // Forward declaration of accelerated random access class
3674 template<typename BuildT, int LEVEL0 = -1, int LEVEL1 = -1, int LEVEL2 = -1>
3675 class ReadAccessor;
3676 
3677 template<typename BuildT>
3678 using DefaultReadAccessor = ReadAccessor&lt;BuildT, 0, 1, 2&gt;;
3679 
3680 /// @brief Highest level of the data structure. Contains a tree and a world->index
3681 /// transform (that currently only supports uniform scaling and translation).
3682 ///
3683 /// @note Client code should only interface with the API of this class
3684 template<typename TreeT>
3685 class Grid : public GridData
3686 {
3687 public:
3688  using TreeType = TreeT;
3689  using RootType = typename TreeT::RootType;
3690  using RootNodeType = RootType;
3691  using UpperNodeType = typename RootNodeType::ChildNodeType;
3692  using LowerNodeType = typename UpperNodeType::ChildNodeType;
3693  using LeafNodeType = typename RootType::LeafNodeType;
3694  using DataType = GridData;
3695  using ValueType = typename TreeT::ValueType;
3696  using BuildType = typename TreeT::BuildType; // in rare cases BuildType != ValueType, e.g. then BuildType = ValueMask and ValueType = bool
3697  using CoordType = typename TreeT::CoordType;
3698  using AccessorType = DefaultReadAccessor&lt;BuildType&gt;;
3699 
3700  /// @brief Disallow constructions, copy and assignment
3701  ///
3702  /// @note Only a Serializer, defined elsewhere, can instantiate this class
3703  Grid(const Grid&) = delete;
3704  Grid& operator=(const Grid&) = delete;
3705  ~Grid() = delete;
3706 
3707  __hostdev__ Version version() const { return DataType::mVersion; }
3708 
3709  __hostdev__ DataType* data() { return reinterpret_cast<DataType*>(this); }
3710 
3711  __hostdev__ const DataType* data() const { return reinterpret_cast<const DataType*>(this); }
3712 
3713  /// @brief Return memory usage in bytes for this class only.
3714  //__hostdev__ static uint64_t memUsage() { return sizeof(GridData); }
3715 
3716  /// @brief Return the memory footprint of the entire grid, i.e. including all nodes and blind data
3717  __hostdev__ uint64_t gridSize() const { return DataType::mGridSize; }
3718 
3719  /// @brief Return index of this grid in the buffer
3720  __hostdev__ uint32_t gridIndex() const { return DataType::mGridIndex; }
3721 
3722  /// @brief Return total number of grids in the buffer
3723  __hostdev__ uint32_t gridCount() const { return DataType::mGridCount; }
3724 
3725  /// @brief Return the total number of values indexed by this IndexGrid
3726  ///
3727  /// @note This method is only defined for IndexGrid = NanoGrid<ValueIndex || ValueOnIndex || ValueIndexMask || ValueOnIndexMask>
3728  template<typename T = BuildType>
3729  __hostdev__ typename enable_if<BuildTraits<T>::is_index, const uint64_t&>::type
3730  valueCount() const { return DataType::mData1; }
3731 
3732  /// @brief Return the total number of points indexed by this PointGrid
3733  ///
3734  /// @note This method is only defined for PointGrid = NanoGrid<Point>
3735  template<typename T = BuildType>
3736  __hostdev__ typename enable_if<is_same<T, Point>::value, const uint64_t&>::type
3737  pointCount() const { return DataType::mData1; }
3738 
3739  /// @brief Return a const reference to the tree
3740  __hostdev__ const TreeT& tree() const { return *reinterpret_cast<const TreeT*>(this->treePtr()); }
3741 
3742  /// @brief Return a non-const reference to the tree
3743  __hostdev__ TreeT& tree() { return *reinterpret_cast<TreeT*>(this->treePtr()); }
3744 
3745  /// @brief Return a new instance of a ReadAccessor used to access values in this grid
3746  __hostdev__ AccessorType getAccessor() const { return AccessorType(this->tree().root()); }
3747 
3748  /// @brief Return a const reference to the size of a voxel in world units
3749  __hostdev__ const Vec3d& voxelSize() const { return DataType::mVoxelSize; }
3750 
3751  /// @brief Return a const reference to the Map for this grid
3752  __hostdev__ const Map& map() const { return DataType::mMap; }
3753 
3754  /// @brief world to index space transformation
3755  template<typename Vec3T>
3756  __hostdev__ Vec3T worldToIndex(const Vec3T& xyz) const { return this->applyInverseMap(xyz); }
3757 
3758  /// @brief index to world space transformation
3759  template<typename Vec3T>
3760  __hostdev__ Vec3T indexToWorld(const Vec3T& xyz) const { return this->applyMap(xyz); }
3761 
3762  /// @brief transformation from index space direction to world space direction
3763  /// @warning assumes dir to be normalized
3764  template<typename Vec3T>
3765  __hostdev__ Vec3T indexToWorldDir(const Vec3T& dir) const { return this->applyJacobian(dir); }
3766 
3767  /// @brief transformation from world space direction to index space direction
3768  /// @warning assumes dir to be normalized
3769  template<typename Vec3T>
3770  __hostdev__ Vec3T worldToIndexDir(const Vec3T& dir) const { return this->applyInverseJacobian(dir); }
3771 
3772  /// @brief transform the gradient from index space to world space.
3773  /// @details Applies the inverse jacobian transform map.
3774  template<typename Vec3T>
3775  __hostdev__ Vec3T indexToWorldGrad(const Vec3T& grad) const { return this->applyIJT(grad); }
3776 
3777  /// @brief world to index space transformation
3778  template<typename Vec3T>
3779  __hostdev__ Vec3T worldToIndexF(const Vec3T& xyz) const { return this->applyInverseMapF(xyz); }
3780 
3781  /// @brief index to world space transformation
3782  template<typename Vec3T>
3783  __hostdev__ Vec3T indexToWorldF(const Vec3T& xyz) const { return this->applyMapF(xyz); }
3784 
3785  /// @brief transformation from index space direction to world space direction
3786  /// @warning assumes dir to be normalized
3787  template<typename Vec3T>
3788  __hostdev__ Vec3T indexToWorldDirF(const Vec3T& dir) const { return this->applyJacobianF(dir); }
3789 
3790  /// @brief transformation from world space direction to index space direction
3791  /// @warning assumes dir to be normalized
3792  template<typename Vec3T>
3793  __hostdev__ Vec3T worldToIndexDirF(const Vec3T& dir) const { return this->applyInverseJacobianF(dir); }
3794 
3795  /// @brief Transforms the gradient from index space to world space.
3796  /// @details Applies the inverse jacobian transform map.
3797  template<typename Vec3T>
3798  __hostdev__ Vec3T indexToWorldGradF(const Vec3T& grad) const { return DataType::applyIJTF(grad); }
3799 
3800  /// @brief Computes an AABB of active values in world space
3801  //__hostdev__ const BBox<Vec3d>& worldBBox() const { return DataType::mWorldBBox; }
3802 
3803  /// @brief Computes an AABB of active values in index space
3804  ///
3805  /// @note This method is returning a floating point bounding box and not a CoordBBox. This makes
3806  /// it more useful for clipping rays.
3807  //__hostdev__ const BBox<CoordType>& indexBBox() const { return this->tree().bbox(); }
3808 
3809  /// @brief Return the total number of active voxels in this tree.
3810  __hostdev__ uint64_t activeVoxelCount() const { return this->tree().activeVoxelCount(); }
3811 
3812  /// @brief Methods related to the classification of this grid
3813  __hostdev__ bool isValid() const { return DataType::isValid(); }
3814  __hostdev__ const GridType& gridType() const { return DataType::mGridType; }
3815  __hostdev__ const GridClass& gridClass() const { return DataType::mGridClass; }
3816  __hostdev__ bool isLevelSet() const { return DataType::mGridClass == GridClass::LevelSet; }
3817  __hostdev__ bool isFogVolume() const { return DataType::mGridClass == GridClass::FogVolume; }
3818  __hostdev__ bool isStaggered() const { return DataType::mGridClass == GridClass::Staggered; }
3819  __hostdev__ bool isPointIndex() const { return DataType::mGridClass == GridClass::PointIndex; }
3820  __hostdev__ bool isGridIndex() const { return DataType::mGridClass == GridClass::IndexGrid; }
3821  __hostdev__ bool isPointData() const { return DataType::mGridClass == GridClass::PointData; }
3822  __hostdev__ bool isMask() const { return DataType::mGridClass == GridClass::Topology; }
3823  __hostdev__ bool isUnknown() const { return DataType::mGridClass == GridClass::Unknown; }
3824  __hostdev__ bool hasMinMax() const { return DataType::mFlags.isMaskOn(GridFlags::HasMinMax); }
3825  __hostdev__ bool hasBBox() const { return DataType::mFlags.isMaskOn(GridFlags::HasBBox); }
3826  __hostdev__ bool hasLongGridName() const { return DataType::mFlags.isMaskOn(GridFlags::HasLongGridName); }
3827  __hostdev__ bool hasAverage() const { return DataType::mFlags.isMaskOn(GridFlags::HasAverage); }
3828  __hostdev__ bool hasStdDeviation() const { return DataType::mFlags.isMaskOn(GridFlags::HasStdDeviation); }
3829  __hostdev__ bool isBreadthFirst() const { return DataType::mFlags.isMaskOn(GridFlags::IsBreadthFirst); }
3830 
3831  /// @brief return true if the specified node type is laid out breadth-first in memory and has a fixed size.
3832  /// This allows for sequential access to the nodes.
3833  template<typename NodeT>
3834  __hostdev__ bool isSequential() const { return NodeT::FIXED_SIZE && this->isBreadthFirst(); }
3835 
3836  /// @brief return true if the specified node level is laid out breadth-first in memory and has a fixed size.
3837  /// This allows for sequential access to the nodes.
3838  template<int LEVEL>
3839  __hostdev__ bool isSequential() const { return NodeTrait<TreeT, LEVEL>::type::FIXED_SIZE && this->isBreadthFirst(); }
3840 
3841  /// @brief return true if nodes at all levels can safely be accessed with simple linear offsets
3842  __hostdev__ bool isSequential() const { return UpperNodeType::FIXED_SIZE && LowerNodeType::FIXED_SIZE && LeafNodeType::FIXED_SIZE && this->isBreadthFirst(); }
3843 
3844  /// @brief Return a c-string with the name of this grid
3845  __hostdev__ const char* gridName() const { return DataType::gridName(); }
3846 
3847  /// @brief Return a c-string with the name of this grid, truncated to 255 characters
3848  __hostdev__ const char* shortGridName() const { return DataType::mGridName; }
3849 
3850  /// @brief Return checksum of the grid buffer.
3851  __hostdev__ uint64_t checksum() const { return DataType::mChecksum; }
3852 
3853  /// @brief Return true if this grid is empty, i.e. contains no values or nodes.
3854  //__hostdev__ bool isEmpty() const { return this->tree().isEmpty(); }
3855 
3856  /// @brief Return the count of blind-data encoded in this grid
3857  __hostdev__ uint32_t blindDataCount() const { return DataType::mBlindMetadataCount; }
3858 
3859  /// @brief Return the index of the first blind data with specified name if found, otherwise -1.
3860  __hostdev__ int findBlindData(const char* name) const;
3861 
3862  /// @brief Return the index of the first blind data with specified semantic if found, otherwise -1.
3863  __hostdev__ int findBlindDataForSemantic(GridBlindDataSemantic semantic) const;
3864 
3865  /// @brief Returns a const pointer to the blindData at the specified linear offset.
3866  ///
3867  /// @warning Pointer might be NULL and the linear offset is assumed to be in the valid range
3868  // this method is deprecated !!!!
3869  __hostdev__ const void* blindData(uint32_t n) const
3870  {
3871  printf("\nnanovdb::Grid::blindData is unsafe and hence deprecated! Please use nanovdb::Grid::getBlindData instead.\n\n");
3872  NANOVDB_ASSERT(n < DataType::mBlindMetadataCount);
3873  return this->blindMetaData(n).blindData();
3874  }
3875 
3876  template <typename BlindDataT>
3877  __hostdev__ const BlindDataT* getBlindData(uint32_t n) const
3878  {
3879  if (n >= DataType::mBlindMetadataCount) return nullptr;// index is out of bounds
3880  return this->blindMetaData(n).template getBlindData<BlindDataT>();// NULL if mismatching BlindDataT
3881  }
3882 
3883  template <typename BlindDataT>
3884  __hostdev__ BlindDataT* getBlindData(uint32_t n)
3885  {
3886  if (n >= DataType::mBlindMetadataCount) return nullptr;// index is out of bounds
3887  return const_cast<BlindDataT*>(this->blindMetaData(n).template getBlindData<BlindDataT>());// NULL if mismatching BlindDataT
3888  }
3889 
3890  __hostdev__ const GridBlindMetaData& blindMetaData(uint32_t n) const { return *DataType::blindMetaData(n); }
3891 
3892 private:
3893  static_assert(sizeof(GridData) % NANOVDB_DATA_ALIGNMENT == 0, "sizeof(GridData) is misaligned");
3894 }; // Class Grid
3895 
3896 template<typename TreeT>
3897 __hostdev__ int Grid&lt;TreeT&gt;::findBlindDataForSemantic(GridBlindDataSemantic semantic) const
3898 {
3899  for (uint32_t i = 0, n = this->blindDataCount(); i < n; ++i) {
3900  if (this->blindMetaData(i).mSemantic == semantic)
3901  return int(i);
3902  }
3903  return -1;
3904 }
3905 
3906 template<typename TreeT>
3907 __hostdev__ int Grid&lt;TreeT&gt;::findBlindData(const char* name) const
3908 {
3909  auto test = [&](int n) {
3910  const char* str = this->blindMetaData(n).mName;
3911  for (int i = 0; i < GridBlindMetaData::MaxNameSize; ++i) {
3912  if (name[i] != str[i])
3913  return false;
3914  if (name[i] == '\0' && str[i] == '\0')
3915  return true;
3916  }
3917  return true; // all MaxNameSize characters matched
3918  };
3919  for (int i = 0, n = this->blindDataCount(); i < n; ++i)
3920  if (test(i))
3921  return i;
3922  return -1;
3923 }
3924 
3925 // ----------------------------> Tree <--------------------------------------
3926 
3927 struct NANOVDB_ALIGN(NANOVDB_DATA_ALIGNMENT) TreeData
3928 { // sizeof(TreeData) == 64B
3929  int64_t mNodeOffset[4];// 32B, byte offset from this tree to first leaf, lower, upper and root node. A zero offset means no node exists
3930  uint32_t mNodeCount[3]; // 12B, total number of nodes of type: leaf, lower internal, upper internal
3931  uint32_t mTileCount[3]; // 12B, total number of active tile values at the lower internal, upper internal and root node levels
3932  uint64_t mVoxelCount; // 8B, total number of active voxels in the root and all its child nodes.
3933  // No padding since it's always 32B aligned
3934  __hostdev__ TreeData& operator=(const TreeData& other)
3935  {
3936  static_assert(8 * 8 == sizeof(TreeData), "TreeData has unexpected size");
3937  memcpy64(this, &other, 8);
3938  return *this;
3939  }
3940  __hostdev__ void setRoot(const void* root) {mNodeOffset[3] = root ? PtrDiff(root, this) : 0;}
3941  __hostdev__ uint8_t* getRoot() { return mNodeOffset[3] ? PtrAdd<uint8_t>(this, mNodeOffset[3]) : nullptr; }
3942  __hostdev__ const uint8_t* getRoot() const { return mNodeOffset[3] ? PtrAdd<uint8_t>(this, mNodeOffset[3]) : nullptr; }
3943 
3944  template<typename NodeT>
3945  __hostdev__ void setFirstNode(const NodeT* node) {mNodeOffset[NodeT::LEVEL] = node ? PtrDiff(node, this) : 0;}
3946 
3947  __hostdev__ bool isEmpty() const {return mNodeOffset[3] ? *PtrAdd<uint32_t>(this, mNodeOffset[3] + sizeof(BBox<Coord>)) == 0 : true;}
3948 
3949  /// @brief Return the index bounding box of all the active values in this tree, i.e. in all nodes of the tree
3950  __hostdev__ CoordBBox bbox() const {return mNodeOffset[3] ? *PtrAdd<CoordBBox>(this, mNodeOffset[3]) : CoordBBox();}
3951 
3952  /// @brief return true if RootData is laid out immediately after TreeData in memory
3953  __hostdev__ bool isRootNext() const {return mNodeOffset[3] ? mNodeOffset[3] == sizeof(TreeData) : false; }
3954 };// TreeData
3955 
3956 // ----------------------------> GridTree <--------------------------------------
3957 
3958 /// @brief defines a tree type from a grid type while preserving constness
3959 template<typename GridT>
3960 struct GridTree
3961 {
3962  using Type = typename GridT::TreeType;
3963  using type = typename GridT::TreeType;
3964 };
3965 template<typename GridT>
3966 struct GridTree<const GridT>
3967 {
3968  using Type = const typename GridT::TreeType;
3969  using type = const typename GridT::TreeType;
3970 };
3971 
3972 // ----------------------------> Tree <--------------------------------------
3973 
3974 /// @brief VDB Tree, which is a thin wrapper around a RootNode.
3975 template<typename RootT>
3976 class Tree : public TreeData
3977 {
3978  static_assert(RootT::LEVEL == 3, "Tree depth is not supported");
3979  static_assert(RootT::ChildNodeType::LOG2DIM == 5, "Tree configuration is not supported");
3980  static_assert(RootT::ChildNodeType::ChildNodeType::LOG2DIM == 4, "Tree configuration is not supported");
3981  static_assert(RootT::LeafNodeType::LOG2DIM == 3, "Tree configuration is not supported");
3982 
3983 public:
3984  using DataType = TreeData;
3985  using RootType = RootT;
3986  using RootNodeType = RootT;
3987  using UpperNodeType = typename RootNodeType::ChildNodeType;
3988  using LowerNodeType = typename UpperNodeType::ChildNodeType;
3989  using LeafNodeType = typename RootType::LeafNodeType;
3990  using ValueType = typename RootT::ValueType;
3991  using BuildType = typename RootT::BuildType; // in rare cases BuildType != ValueType, e.g. then BuildType = ValueMask and ValueType = bool
3992  using CoordType = typename RootT::CoordType;
3993  using AccessorType = DefaultReadAccessor<BuildType>;
3994 
3995  using Node3 = RootT;
3996  using Node2 = typename RootT::ChildNodeType;
3997  using Node1 = typename Node2::ChildNodeType;
3998  using Node0 = LeafNodeType;
3999 
4000  /// @brief This class cannot be constructed or deleted
4001  Tree() = delete;
4002  Tree(const Tree&) = delete;
4003  Tree& operator=(const Tree&) = delete;
4004  ~Tree() = delete;
4005 
4006  __hostdev__ DataType* data() { return reinterpret_cast<DataType*>(this); }
4007 
4008  __hostdev__ const DataType* data() const { return reinterpret_cast<const DataType*>(this); }
4009 
4010  /// @brief return memory usage in bytes for the class
4011  __hostdev__ static uint64_t memUsage() { return sizeof(DataType); }
4012 
4013  __hostdev__ RootT& root()
4014  {
4015  RootT* ptr = reinterpret_cast<RootT*>(DataType::getRoot());
4016  NANOVDB_ASSERT(ptr);
4017  return *ptr;
4018  }
4019 
4020  __hostdev__ const RootT& root() const
4021  {
4022  const RootT* ptr = reinterpret_cast<const RootT*>(DataType::getRoot());
4023  NANOVDB_ASSERT(ptr);
4024  return *ptr;
4025  }
4026 
4027  __hostdev__ AccessorType getAccessor() const { return AccessorType(this->root()); }
4028 
4029  /// @brief Return the value of the given voxel (regardless of state or location in the tree.)
4030  __hostdev__ ValueType getValue(const CoordType& ijk) const { return this->root().getValue(ijk); }
4031  __hostdev__ ValueType getValue(int i, int j, int k) const { return this->root().getValue(CoordType(i, j, k)); }
4032 
4033  /// @brief Return the active state of the given voxel (regardless of state or location in the tree.)
4034  __hostdev__ bool isActive(const CoordType& ijk) const { return this->root().isActive(ijk); }
4035 
4036  /// @brief Return true if this tree is empty, i.e. contains no values or nodes
4037  //__hostdev__ bool isEmpty() const { return this->root().isEmpty(); }
4038 
4039  /// @brief Combines the previous two methods in a single call
4040  __hostdev__ bool probeValue(const CoordType& ijk, ValueType& v) const { return this->root().probeValue(ijk, v); }
4041 
4042  /// @brief Return a const reference to the background value.
4043  __hostdev__ const ValueType& background() const { return this->root().background(); }
4044 
4045  /// @brief Sets the extrema values of all the active values in this tree, i.e. in all nodes of the tree
4046  __hostdev__ void extrema(ValueType& min, ValueType& max) const;
4047 
4048  /// @brief Return a const reference to the index bounding box of all the active values in this tree, i.e. in all nodes of the tree
4049  //__hostdev__ const BBox<CoordType>& bbox() const { return this->root().bbox(); }
4050 
4051  /// @brief Return the total number of active voxels in this tree.
4052  __hostdev__ uint64_t activeVoxelCount() const { return DataType::mVoxelCount; }
4053 
4054  /// @brief Return the total number of active tiles at the specified level of the tree.
4055  ///
4056  /// @details level = 1,2,3 corresponds to active tile count in lower internal nodes, upper
4057  /// internal nodes, and the root level. Note active values at the leaf level are
4058  /// referred to as active voxels (see activeVoxelCount defined above).
4059  __hostdev__ const uint32_t& activeTileCount(uint32_t level) const
4060  {
4061  NANOVDB_ASSERT(level > 0 && level <= 3); // 1, 2, or 3
4062  return DataType::mTileCount[level - 1];
4063  }
4064 
4065  template<typename NodeT>
4066  __hostdev__ uint32_t nodeCount() const
4067  {
4068  static_assert(NodeT::LEVEL < 3, "Invalid NodeT");
4069  return DataType::mNodeCount[NodeT::LEVEL];
4070  }
4071 
4072  __hostdev__ uint32_t nodeCount(int level) const
4073  {
4074  NANOVDB_ASSERT(level < 3);
4075  return DataType::mNodeCount[level];
4076  }
4077 
4078  __hostdev__ uint32_t totalNodeCount() const
4079  {
4080  return DataType::mNodeCount[0] + DataType::mNodeCount[1] + DataType::mNodeCount[2];
4081  }
4082 
4083  /// @brief return a pointer to the first node of the specified type
4084  ///
4085  /// @warning Note it may return NULL if no nodes exist
4086  template<typename NodeT>
4087  __hostdev__ NodeT* getFirstNode()
4088  {
4089  const int64_t offset = DataType::mNodeOffset[NodeT::LEVEL];
4090  return offset ? PtrAdd<NodeT>(this, offset) : nullptr;
4091  }
4092 
4093  /// @brief return a const pointer to the first node of the specified type
4094  ///
4095  /// @warning Note it may return NULL if no nodes exist
4096  template<typename NodeT>
4097  __hostdev__ const NodeT* getFirstNode() const
4098  {
4099  const int64_t offset = DataType::mNodeOffset[NodeT::LEVEL];
4100  return offset ? PtrAdd<NodeT>(this, offset) : nullptr;
4101  }
4102 
4103  /// @brief return a pointer to the first node at the specified level
4104  ///
4105  /// @warning Note it may return NULL if no nodes exist
4106  template<int LEVEL>
4107  __hostdev__ typename NodeTrait<RootT, LEVEL>::type*
4108  getFirstNode()
4109  {
4110  return this->template getFirstNode<typename NodeTrait<RootT, LEVEL>::type>();
4111  }
4112 
4113  /// @brief return a const pointer to the first node of the specified level
4114  ///
4115  /// @warning Note it may return NULL if no nodes exist
4116  template<int LEVEL>
4117  __hostdev__ const typename NodeTrait<RootT, LEVEL>::type*
4118  getFirstNode() const
4119  {
4120  return this->template getFirstNode<typename NodeTrait<RootT, LEVEL>::type>();
4121  }
4122 
4123  /// @brief Template specializations of getFirstNode
4124  __hostdev__ LeafNodeType* getFirstLeaf() { return this->getFirstNode<LeafNodeType>(); }
4125  __hostdev__ const LeafNodeType* getFirstLeaf() const { return this->getFirstNode<LeafNodeType>(); }
4126  __hostdev__ typename NodeTrait<RootT, 1>::type* getFirstLower() { return this->getFirstNode<1>(); }
4127  __hostdev__ const typename NodeTrait<RootT, 1>::type* getFirstLower() const { return this->getFirstNode<1>(); }
4128  __hostdev__ typename NodeTrait<RootT, 2>::type* getFirstUpper() { return this->getFirstNode<2>(); }
4129  __hostdev__ const typename NodeTrait<RootT, 2>::type* getFirstUpper() const { return this->getFirstNode<2>(); }
4130 
4131  template<typename OpT, typename... ArgsT>
4132  __hostdev__ auto get(const CoordType& ijk, ArgsT&&... args) const
4133  {
4134  return this->root().template get<OpT>(ijk, args...);
4135  }
4136 
4137  template<typename OpT, typename... ArgsT>
4138  __hostdev__ auto set(const CoordType& ijk, ArgsT&&... args)
4139  {
4140  return this->root().template set<OpT>(ijk, args...);
4141  }
4142 
4143 private:
4144  static_assert(sizeof(DataType) % NANOVDB_DATA_ALIGNMENT == 0, "sizeof(TreeData) is misaligned");
4145 
4146 }; // Tree class
4147 
4148 template<typename RootT>
4149  __hostdev__ void Tree<RootT>::extrema(ValueType& min, ValueType& max) const
4150  {
4151  min = this->root().minimum();
4152  max = this->root().maximum();
4153 }
4154 
4155 // --------------------------> RootData <------------------------------------
4156 
4157 /// @brief Struct with all the member data of the RootNode (useful during serialization of an openvdb RootNode)
4158 ///
4159 /// @note No client code should (or can) interface with this struct so it can safely be ignored!
4160 template<typename ChildT>
4161 struct NANOVDB_ALIGN(NANOVDB_DATA_ALIGNMENT) RootData
4162 {
4163  using ValueT = typename ChildT::ValueType;
4164  using BuildT = typename ChildT::BuildType; // in rare cases BuildType != ValueType, e.g. then BuildType = ValueMask and ValueType = bool
4165  using CoordT = typename ChildT::CoordType;
4166  using StatsT = typename ChildT::FloatType;
4167  static constexpr bool FIXED_SIZE = false;
4168 
4169  /// @brief Return a key based on the coordinates of a voxel
4170 #ifdef NANOVDB_USE_SINGLE_ROOT_KEY
4171  using KeyT = uint64_t;
4172  template<typename CoordType>
4173  __hostdev__ static KeyT CoordToKey(const CoordType& ijk)
4174  {
4175  static_assert(sizeof(CoordT) == sizeof(CoordType), "Mismatching sizeof");
4176  static_assert(32 - ChildT::TOTAL <= 21, "Cannot use 64 bit root keys");
4177  return (KeyT(uint32_t(ijk[2]) >> ChildT::TOTAL)) | // z is the lower 21 bits
4178  (KeyT(uint32_t(ijk[1]) >> ChildT::TOTAL) << 21) | // y is the middle 21 bits
4179  (KeyT(uint32_t(ijk[0]) >> ChildT::TOTAL) << 42); // x is the upper 21 bits
4180  }
4181  __hostdev__ static CoordT KeyToCoord(const KeyT& key)
4182  {
4183  static constexpr uint64_t MASK = (1u << 21) - 1; // used to mask out 21 lower bits
4184  return CoordT(((key >> 42) & MASK) << ChildT::TOTAL, // x are the upper 21 bits
4185  ((key >> 21) & MASK) << ChildT::TOTAL, // y are the middle 21 bits
4186  (key & MASK) << ChildT::TOTAL); // z are the lower 21 bits
4187  }
4188 #else
4189  using KeyT = CoordT;
4190  __hostdev__ static KeyT CoordToKey(const CoordT& ijk) { return ijk & ~ChildT::MASK; }
4191  __hostdev__ static CoordT KeyToCoord(const KeyT& key) { return key; }
4192 #endif
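With `NANOVDB_USE_SINGLE_ROOT_KEY`, three 21-bit fields are packed into one `uint64_t` after dropping the lower `ChildT::TOTAL` bits of each coordinate. A standalone sketch of the round trip with `TOTAL` fixed at 12 (the value for the standard 5-4-3 tree, where a root child spans 4096 voxels per axis); two's-complement arithmetic makes the scheme work for negative coordinates as well:

```cpp
#include <cstdint>
#include <cassert>

static constexpr int      TOTAL = 12;               // assumed log2 of the root-child dimension
static constexpr uint64_t MASK  = (1u << 21) - 1;   // isolates one 21-bit field

// Pack (x,y,z) into 21+21+21 bits, dropping the lower TOTAL bits of each axis.
uint64_t coordToKey(int32_t x, int32_t y, int32_t z)
{
    return (uint64_t(uint32_t(z) >> TOTAL)) |       // z: lower 21 bits
           (uint64_t(uint32_t(y) >> TOTAL) << 21) | // y: middle 21 bits
           (uint64_t(uint32_t(x) >> TOTAL) << 42);  // x: upper 21 bits
}

// Recover the child origin, i.e. each coordinate rounded down to a multiple of 2^TOTAL.
void keyToCoord(uint64_t key, int32_t& x, int32_t& y, int32_t& z)
{
    x = int32_t(((key >> 42) & MASK) << TOTAL);
    y = int32_t(((key >> 21) & MASK) << TOTAL);
    z = int32_t((key & MASK) << TOTAL);
}
```

Note the round trip recovers the child origin, not the input voxel: any coordinate inside the same 4096^3 child block maps to the same key.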
4193  BBox<CoordT> mBBox; // 24B. AABB of active values in index space.
4194  uint32_t mTableSize; // 4B. number of tiles and child pointers in the root node
4195 
4196  ValueT mBackground; // background value, i.e. value of any unset voxel
4197  ValueT mMinimum; // typically 4B, minimum of all the active values
4198  ValueT mMaximum; // typically 4B, maximum of all the active values
4199  StatsT mAverage; // typically 4B, average of all the active values in this node and its child nodes
4200  StatsT mStdDevi; // typically 4B, standard deviation of all the active values in this node and its child nodes
4201 
4202  /// @brief Return padding of this class in bytes, due to aliasing and 32B alignment
4203  ///
4204  /// @note The extra bytes are not necessarily at the end, but can come from aliasing of individual data members.
4205  __hostdev__ static constexpr uint32_t padding()
4206  {
4207  return sizeof(RootData) - (24 + 4 + 3 * sizeof(ValueT) + 2 * sizeof(StatsT));
4208  }
4209 
4210  struct NANOVDB_ALIGN(NANOVDB_DATA_ALIGNMENT) Tile
4211  {
4212  template<typename CoordType>
4213  __hostdev__ void setChild(const CoordType& k, const void* ptr, const RootData* data)
4214  {
4215  key = CoordToKey(k);
4216  state = false;
4217  child = PtrDiff(ptr, data);
4218  }
4219  template<typename CoordType, typename ValueType>
4220  __hostdev__ void setValue(const CoordType& k, bool s, const ValueType& v)
4221  {
4222  key = CoordToKey(k);
4223  state = s;
4224  value = v;
4225  child = 0;
4226  }
4227  __hostdev__ bool isChild() const { return child != 0; }
4228  __hostdev__ bool isValue() const { return child == 0; }
4229  __hostdev__ bool isActive() const { return child == 0 && state; }
4230  __hostdev__ CoordT origin() const { return KeyToCoord(key); }
4231  KeyT key; // NANOVDB_USE_SINGLE_ROOT_KEY ? 8B : 12B
4232  int64_t child; // 8B. signed byte offset from this node to the child node. 0 means it is a constant tile, so use value.
4233  uint32_t state; // 4B. state of tile value
4234  ValueT value; // value of tile (i.e. no child node)
4235  }; // Tile
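A root Tile is either a value tile or a child tile, discriminated by whether the `child` offset is zero: a child can never sit at offset 0 from the root, so 0 is free to serve as the "no child" sentinel. A reduced sketch of that encoding (`ToyTile` is an illustrative stand-in, with `float` in place of the templated `ValueT`):

```cpp
#include <cstdint>
#include <cassert>

// Toy root tile: child == 0 means "value tile", any other value is a
// signed byte offset to a child node.
struct ToyTile
{
    int64_t  child = 0;
    uint32_t state = 0;    // active state of a value tile
    float    value = 0.0f;

    bool isChild()  const { return child != 0; }
    bool isValue()  const { return child == 0; }
    bool isActive() const { return child == 0 && state; } // only value tiles can be active
};
```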
4236 
4237  /// @brief Returns a const pointer to the tile at the specified linear offset.
4238  ///
4239  /// @warning The linear offset is assumed to be in the valid range
4240  __hostdev__ const Tile* tile(uint32_t n) const
4241  {
4242  NANOVDB_ASSERT(n < mTableSize);
4243  return reinterpret_cast<const Tile*>(this + 1) + n;
4244  }
4245  __hostdev__ Tile* tile(uint32_t n)
4246  {
4247  NANOVDB_ASSERT(n < mTableSize);
4248  return reinterpret_cast<Tile*>(this + 1) + n;
4249  }
4250 
4251  __hostdev__ Tile* probeTile(const CoordT& ijk)
4252  {
4253 #if 1 // switch between linear and binary search
4254  const auto key = CoordToKey(ijk);
4255  for (Tile *p = reinterpret_cast<Tile*>(this + 1), *q = p + mTableSize; p < q; ++p)
4256  if (p->key == key)
4257  return p;
4258  return nullptr;
4259 #else // only enable binary search when the tiles are guaranteed to be sorted by key!
4260  const auto key = CoordToKey(ijk); int32_t low = 0, high = mTableSize; // low is inclusive and high is exclusive
4261  while (low != high) {
4262  int mid = low + ((high - low) >> 1);
4263  Tile* tile = reinterpret_cast<Tile*>(this + 1) + mid;
4264  if (tile->key == key) {
4265  return tile;
4266  } else if (tile->key < key) {
4267  low = mid + 1;
4268  } else {
4269  high = mid;
4270  }
4271  }
4272  return nullptr;
4273 #endif
4274  }
4275 
4276  __hostdev__ inline const Tile* probeTile(const CoordT& ijk) const
4277  {
4278  return const_cast<RootData*>(this)->probeTile(ijk);
4279  }
4280 
4281  /// @brief Returns a pointer to the child node in the specified tile.
4282  ///
4283  /// @warning A child node is assumed to exist in the specified tile
4284  __hostdev__ ChildT* getChild(const Tile* tile)
4285  {
4286  NANOVDB_ASSERT(tile->child);
4287  return PtrAdd<ChildT>(this, tile->child);
4288  }
4289  __hostdev__ const ChildT* getChild(const Tile* tile) const
4290  {
4291  NANOVDB_ASSERT(tile->child);
4292  return PtrAdd<ChildT>(this, tile->child);
4293  }
4294 
4295  __hostdev__ const ValueT& getMin() const { return mMinimum; }
4296  __hostdev__ const ValueT& getMax() const { return mMaximum; }
4297  __hostdev__ const StatsT& average() const { return mAverage; }
4298  __hostdev__ const StatsT& stdDeviation() const { return mStdDevi; }
4299 
4300  __hostdev__ void setMin(const ValueT& v) { mMinimum = v; }
4301  __hostdev__ void setMax(const ValueT& v) { mMaximum = v; }
4302  __hostdev__ void setAvg(const StatsT& v) { mAverage = v; }
4303  __hostdev__ void setDev(const StatsT& v) { mStdDevi = v; }
4304 
4305  /// @brief This class cannot be constructed or deleted
4306  RootData() = delete;
4307  RootData(const RootData&) = delete;
4308  RootData& operator=(const RootData&) = delete;
4309  ~RootData() = delete;
4310 }; // RootData
4311 
4312 // --------------------------> RootNode <------------------------------------
4313 
4314 /// @brief Top-most node of the VDB tree structure.
4315 template<typename ChildT>
4316 class RootNode : public RootData<ChildT>
4317 {
4318 public:
4319  using DataType = RootData<ChildT>;
4320  using ChildNodeType = ChildT;
4321  using RootType = RootNode<ChildT>; // this allows RootNode to behave like a Tree
4322  using RootNodeType = RootType;
4323  using UpperNodeType = ChildT;
4324  using LowerNodeType = typename UpperNodeType::ChildNodeType;
4325  using LeafNodeType = typename ChildT::LeafNodeType;
4326  using ValueType = typename DataType::ValueT;
4327  using FloatType = typename DataType::StatsT;
4328  using BuildType = typename DataType::BuildT; // in rare cases BuildType != ValueType, e.g. then BuildType = ValueMask and ValueType = bool
4329 
4330  using CoordType = typename ChildT::CoordType;
4331  using BBoxType = BBox<CoordType>;
4332  using AccessorType = DefaultReadAccessor<BuildType>;
4333  using Tile = typename DataType::Tile;
4334  static constexpr bool FIXED_SIZE = DataType::FIXED_SIZE;
4335 
4336  static constexpr uint32_t LEVEL = 1 + ChildT::LEVEL; // level 0 = leaf
4337 
4338  template<typename RootT>
4339  class BaseIter
4340  {
4341  protected:
4342  using DataT = typename match_const<DataType, RootT>::type;
4343  using TileT = typename match_const<Tile, RootT>::type;
4344  DataT* mData;
4345  uint32_t mPos, mSize;
4346  __hostdev__ BaseIter(DataT* data = nullptr, uint32_t n = 0)
4347  : mData(data)
4348  , mPos(0)
4349  , mSize(n)
4350  {
4351  }
4352 
4353  public:
4354  __hostdev__ operator bool() const { return mPos < mSize; }
4355  __hostdev__ uint32_t pos() const { return mPos; }
4356  __hostdev__ void next() { ++mPos; }
4357  __hostdev__ TileT* tile() const { return mData->tile(mPos); }
4358  __hostdev__ CoordType getOrigin() const
4359  {
4360  NANOVDB_ASSERT(*this);
4361  return this->tile()->origin();
4362  }
4363  __hostdev__ CoordType getCoord() const
4364  {
4365  NANOVDB_ASSERT(*this);
4366  return this->tile()->origin();
4367  }
4368  }; // Member class BaseIter
4369 
4370  template<typename RootT>
4371  class ChildIter : public BaseIter<RootT>
4372  {
4373  static_assert(is_same<typename remove_const<RootT>::type, RootNode>::value, "Invalid RootT");
4374  using BaseT = BaseIter<RootT>;
4375  using NodeT = typename match_const<ChildT, RootT>::type;
4376 
4377  public:
4378  __hostdev__ ChildIter()
4379  : BaseT()
4380  {
4381  }
4382  __hostdev__ ChildIter(RootT* parent)
4383  : BaseT(parent->data(), parent->tileCount())
4384  {
4385  NANOVDB_ASSERT(BaseT::mData);
4386  while (*this && !this->tile()->isChild())
4387  this->next();
4388  }
4389  __hostdev__ NodeT& operator*() const
4390  {
4391  NANOVDB_ASSERT(*this);
4392  return *BaseT::mData->getChild(this->tile());
4393  }
4394  __hostdev__ NodeT* operator->() const
4395  {
4396  NANOVDB_ASSERT(*this);
4397  return BaseT::mData->getChild(this->tile());
4398  }
4399  __hostdev__ ChildIter& operator++()
4400  {
4401  NANOVDB_ASSERT(BaseT::mData);
4402  this->next();
4403  while (*this && this->tile()->isValue())
4404  this->next();
4405  return *this;
4406  }
4407  __hostdev__ ChildIter operator++(int)
4408  {
4409  auto tmp = *this;
4410  ++(*this);
4411  return tmp;
4412  }
4413  }; // Member class ChildIter
4414 
4415  using ChildIterator = ChildIter<RootNode>;
4416  using ConstChildIterator = ChildIter<const RootNode>;
4417 
4418  __hostdev__ ChildIterator beginChild() { return ChildIterator(this); }
4419  __hostdev__ ConstChildIterator cbeginChild() const { return ConstChildIterator(this); }
4420 
4421  template<typename RootT>
4422  class ValueIter : public BaseIter<RootT>
4423  {
4424  using BaseT = BaseIter<RootT>;
4425 
4426  public:
4427  __hostdev__ ValueIter()
4428  : BaseT()
4429  {
4430  }
4431  __hostdev__ ValueIter(RootT* parent)
4432  : BaseT(parent->data(), parent->tileCount())
4433  {
4434  NANOVDB_ASSERT(BaseT::mData);
4435  while (*this && this->tile()->isChild())
4436  this->next();
4437  }
4438  __hostdev__ ValueType operator*() const
4439  {
4440  NANOVDB_ASSERT(*this);
4441  return this->tile()->value;
4442  }
4443  __hostdev__ bool isActive() const
4444  {
4445  NANOVDB_ASSERT(*this);
4446  return this->tile()->state;
4447  }
4448  __hostdev__ ValueIter& operator++()
4449  {
4450  NANOVDB_ASSERT(BaseT::mData);
4451  this->next();
4452  while (*this && this->tile()->isChild())
4453  this->next();
4454  return *this;
4455  }
4456  __hostdev__ ValueIter operator++(int)
4457  {
4458  auto tmp = *this;
4459  ++(*this);
4460  return tmp;
4461  }
4462  }; // Member class ValueIter
4463 
4466 
4467  __hostdev__ ValueIterator beginValue() { return ValueIterator(this); }
4468  __hostdev__ ConstValueIterator cbeginValueAll() const { return ConstValueIterator(this); }
4469 
4470  template<typename RootT>
4471  class ValueOnIter : public BaseIter<RootT>
4472  {
4473  using BaseT = BaseIter<RootT>;
4474 
4475  public:
4476  __hostdev__ ValueOnIter()
4477  : BaseT()
4478  {
4479  }
4480  __hostdev__ ValueOnIter(RootT* parent)
4481  : BaseT(parent->data(), parent->tileCount())
4482  {
4483  NANOVDB_ASSERT(BaseT::mData);
4484  while (*this && !this->tile()->isActive())
4485  ++BaseT::mPos;
4486  }
4487  __hostdev__ ValueType operator*() const
4488  {
4489  NANOVDB_ASSERT(*this);
4490  return this->tile()->value;
4491  }
4492  __hostdev__ ValueOnIter& operator++()
4493  {
4494  NANOVDB_ASSERT(BaseT::mData);
4495  this->next();
4496  while (*this && !this->tile()->isActive())
4497  this->next();
4498  return *this;
4499  }
4500  __hostdev__ ValueOnIter operator++(int)
4501  {
4502  auto tmp = *this;
4503  ++(*this);
4504  return tmp;
4505  }
4506  }; // Member class ValueOnIter
4507 
4508  using ValueOnIterator = ValueOnIter<RootNode>;
4509  using ConstValueOnIterator = ValueOnIter<const RootNode>;
4510 
4511  __hostdev__ ValueOnIterator beginValueOn() { return ValueOnIterator(this); }
4512  __hostdev__ ConstValueOnIterator cbeginValueOn() const { return ConstValueOnIterator(this); }
4513 
4514  template<typename RootT>
4515  class DenseIter : public BaseIter<RootT>
4516  {
4517  using BaseT = BaseIter<RootT>;
4518  using NodeT = typename match_const<ChildT, RootT>::type;
4519 
4520  public:
4521  __hostdev__ DenseIter()
4522  : BaseT()
4523  {
4524  }
4525  __hostdev__ DenseIter(RootT* parent)
4526  : BaseT(parent->data(), parent->tileCount())
4527  {
4528  NANOVDB_ASSERT(BaseT::mData);
4529  }
4530  __hostdev__ NodeT* probeChild(ValueType& value) const
4531  {
4532  NANOVDB_ASSERT(*this);
4533  NodeT* child = nullptr;
4534  auto* t = this->tile();
4535  if (t->isChild()) {
4536  child = BaseT::mData->getChild(t);
4537  } else {
4538  value = t->value;
4539  }
4540  return child;
4541  }
4542  __hostdev__ bool isValueOn() const
4543  {
4544  NANOVDB_ASSERT(*this);
4545  return this->tile()->state;
4546  }
4547  __hostdev__ DenseIter& operator++()
4548  {
4549  NANOVDB_ASSERT(BaseT::mData);
4550  this->next();
4551  return *this;
4552  }
4553  __hostdev__ DenseIter operator++(int)
4554  {
4555  auto tmp = *this;
4556  ++(*this);
4557  return tmp;
4558  }
4559  }; // Member class DenseIter
4560 
4561  using DenseIterator = DenseIter<RootNode>;
4562  using ConstDenseIterator = DenseIter<const RootNode>;
4563 
4564  __hostdev__ DenseIterator beginDense() { return DenseIterator(this); }
4565  __hostdev__ ConstDenseIterator cbeginDense() const { return ConstDenseIterator(this); }
4566  __hostdev__ ConstDenseIterator cbeginChildAll() const { return ConstDenseIterator(this); }
4567 
4568  /// @brief This class cannot be constructed or deleted
4569  RootNode() = delete;
4570  RootNode(const RootNode&) = delete;
4571  RootNode& operator=(const RootNode&) = delete;
4572  ~RootNode() = delete;
4573 
4575 
4576  __hostdev__ DataType* data() { return reinterpret_cast<DataType*>(this); }
4577 
4578  __hostdev__ const DataType* data() const { return reinterpret_cast<const DataType*>(this); }
4579 
4580  /// @brief Return a const reference to the index bounding box of all the active values in this tree, i.e. in all nodes of the tree
4581  __hostdev__ const BBoxType& bbox() const { return DataType::mBBox; }
4582 
4583  /// @brief Return the total number of active voxels in the root and all its child nodes.
4584 
4585  /// @brief Return a const reference to the background value, i.e. the value associated with
4586  /// any coordinate location that has not been set explicitly.
4587  __hostdev__ const ValueType& background() const { return DataType::mBackground; }
4588 
4589  /// @brief Return the number of tiles encoded in this root node
4590  __hostdev__ const uint32_t& tileCount() const { return DataType::mTableSize; }
4591  __hostdev__ const uint32_t& getTableSize() const { return DataType::mTableSize; }
4592 
4593  /// @brief Return a const reference to the minimum active value encoded in this root node and any of its child nodes
4594  __hostdev__ const ValueType& minimum() const { return DataType::mMinimum; }
4595 
4596  /// @brief Return a const reference to the maximum active value encoded in this root node and any of its child nodes
4597  __hostdev__ const ValueType& maximum() const { return DataType::mMaximum; }
4598 
4599  /// @brief Return a const reference to the average of all the active values encoded in this root node and any of its child nodes
4600  __hostdev__ const FloatType& average() const { return DataType::mAverage; }
4601 
4602  /// @brief Return the variance of all the active values encoded in this root node and any of its child nodes
4603  __hostdev__ FloatType variance() const { return Pow2(DataType::mStdDevi); }
4604 
4605  /// @brief Return a const reference to the standard deviation of all the active values encoded in this root node and any of its child nodes
4606  __hostdev__ const FloatType& stdDeviation() const { return DataType::mStdDevi; }
4607 
4608  /// @brief Return the expected memory footprint in bytes with the specified number of tiles
4609  __hostdev__ static uint64_t memUsage(uint32_t tableSize) { return sizeof(RootNode) + tableSize * sizeof(Tile); }
4610 
4611  /// @brief Return the actual memory footprint of this root node
4612  __hostdev__ uint64_t memUsage() const { return sizeof(RootNode) + DataType::mTableSize * sizeof(Tile); }
4613 
4614  /// @brief Return true if this RootNode is empty, i.e. contains no values or nodes
4615  __hostdev__ bool isEmpty() const { return DataType::mTableSize == uint32_t(0); }
4616 
4617 #ifdef NANOVDB_NEW_ACCESSOR_METHODS
4618  /// @brief Return the value of the given voxel
4619  __hostdev__ ValueType getValue(const CoordType& ijk) const { return this->template get<GetValue<BuildType>>(ijk); }
4620  __hostdev__ ValueType getValue(int i, int j, int k) const { return this->template get<GetValue<BuildType>>(CoordType(i, j, k)); }
4621  __hostdev__ bool isActive(const CoordType& ijk) const { return this->template get<GetState<BuildType>>(ijk); }
4622  /// @brief Return the active state and update the value of the specified voxel
4623  __hostdev__ bool probeValue(const CoordType& ijk, ValueType& v) const { return this->template get<ProbeValue<BuildType>>(ijk, v); }
4624  __hostdev__ const LeafNodeType* probeLeaf(const CoordType& ijk) const { return this->template get<GetLeaf<BuildType>>(ijk); }
4625 #else // NANOVDB_NEW_ACCESSOR_METHODS
4626 
4627  /// @brief Return the value of the given voxel
4628  __hostdev__ ValueType getValue(const CoordType& ijk) const
4629  {
4630  if (const Tile* tile = DataType::probeTile(ijk)) {
4631  return tile->isChild() ? this->getChild(tile)->getValue(ijk) : tile->value;
4632  }
4633  return DataType::mBackground;
4634  }
4635  __hostdev__ ValueType getValue(int i, int j, int k) const { return this->getValue(CoordType(i, j, k)); }
4636 
4637  __hostdev__ bool isActive(const CoordType& ijk) const
4638  {
4639  if (const Tile* tile = DataType::probeTile(ijk)) {
4640  return tile->isChild() ? this->getChild(tile)->isActive(ijk) : tile->state;
4641  }
4642  return false;
4643  }
4644 
4645  __hostdev__ bool probeValue(const CoordType& ijk, ValueType& v) const
4646  {
4647  if (const Tile* tile = DataType::probeTile(ijk)) {
4648  if (tile->isChild()) {
4649  const auto* child = this->getChild(tile);
4650  return child->probeValue(ijk, v);
4651  }
4652  v = tile->value;
4653  return tile->state;
4654  }
4655  v = DataType::mBackground;
4656  return false;
4657  }
4658 
4659  __hostdev__ const LeafNodeType* probeLeaf(const CoordType& ijk) const
4660  {
4661  const Tile* tile = DataType::probeTile(ijk);
4662  if (tile && tile->isChild()) {
4663  const auto* child = this->getChild(tile);
4664  return child->probeLeaf(ijk);
4665  }
4666  return nullptr;
4667  }
4668 
4669 #endif // NANOVDB_NEW_ACCESSOR_METHODS
4670 
4671  __hostdev__ const ChildNodeType* probeChild(const CoordType& ijk) const
4672  {
4673  const Tile* tile = DataType::probeTile(ijk);
4674  return tile && tile->isChild() ? this->getChild(tile) : nullptr;
4675  }
4676 
4677  __hostdev__ ChildNodeType* probeChild(const CoordType& ijk)
4678  {
4679  const Tile* tile = DataType::probeTile(ijk);
4680  return tile && tile->isChild() ? this->getChild(tile) : nullptr;
4681  }
4682 
4683  template<typename OpT, typename... ArgsT>
4684  __hostdev__ auto get(const CoordType& ijk, ArgsT&&... args) const
4685  {
4686  if (const Tile* tile = this->probeTile(ijk)) {
4687  if (tile->isChild())
4688  return this->getChild(tile)->template get<OpT>(ijk, args...);
4689  return OpT::get(*tile, args...);
4690  }
4691  return OpT::get(*this, args...);
4692  }
4693 
4694  template<typename OpT, typename... ArgsT>
4695  __hostdev__ auto // occasionally fails with NVCC
4696 // __hostdev__ decltype(OpT::set(std::declval<Tile&>(), std::declval<ArgsT>()...))
4697  set(const CoordType& ijk, ArgsT&&... args)
4698  {
4699  if (Tile* tile = DataType::probeTile(ijk)) {
4700  if (tile->isChild())
4701  return this->getChild(tile)->template set<OpT>(ijk, args...);
4702  return OpT::set(*tile, args...);
4703  }
4704  return OpT::set(*this, args...);
4705  }
4706 
4707 private:
4708  static_assert(sizeof(DataType) % NANOVDB_DATA_ALIGNMENT == 0, "sizeof(RootData) is misaligned");
4709  static_assert(sizeof(typename DataType::Tile) % NANOVDB_DATA_ALIGNMENT == 0, "sizeof(RootData::Tile) is misaligned");
4710 
4711  template<typename, int, int, int>
4712  friend class ReadAccessor;
4713 
4714  template<typename>
4715  friend class Tree;
4716 #ifndef NANOVDB_NEW_ACCESSOR_METHODS
4717  /// @brief Private method to return node information and update a ReadAccessor
4718  template<typename AccT>
4719  __hostdev__ typename AccT::NodeInfo getNodeInfoAndCache(const CoordType& ijk, const AccT& acc) const
4720  {
4721  using NodeInfoT = typename AccT::NodeInfo;
4722  if (const Tile* tile = this->probeTile(ijk)) {
4723  if (tile->isChild()) {
4724  const auto* child = this->getChild(tile);
4725  acc.insert(ijk, child);
4726  return child->getNodeInfoAndCache(ijk, acc);
4727  }
4728  return NodeInfoT{LEVEL, ChildT::dim(), tile->value, tile->value, tile->value, 0, tile->origin(), tile->origin() + CoordType(ChildT::DIM)};
4729  }
4730  return NodeInfoT{LEVEL, ChildT::dim(), this->minimum(), this->maximum(), this->average(), this->stdDeviation(), this->bbox()[0], this->bbox()[1]};
4731  }
4732 
4733  /// @brief Private method to return a voxel value and update a ReadAccessor
4734  template<typename AccT>
4735  __hostdev__ ValueType getValueAndCache(const CoordType& ijk, const AccT& acc) const
4736  {
4737  if (const Tile* tile = this->probeTile(ijk)) {
4738  if (tile->isChild()) {
4739  const auto* child = this->getChild(tile);
4740  acc.insert(ijk, child);
4741  return child->getValueAndCache(ijk, acc);
4742  }
4743  return tile->value;
4744  }
4745  return DataType::mBackground;
4746  }
4747 
4748  template<typename AccT>
4749  __hostdev__ bool isActiveAndCache(const CoordType& ijk, const AccT& acc) const
4750  {
4751  const Tile* tile = this->probeTile(ijk);
4752  if (tile && tile->isChild()) {
4753  const auto* child = this->getChild(tile);
4754  acc.insert(ijk, child);
4755  return child->isActiveAndCache(ijk, acc);
4756  }
4757  return false;
4758  }
4759 
4760  template<typename AccT>
4761  __hostdev__ bool probeValueAndCache(const CoordType& ijk, ValueType& v, const AccT& acc) const
4762  {
4763  if (const Tile* tile = this->probeTile(ijk)) {
4764  if (tile->isChild()) {
4765  const auto* child = this->getChild(tile);
4766  acc.insert(ijk, child);
4767  return child->probeValueAndCache(ijk, v, acc);
4768  }
4769  v = tile->value;
4770  return tile->state;
4771  }
4772  v = DataType::mBackground;
4773  return false;
4774  }
4775 
4776  template<typename AccT>
4777  __hostdev__ const LeafNodeType* probeLeafAndCache(const CoordType& ijk, const AccT& acc) const
4778  {
4779  const Tile* tile = this->probeTile(ijk);
4780  if (tile && tile->isChild()) {
4781  const auto* child = this->getChild(tile);
4782  acc.insert(ijk, child);
4783  return child->probeLeafAndCache(ijk, acc);
4784  }
4785  return nullptr;
4786  }
4787 #endif // NANOVDB_NEW_ACCESSOR_METHODS
4788 
4789  template<typename RayT, typename AccT>
4790  __hostdev__ uint32_t getDimAndCache(const CoordType& ijk, const RayT& ray, const AccT& acc) const
4791  {
4792  if (const Tile* tile = this->probeTile(ijk)) {
4793  if (tile->isChild()) {
4794  const auto* child = this->getChild(tile);
4795  acc.insert(ijk, child);
4796  return child->getDimAndCache(ijk, ray, acc);
4797  }
4798  return 1 << ChildT::TOTAL; //tile value
4799  }
4800  return ChildNodeType::dim(); // background
4801  }
4802 
4803  template<typename OpT, typename AccT, typename... ArgsT>
4804  //__hostdev__ decltype(OpT::get(std::declval<const Tile&>(), std::declval<ArgsT>()...))
4805  __hostdev__ auto
4806  getAndCache(const CoordType& ijk, const AccT& acc, ArgsT&&... args) const
4807  {
4808  if (const Tile* tile = this->probeTile(ijk)) {
4809  if (tile->isChild()) {
4810  const ChildT* child = this->getChild(tile);
4811  acc.insert(ijk, child);
4812  return child->template getAndCache<OpT>(ijk, acc, args...);
4813  }
4814  return OpT::get(*tile, args...);
4815  }
4816  return OpT::get(*this, args...);
4817  }
4818 
4819  template<typename OpT, typename AccT, typename... ArgsT>
4820  __hostdev__ auto // occasionally fails with NVCC
4821 // __hostdev__ decltype(OpT::set(std::declval<Tile&>(), std::declval<ArgsT>()...))
4822  setAndCache(const CoordType& ijk, const AccT& acc, ArgsT&&... args)
4823  {
4824  if (Tile* tile = DataType::probeTile(ijk)) {
4825  if (tile->isChild()) {
4826  ChildT* child = this->getChild(tile);
4827  acc.insert(ijk, child);
4828  return child->template setAndCache<OpT>(ijk, acc, args...);
4829  }
4830  return OpT::set(*tile, args...);
4831  }
4832  return OpT::set(*this, args...);
4833  }
4834 
4835 }; // RootNode class
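The Child/Value/ValueOn iterators above all follow one pattern: keep a linear position into the tile table and skip entries that fail a predicate, both in the constructor and after every increment. The skeleton of that pattern, reduced to a plain tile array (`SimpleTile`/`ChildOnlyIter` are hypothetical names, not classes from this file):

```cpp
#include <cstdint>
#include <cassert>

struct SimpleTile { bool isChild; };

// Skeleton of the filtered-iterator pattern used by RootNode's iterators:
// skip non-matching tiles on construction and after every increment, so the
// iterator always rests on a matching tile or past the end.
class ChildOnlyIter
{
    const SimpleTile* mTiles;
    uint32_t mPos, mSize;
    void skip() { while (mPos < mSize && !mTiles[mPos].isChild) ++mPos; }

public:
    ChildOnlyIter(const SimpleTile* tiles, uint32_t n)
        : mTiles(tiles), mPos(0), mSize(n) { this->skip(); }
    operator bool() const { return mPos < mSize; }
    uint32_t pos() const { return mPos; }
    ChildOnlyIter& operator++() { ++mPos; this->skip(); return *this; }
};
```

DenseIter is the degenerate case with no predicate, which is why its `operator++` above calls `next()` without a skip loop.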
4836 
4837 // After the RootNode the memory layout is assumed to be the sorted Tiles
4838 
4839 // --------------------------> InternalNode <------------------------------------
4840 
4841 /// @brief Struct with all the member data of the InternalNode (useful during serialization of an openvdb InternalNode)
4842 ///
4843 /// @note No client code should (or can) interface with this struct so it can safely be ignored!
4844 template<typename ChildT, uint32_t LOG2DIM>
4845 struct NANOVDB_ALIGN(NANOVDB_DATA_ALIGNMENT) InternalData
4846 {
4847  using ValueT = typename ChildT::ValueType;
4848  using BuildT = typename ChildT::BuildType; // in rare cases BuildType != ValueType, e.g. then BuildType = ValueMask and ValueType = bool
4849  using StatsT = typename ChildT::FloatType;
4850  using CoordT = typename ChildT::CoordType;
4851  using MaskT = typename ChildT::template MaskType<LOG2DIM>;
4852  static constexpr bool FIXED_SIZE = true;
4853 
4854  union Tile
4855  {
4856  ValueT value;
4857  int64_t child; //signed 64 bit byte offset relative to this InternalData, i.e. child-pointer = Tile::child + this
4858  /// @brief This class cannot be constructed or deleted
4859  Tile() = delete;
4860  Tile(const Tile&) = delete;
4861  Tile& operator=(const Tile&) = delete;
4862  ~Tile() = delete;
4863  };
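Unlike the root's Tile, the internal node's Tile is a bare union: nothing in the entry itself says whether it holds a value or a child offset. That tag lives externally in mChildMask, as the assertions in setChild/setValue/getChild below enforce. A reduced sketch of the discrimination (toy 64-entry table with a single uint64_t mask, not the real MaskT):

```cpp
#include <cstdint>
#include <cassert>

// Toy internal-node table: the union carries no tag, so a separate
// bitmask records which entries are child offsets.
struct ToyInternal
{
    union Entry { float value; int64_t child; };
    uint64_t childMask = 0;  // bit n set => table[n].child is valid
    Entry    table[64] = {};

    void setValue(uint32_t n, float v)   { childMask &= ~(1ull << n); table[n].value = v; }
    void setChild(uint32_t n, int64_t c) { childMask |=  (1ull << n); table[n].child = c; }
    bool isChild(uint32_t n) const       { return (childMask >> n) & 1u; }
    float   getValue(uint32_t n) const { assert(!isChild(n)); return table[n].value; }
    int64_t getChild(uint32_t n) const { assert(isChild(n));  return table[n].child; }
};
```

Keeping the tag out of the Tile is what allows the table entries to stay at `max(sizeof(ValueT), 8)` bytes each, per the padding() formula below.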
4864 
4865  BBox<CoordT> mBBox; // 24B. node bounding box. |
4866  uint64_t mFlags; // 8B. node flags. | 32B aligned
4867  MaskT mValueMask; // LOG2DIM(5): 4096B, LOG2DIM(4): 512B | 32B aligned
4868  MaskT mChildMask; // LOG2DIM(5): 4096B, LOG2DIM(4): 512B | 32B aligned
4869 
4870  ValueT mMinimum; // typically 4B
4871  ValueT mMaximum; // typically 4B
4872  StatsT mAverage; // typically 4B, average of all the active values in this node and its child nodes
4873  StatsT mStdDevi; // typically 4B, standard deviation of all the active values in this node and its child nodes
4874  // possible padding, e.g. 28 byte padding when ValueType = bool
4875 
4876  /// @brief Return padding of this class in bytes, due to aliasing and 32B alignment
4877  ///
4878  /// @note The extra bytes are not necessarily at the end, but can come from aliasing of individual data members.
4879  __hostdev__ static constexpr uint32_t padding()
4880  {
4881  return sizeof(InternalData) - (24u + 8u + 2 * (sizeof(MaskT) + sizeof(ValueT) + sizeof(StatsT)) + (1u << (3 * LOG2DIM)) * (sizeof(ValueT) > 8u ? sizeof(ValueT) : 8u));
4882  }
4883  alignas(32) Tile mTable[1u << (3 * LOG2DIM)]; // sizeof(ValueT) x (16*16*16 or 32*32*32)
4884 
4885  __hostdev__ static uint64_t memUsage() { return sizeof(InternalData); }
4886 
4887  __hostdev__ void setChild(uint32_t n, const void* ptr)
4888  {
4889  NANOVDB_ASSERT(mChildMask.isOn(n));
4890  mTable[n].child = PtrDiff(ptr, this);
4891  }
4892 
4893  template<typename ValueT>
4894  __hostdev__ void setValue(uint32_t n, const ValueT& v)
4895  {
4896  NANOVDB_ASSERT(!mChildMask.isOn(n));
4897  mTable[n].value = v;
4898  }
4899 
4900  /// @brief Returns a pointer to the child node at the specified linear offset.
4901  __hostdev__ ChildT* getChild(uint32_t n)
4902  {
4903  NANOVDB_ASSERT(mChildMask.isOn(n));
4904  return PtrAdd<ChildT>(this, mTable[n].child);
4905  }
4906  __hostdev__ const ChildT* getChild(uint32_t n) const
4907  {
4908  NANOVDB_ASSERT(mChildMask.isOn(n));
4909  return PtrAdd<ChildT>(this, mTable[n].child);
4910  }
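The `Tile::child` offset and the `setChild`/`getChild` pair above rely on relative addressing: a child is stored as a signed byte offset from the node itself, so the serialized grid remains valid after being copied wholesale to another address space (e.g. the GPU). The standalone sketch below mirrors the PtrDiff/PtrAdd helper pattern; `ptrDiff`/`ptrAdd` are illustrative stand-ins, not the NanoVDB functions themselves.

```cpp
#include <cstdint>

// Sketch of the relative-offset addressing used by InternalData::Tile.
// ptrDiff(p, q) returns the signed byte distance from q to p;
// ptrAdd(base, off) recovers a typed pointer from base + off.
static int64_t ptrDiff(const void* p, const void* q)
{
    return reinterpret_cast<const uint8_t*>(p) - reinterpret_cast<const uint8_t*>(q);
}

template<typename T>
static T* ptrAdd(void* base, int64_t offset)
{
    return reinterpret_cast<T*>(reinterpret_cast<uint8_t*>(base) + offset);
}
```

Because only the offset is stored, `memcpy`-ing the whole buffer elsewhere keeps every parent-to-child link intact without pointer fix-ups.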
4911 
4912  __hostdev__ ValueT getValue(uint32_t n) const
4913  {
4914  NANOVDB_ASSERT(mChildMask.isOff(n));
4915  return mTable[n].value;
4916  }
4917 
4918  __hostdev__ bool isActive(uint32_t n) const
4919  {
4920  NANOVDB_ASSERT(mChildMask.isOff(n));
4921  return mValueMask.isOn(n);
4922  }
4923 
4924  __hostdev__ bool isChild(uint32_t n) const { return mChildMask.isOn(n); }
4925 
4926  template<typename T>
4927  __hostdev__ void setOrigin(const T& ijk) { mBBox[0] = ijk; }
4928 
4929  __hostdev__ const ValueT& getMin() const { return mMinimum; }
4930  __hostdev__ const ValueT& getMax() const { return mMaximum; }
4931  __hostdev__ const StatsT& average() const { return mAverage; }
4932  __hostdev__ const StatsT& stdDeviation() const { return mStdDevi; }
4933 
4934 #if defined(__GNUC__) && !defined(__APPLE__) && !defined(__llvm__)
4935 #pragma GCC diagnostic push
4936 #pragma GCC diagnostic ignored "-Wstringop-overflow"
4937 #endif
4938  __hostdev__ void setMin(const ValueT& v) { mMinimum = v; }
4939  __hostdev__ void setMax(const ValueT& v) { mMaximum = v; }
4940  __hostdev__ void setAvg(const StatsT& v) { mAverage = v; }
4941  __hostdev__ void setDev(const StatsT& v) { mStdDevi = v; }
4942 #if defined(__GNUC__) && !defined(__APPLE__) && !defined(__llvm__)
4943 #pragma GCC diagnostic pop
4944 #endif
4945 
4946  /// @brief This class cannot be constructed or deleted
4947  InternalData() = delete;
4948  InternalData(const InternalData&) = delete;
4949  InternalData& operator=(const InternalData&) = delete;
4950  ~InternalData() = delete;
4951 }; // InternalData
4952 
4953 /// @brief Internal nodes of a VDB tree
4954 template<typename ChildT, uint32_t Log2Dim = ChildT::LOG2DIM + 1>
4955 class InternalNode : public InternalData<ChildT, Log2Dim>
4956 {
4957 public:
4958  using DataType = InternalData<ChildT, Log2Dim>;
4959  using ValueType = typename DataType::ValueT;
4960  using FloatType = typename DataType::StatsT;
4961  using BuildType = typename DataType::BuildT; // in rare cases BuildType != ValueType, e.g. then BuildType = ValueMask and ValueType = bool
4962  using LeafNodeType = typename ChildT::LeafNodeType;
4963  using ChildNodeType = ChildT;
4964  using CoordType = typename ChildT::CoordType;
4965  static constexpr bool FIXED_SIZE = DataType::FIXED_SIZE;
4966  template<uint32_t LOG2>
4967  using MaskType = typename ChildT::template MaskType<LOG2>;
4968  template<bool On>
4969  using MaskIterT = typename Mask<Log2Dim>::template Iterator<On>;
4970 
4971  static constexpr uint32_t LOG2DIM = Log2Dim;
4972  static constexpr uint32_t TOTAL = LOG2DIM + ChildT::TOTAL; // dimension in index space
4973  static constexpr uint32_t DIM = 1u << TOTAL; // number of voxels along each axis of this node
4974  static constexpr uint32_t SIZE = 1u << (3 * LOG2DIM); // number of tile values (or child pointers)
4975  static constexpr uint32_t MASK = (1u << TOTAL) - 1u;
4976  static constexpr uint32_t LEVEL = 1 + ChildT::LEVEL; // level 0 = leaf
4977  static constexpr uint64_t NUM_VALUES = uint64_t(1) << (3 * TOTAL); // total voxel count represented by this node
4978 
4979  /// @brief Visits child nodes of this node only
4980  template <typename ParentT>
4981  class ChildIter : public MaskIterT<true>
4982  {
4983  static_assert(is_same<typename remove_const<ParentT>::type, InternalNode>::value, "Invalid ParentT");
4984  using BaseT = MaskIterT<true>;
4985  using NodeT = typename match_const<ChildT, ParentT>::type;
4986  ParentT* mParent;
4987 
4988  public:
4989  __hostdev__ ChildIter()
4990  : BaseT()
4991  , mParent(nullptr)
4992  {
4993  }
4994  __hostdev__ ChildIter(ParentT* parent)
4995  : BaseT(parent->mChildMask.beginOn())
4996  , mParent(parent)
4997  {
4998  }
4999  ChildIter& operator=(const ChildIter&) = default;
5000  __hostdev__ NodeT& operator*() const
5001  {
5002  NANOVDB_ASSERT(*this);
5003  return *mParent->getChild(BaseT::pos());
5004  }
5005  __hostdev__ NodeT* operator->() const
5006  {
5007  NANOVDB_ASSERT(*this);
5008  return mParent->getChild(BaseT::pos());
5009  }
5010  __hostdev__ CoordType getOrigin() const
5011  {
5012  NANOVDB_ASSERT(*this);
5013  return (*this)->origin();
5014  }
5015  __hostdev__ CoordType getCoord() const {return this->getOrigin();}
5016  }; // Member class ChildIter
5017 
5018  using ChildIterator = ChildIter<InternalNode>;
5019  using ConstChildIterator = ChildIter<const InternalNode>;
5020 
5021  __hostdev__ ChildIterator beginChild() { return ChildIterator(this); }
5022  __hostdev__ ConstChildIterator cbeginChild() const { return ConstChildIterator(this); }
5023 
5024  /// @brief Visits all tile values in this node, i.e. both inactive and active tiles
5025  class ValueIterator : public MaskIterT<false>
5026  {
5027  using BaseT = MaskIterT<false>;
5028  const InternalNode* mParent;
5029 
5030  public:
5031  __hostdev__ ValueIterator()
5032  : BaseT()
5033  , mParent(nullptr)
5034  {
5035  }
5036  __hostdev__ ValueIterator(const InternalNode* parent)
5037  : BaseT(parent->data()->mChildMask.beginOff())
5038  , mParent(parent)
5039  {
5040  }
5041  ValueIterator& operator=(const ValueIterator&) = default;
5042  __hostdev__ ValueType operator*() const
5043  {
5044  NANOVDB_ASSERT(*this);
5045  return mParent->data()->getValue(BaseT::pos());
5046  }
5047  __hostdev__ CoordType getOrigin() const
5048  {
5049  NANOVDB_ASSERT(*this);
5050  return mParent->offsetToGlobalCoord(BaseT::pos());
5051  }
5052  __hostdev__ CoordType getCoord() const {return this->getOrigin();}
5053  __hostdev__ bool isActive() const
5054  {
5055  NANOVDB_ASSERT(*this);
5056  return mParent->data()->isActive(BaseT::mPos);
5057  }
5058  }; // Member class ValueIterator
5059 
5060  __hostdev__ ValueIterator beginValue() const { return ValueIterator(this); }
5061  __hostdev__ ValueIterator cbeginValueAll() const { return ValueIterator(this); }
5062 
5063  /// @brief Visits active tile values of this node only
5064  class ValueOnIterator : public MaskIterT<true>
5065  {
5066  using BaseT = MaskIterT<true>;
5067  const InternalNode* mParent;
5068 
5069  public:
5070  __hostdev__ ValueOnIterator()
5071  : BaseT()
5072  , mParent(nullptr)
5073  {
5074  }
5075  __hostdev__ ValueOnIterator(const InternalNode* parent)
5076  : BaseT(parent->data()->mValueMask.beginOn())
5077  , mParent(parent)
5078  {
5079  }
5080  ValueOnIterator& operator=(const ValueOnIterator&) = default;
5081  __hostdev__ ValueType operator*() const
5082  {
5083  NANOVDB_ASSERT(*this);
5084  return mParent->data()->getValue(BaseT::pos());
5085  }
5086  __hostdev__ CoordType getOrigin() const
5087  {
5088  NANOVDB_ASSERT(*this);
5089  return mParent->offsetToGlobalCoord(BaseT::pos());
5090  }
5091  __hostdev__ CoordType getCoord() const {return this->getOrigin();}
5092  }; // Member class ValueOnIterator
5093 
5094  __hostdev__ ValueOnIterator beginValueOn() const { return ValueOnIterator(this); }
5095  __hostdev__ ValueOnIterator cbeginValueOn() const { return ValueOnIterator(this); }
5096 
5097  /// @brief Visits all tile values and child nodes of this node
5098  class DenseIterator : public Mask<Log2Dim>::DenseIterator
5099  {
5100  using BaseT = typename Mask<Log2Dim>::DenseIterator;
5101  const DataType* mParent;
5102 
5103  public:
5104  __hostdev__ DenseIterator()
5105  : BaseT()
5106  , mParent(nullptr)
5107  {
5108  }
5109  __hostdev__ DenseIterator(const InternalNode* parent)
5110  : BaseT(0)
5111  , mParent(parent->data())
5112  {
5113  }
5114  DenseIterator& operator=(const DenseIterator&) = default;
5115  __hostdev__ const ChildT* probeChild(ValueType& value) const
5116  {
5117  NANOVDB_ASSERT(mParent && bool(*this));
5118  const ChildT* child = nullptr;
5119  if (mParent->mChildMask.isOn(BaseT::pos())) {
5120  child = mParent->getChild(BaseT::pos());
5121  } else {
5122  value = mParent->getValue(BaseT::pos());
5123  }
5124  return child;
5125  }
5126  __hostdev__ bool isValueOn() const
5127  {
5128  NANOVDB_ASSERT(mParent && bool(*this));
5129  return mParent->isActive(BaseT::pos());
5130  }
5131  __hostdev__ CoordType getOrigin() const
5132  {
5133  NANOVDB_ASSERT(mParent && bool(*this));
5134  return mParent->offsetToGlobalCoord(BaseT::pos());
5135  }
5136  __hostdev__ CoordType getCoord() const {return this->getOrigin();}
5137  }; // Member class DenseIterator
5138 
5139  __hostdev__ DenseIterator beginDense() const { return DenseIterator(this); }
5140  __hostdev__ DenseIterator cbeginChildAll() const { return DenseIterator(this); } // matches openvdb
5141 
5142  /// @brief This class cannot be constructed or deleted
5143  InternalNode() = delete;
5144  InternalNode(const InternalNode&) = delete;
5145  InternalNode& operator=(const InternalNode&) = delete;
5146  ~InternalNode() = delete;
5147 
5148  __hostdev__ DataType* data() { return reinterpret_cast<DataType*>(this); }
5149 
5150  __hostdev__ const DataType* data() const { return reinterpret_cast<const DataType*>(this); }
5151 
5152  /// @brief Return the dimension, in voxel units, of this internal node (typically 8*16 or 8*16*32)
5153  __hostdev__ static uint32_t dim() { return 1u << TOTAL; }
5154 
5155  /// @brief Return memory usage in bytes for the class
5156  __hostdev__ static size_t memUsage() { return DataType::memUsage(); }
5157 
5158  /// @brief Return a const reference to the bit mask of active voxels in this internal node
5159  __hostdev__ const MaskType<LOG2DIM>& valueMask() const { return DataType::mValueMask; }
5160  __hostdev__ const MaskType<LOG2DIM>& getValueMask() const { return DataType::mValueMask; }
5161 
5162  /// @brief Return a const reference to the bit mask of child nodes in this internal node
5163  __hostdev__ const MaskType<LOG2DIM>& childMask() const { return DataType::mChildMask; }
5164  __hostdev__ const MaskType<LOG2DIM>& getChildMask() const { return DataType::mChildMask; }
5165 
5166  /// @brief Return the origin in index space of this leaf node
5167  __hostdev__ CoordType origin() const { return DataType::mBBox.min() & ~MASK; }
5168 
5169  /// @brief Return a const reference to the minimum active value encoded in this internal node and any of its child nodes
5170  __hostdev__ const ValueType& minimum() const { return this->getMin(); }
5171 
5172  /// @brief Return a const reference to the maximum active value encoded in this internal node and any of its child nodes
5173  __hostdev__ const ValueType& maximum() const { return this->getMax(); }
5174 
5175  /// @brief Return a const reference to the average of all the active values encoded in this internal node and any of its child nodes
5176  __hostdev__ const FloatType& average() const { return DataType::mAverage; }
5177 
5178  /// @brief Return the variance of all the active values encoded in this internal node and any of its child nodes
5179  __hostdev__ FloatType variance() const { return DataType::mStdDevi * DataType::mStdDevi; }
5180 
5181  /// @brief Return a const reference to the standard deviation of all the active values encoded in this internal node and any of its child nodes
5182  __hostdev__ const FloatType& stdDeviation() const { return DataType::mStdDevi; }
5183 
5184  /// @brief Return a const reference to the bounding box in index space of active values in this internal node and any of its child nodes
5185  __hostdev__ const BBox<CoordType>& bbox() const { return DataType::mBBox; }
5186 
5187  /// @brief If the first entry in this node's table is a tile, return the tile's value.
5188  /// Otherwise, return the result of calling getFirstValue() on the child.
5189  __hostdev__ ValueType getFirstValue() const
5190  {
5191  return DataType::mChildMask.isOn(0) ? this->getChild(0)->getFirstValue() : DataType::getValue(0);
5192  }
5193 
5194  /// @brief If the last entry in this node's table is a tile, return the tile's value.
5195  /// Otherwise, return the result of calling getLastValue() on the child.
5196  __hostdev__ ValueType getLastValue() const
5197  {
5198  return DataType::mChildMask.isOn(SIZE - 1) ? this->getChild(SIZE - 1)->getLastValue() : DataType::getValue(SIZE - 1);
5199  }
5200 
5201 #ifdef NANOVDB_NEW_ACCESSOR_METHODS
5202  /// @brief Return the value of the given voxel
5203  __hostdev__ ValueType getValue(const CoordType& ijk) const { return this->template get<GetValue<BuildType>>(ijk); }
5204  __hostdev__ bool isActive(const CoordType& ijk) const { return this->template get<GetState<BuildType>>(ijk); }
5205  /// @brief Return the active state of the specified voxel and update v with its value
5206  __hostdev__ bool probeValue(const CoordType& ijk, ValueType& v) const { return this->template get<ProbeValue<BuildType>>(ijk, v); }
5207  __hostdev__ const LeafNodeType* probeLeaf(const CoordType& ijk) const { return this->template get<GetLeaf<BuildType>>(ijk); }
5208 #else // NANOVDB_NEW_ACCESSOR_METHODS
5209  __hostdev__ ValueType getValue(const CoordType& ijk) const
5210  {
5211  const uint32_t n = CoordToOffset(ijk);
5212  return DataType::mChildMask.isOn(n) ? this->getChild(n)->getValue(ijk) : DataType::getValue(n);
5213  }
5214  __hostdev__ bool isActive(const CoordType& ijk) const
5215  {
5216  const uint32_t n = CoordToOffset(ijk);
5217  return DataType::mChildMask.isOn(n) ? this->getChild(n)->isActive(ijk) : DataType::isActive(n);
5218  }
5219  __hostdev__ bool probeValue(const CoordType& ijk, ValueType& v) const
5220  {
5221  const uint32_t n = CoordToOffset(ijk);
5222  if (DataType::mChildMask.isOn(n))
5223  return this->getChild(n)->probeValue(ijk, v);
5224  v = DataType::getValue(n);
5225  return DataType::isActive(n);
5226  }
5227  __hostdev__ const LeafNodeType* probeLeaf(const CoordType& ijk) const
5228  {
5229  const uint32_t n = CoordToOffset(ijk);
5230  if (DataType::mChildMask.isOn(n))
5231  return this->getChild(n)->probeLeaf(ijk);
5232  return nullptr;
5233  }
5234 
5235 #endif // NANOVDB_NEW_ACCESSOR_METHODS
5236 
5237  __hostdev__ ChildNodeType* probeChild(const CoordType& ijk)
5238  {
5239  const uint32_t n = CoordToOffset(ijk);
5240  return DataType::mChildMask.isOn(n) ? this->getChild(n) : nullptr;
5241  }
5242  __hostdev__ const ChildNodeType* probeChild(const CoordType& ijk) const
5243  {
5244  const uint32_t n = CoordToOffset(ijk);
5245  return DataType::mChildMask.isOn(n) ? this->getChild(n) : nullptr;
5246  }
5247 
5248  /// @brief Return the linear offset corresponding to the given coordinate
5249  __hostdev__ static uint32_t CoordToOffset(const CoordType& ijk)
5250  {
5251  return (((ijk[0] & MASK) >> ChildT::TOTAL) << (2 * LOG2DIM)) | // note, we're using bitwise OR instead of +
5252  (((ijk[1] & MASK) >> ChildT::TOTAL) << (LOG2DIM)) |
5253  ((ijk[2] & MASK) >> ChildT::TOTAL);
5254  }
5255 
5256  /// @return the local coordinate of the n'th tile or child node
5257  __hostdev__ static Coord OffsetToLocalCoord(uint32_t n)
5258  {
5259  NANOVDB_ASSERT(n < SIZE);
5260  const uint32_t m = n & ((1 << 2 * LOG2DIM) - 1);
5261  return Coord(n >> 2 * LOG2DIM, m >> LOG2DIM, m & ((1 << LOG2DIM) - 1));
5262  }
5263 
5264  /// @brief modifies local coordinates to global coordinates of a tile or child node
5265  __hostdev__ void localToGlobalCoord(Coord& ijk) const
5266  {
5267  ijk <<= ChildT::TOTAL;
5268  ijk += this->origin();
5269  }
5270 
5271  __hostdev__ Coord offsetToGlobalCoord(uint32_t n) const
5272  {
5273  Coord ijk = InternalNode::OffsetToLocalCoord(n);
5274  this->localToGlobalCoord(ijk);
5275  return ijk;
5276  }
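`OffsetToLocalCoord` is the inverse of the packing above: it splits the linear table offset back into per-axis indices in child-node units. A standalone sketch under the same assumed configuration (LOG2DIM = 4; the function name and `std::array` return type are illustrative):

```cpp
#include <array>
#include <cstdint>

constexpr uint32_t LOG2DIM = 4; // assumed node configuration (16^3 table)

// Mirrors InternalNode::OffsetToLocalCoord: the high 2*LOG2DIM bits hold i,
// the next LOG2DIM bits hold j, and the low LOG2DIM bits hold k.
constexpr std::array<uint32_t, 3> offsetToLocalCoord(uint32_t n)
{
    const uint32_t m = n & ((1u << 2 * LOG2DIM) - 1u);
    return { n >> (2 * LOG2DIM), m >> LOG2DIM, m & ((1u << LOG2DIM) - 1u) };
}
```

Composing this with the packing function round-trips any offset; `localToGlobalCoord` then shifts the result by `ChildT::TOTAL` and adds the node origin to reach index space.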
5277 
5278  /// @brief Return true if this node or any of its child nodes contain active values
5279  __hostdev__ bool isActive() const { return DataType::mFlags & uint32_t(2); }
5280 
5281  template<typename OpT, typename... ArgsT>
5282  __hostdev__ auto get(const CoordType& ijk, ArgsT&&... args) const
5283  {
5284  const uint32_t n = CoordToOffset(ijk);
5285  if (this->isChild(n))
5286  return this->getChild(n)->template get<OpT>(ijk, args...);
5287  return OpT::get(*this, n, args...);
5288  }
5289 
5290  template<typename OpT, typename... ArgsT>
5291  __hostdev__ auto // occasionally fails with NVCC
5292 // __hostdev__ decltype(OpT::set(std::declval<InternalNode&>(), std::declval<uint32_t>(), std::declval<ArgsT>()...))
5293  set(const CoordType& ijk, ArgsT&&... args)
5294  {
5295  const uint32_t n = CoordToOffset(ijk);
5296  if (this->isChild(n))
5297  return this->getChild(n)->template set<OpT>(ijk, args...);
5298  return OpT::set(*this, n, args...);
5299  }
5300 
5301 private:
5302  static_assert(sizeof(DataType) % NANOVDB_DATA_ALIGNMENT == 0, "sizeof(InternalData) is misaligned");
5303 
5304  template<typename, int, int, int>
5305  friend class ReadAccessor;
5306 
5307  template<typename>
5308  friend class RootNode;
5309  template<typename, uint32_t>
5310  friend class InternalNode;
5311 
5312 #ifndef NANOVDB_NEW_ACCESSOR_METHODS
5313  /// @brief Private read access method used by the ReadAccessor
5314  template<typename AccT>
5315  __hostdev__ ValueType getValueAndCache(const CoordType& ijk, const AccT& acc) const
5316  {
5317  const uint32_t n = CoordToOffset(ijk);
5318  if (DataType::mChildMask.isOff(n))
5319  return DataType::getValue(n);
5320  const ChildT* child = this->getChild(n);
5321  acc.insert(ijk, child);
5322  return child->getValueAndCache(ijk, acc);
5323  }
5324  template<typename AccT>
5325  __hostdev__ bool isActiveAndCache(const CoordType& ijk, const AccT& acc) const
5326  {
5327  const uint32_t n = CoordToOffset(ijk);
5328  if (DataType::mChildMask.isOff(n))
5329  return DataType::isActive(n);
5330  const ChildT* child = this->getChild(n);
5331  acc.insert(ijk, child);
5332  return child->isActiveAndCache(ijk, acc);
5333  }
5334  template<typename AccT>
5335  __hostdev__ bool probeValueAndCache(const CoordType& ijk, ValueType& v, const AccT& acc) const
5336  {
5337  const uint32_t n = CoordToOffset(ijk);
5338  if (DataType::mChildMask.isOff(n)) {
5339  v = DataType::getValue(n);
5340  return DataType::isActive(n);
5341  }
5342  const ChildT* child = this->getChild(n);
5343  acc.insert(ijk, child);
5344  return child->probeValueAndCache(ijk, v, acc);
5345  }
5346  template<typename AccT>
5347  __hostdev__ const LeafNodeType* probeLeafAndCache(const CoordType& ijk, const AccT& acc) const
5348  {
5349  const uint32_t n = CoordToOffset(ijk);
5350  if (DataType::mChildMask.isOff(n))
5351  return nullptr;
5352  const ChildT* child = this->getChild(n);
5353  acc.insert(ijk, child);
5354  return child->probeLeafAndCache(ijk, acc);
5355  }
5356  template<typename AccT>
5357  __hostdev__ typename AccT::NodeInfo getNodeInfoAndCache(const CoordType& ijk, const AccT& acc) const
5358  {
5359  using NodeInfoT = typename AccT::NodeInfo;
5360  const uint32_t n = CoordToOffset(ijk);
5361  if (DataType::mChildMask.isOff(n)) {
5362  return NodeInfoT{LEVEL, this->dim(), this->minimum(), this->maximum(), this->average(), this->stdDeviation(), this->bbox()[0], this->bbox()[1]};
5363  }
5364  const ChildT* child = this->getChild(n);
5365  acc.insert(ijk, child);
5366  return child->getNodeInfoAndCache(ijk, acc);
5367  }
5368 #endif // NANOVDB_NEW_ACCESSOR_METHODS
5369 
5370  template<typename RayT, typename AccT>
5371  __hostdev__ uint32_t getDimAndCache(const CoordType& ijk, const RayT& ray, const AccT& acc) const
5372  {
5373  if (DataType::mFlags & uint32_t(1u))
5374  return this->dim(); // skip this node if the 1st bit is set
5375  //if (!ray.intersects( this->bbox() )) return 1<<TOTAL;
5376 
5377  const uint32_t n = CoordToOffset(ijk);
5378  if (DataType::mChildMask.isOn(n)) {
5379  const ChildT* child = this->getChild(n);
5380  acc.insert(ijk, child);
5381  return child->getDimAndCache(ijk, ray, acc);
5382  }
5383  return ChildNodeType::dim(); // tile value
5384  }
5385 
5386  template<typename OpT, typename AccT, typename... ArgsT>
5387  __hostdev__ auto
5388  //__hostdev__ decltype(OpT::get(std::declval<const InternalNode&>(), std::declval<uint32_t>(), std::declval<ArgsT>()...))
5389  getAndCache(const CoordType& ijk, const AccT& acc, ArgsT&&... args) const
5390  {
5391  const uint32_t n = CoordToOffset(ijk);
5392  if (DataType::mChildMask.isOff(n))
5393  return OpT::get(*this, n, args...);
5394  const ChildT* child = this->getChild(n);
5395  acc.insert(ijk, child);
5396  return child->template getAndCache<OpT>(ijk, acc, args...);
5397  }
5398 
5399  template<typename OpT, typename AccT, typename... ArgsT>
5400  __hostdev__ auto // occasionally fails with NVCC
5401 // __hostdev__ decltype(OpT::set(std::declval<InternalNode&>(), std::declval<uint32_t>(), std::declval<ArgsT>()...))
5402  setAndCache(const CoordType& ijk, const AccT& acc, ArgsT&&... args)
5403  {
5404  const uint32_t n = CoordToOffset(ijk);
5405  if (DataType::mChildMask.isOff(n))
5406  return OpT::set(*this, n, args...);
5407  ChildT* child = this->getChild(n);
5408  acc.insert(ijk, child);
5409  return child->template setAndCache<OpT>(ijk, acc, args...);
5410  }
5411 
5412 }; // InternalNode class
5413 
5414 // --------------------------> LeafData<T> <------------------------------------
5415 
5416 /// @brief Struct with all the member data of the LeafNode (useful during serialization of an openvdb LeafNode)
5417 ///
5418 /// @note No client code should (or can) interface with this struct so it can safely be ignored!
5419 template<typename ValueT, typename CoordT, template<uint32_t> class MaskT, uint32_t LOG2DIM>
5420 struct NANOVDB_ALIGN(NANOVDB_DATA_ALIGNMENT) LeafData
5421 {
5422  static_assert(sizeof(CoordT) == sizeof(Coord), "Mismatching sizeof");
5423  static_assert(sizeof(MaskT<LOG2DIM>) == sizeof(Mask<LOG2DIM>), "Mismatching sizeof");
5424  using ValueType = ValueT;
5425  using BuildType = ValueT;
5426  using FloatType = typename FloatTraits<ValueT>::FloatType;
5427  using ArrayType = ValueT; // type used for the internal mValue array
5428  static constexpr bool FIXED_SIZE = true;
5429 
5430  CoordT mBBoxMin; // 12B.
5431  uint8_t mBBoxDif[3]; // 3B.
5432  uint8_t mFlags; // 1B. bit0: skip render?, bit1: has bbox?, bit3: unused, bit4: has stats, bits5,6,7: bit-width for FpN
5433  MaskT<LOG2DIM> mValueMask; // LOG2DIM(3): 64B.
5434 
5435  ValueType mMinimum; // typically 4B
5436  ValueType mMaximum; // typically 4B
5437  FloatType mAverage; // typically 4B, average of all the active values in this node and its child nodes
5438  FloatType mStdDevi; // typically 4B, standard deviation of all the active values in this node and its child nodes
5439  alignas(32) ValueType mValues[1u << 3 * LOG2DIM];
5440 
5441  /// @brief Return padding of this class in bytes, due to aliasing and 32B alignment
5442  ///
5443  /// @note The extra bytes are not necessarily at the end, but can come from aliasing of individual data members.
5444  __hostdev__ static constexpr uint32_t padding()
5445  {
5446  return sizeof(LeafData) - (12 + 3 + 1 + sizeof(MaskT<LOG2DIM>) + 2 * (sizeof(ValueT) + sizeof(FloatType)) + (1u << (3 * LOG2DIM)) * sizeof(ValueT));
5447  }
5448  __hostdev__ static uint64_t memUsage() { return sizeof(LeafData); }
5449 
5450  __hostdev__ static bool hasStats() { return true; }
5451 
5452  __hostdev__ ValueType getValue(uint32_t i) const { return mValues[i]; }
5453  __hostdev__ void setValueOnly(uint32_t offset, const ValueType& value) { mValues[offset] = value; }
5454  __hostdev__ void setValue(uint32_t offset, const ValueType& value)
5455  {
5456  mValueMask.setOn(offset);
5457  mValues[offset] = value;
5458  }
5459  __hostdev__ void setOn(uint32_t offset) { mValueMask.setOn(offset); }
5460 
5461  __hostdev__ ValueType getMin() const { return mMinimum; }
5462  __hostdev__ ValueType getMax() const { return mMaximum; }
5463  __hostdev__ FloatType getAvg() const { return mAverage; }
5464  __hostdev__ FloatType getDev() const { return mStdDevi; }
5465 
5466  __hostdev__ void setMin(const ValueType& v) { mMinimum = v; }
5467  __hostdev__ void setMax(const ValueType& v) { mMaximum = v; }
5468  __hostdev__ void setAvg(const FloatType& v) { mAverage = v; }
5469  __hostdev__ void setDev(const FloatType& v) { mStdDevi = v; }
5470 
5471  template<typename T>
5472  __hostdev__ void setOrigin(const T& ijk) { mBBoxMin = ijk; }
5473 
5474  __hostdev__ void fill(const ValueType& v)
5475  {
5476  for (auto *p = mValues, *q = p + 512; p != q; ++p)
5477  *p = v;
5478  }
5479 
5480  /// @brief This class cannot be constructed or deleted
5481  LeafData() = delete;
5482  LeafData(const LeafData&) = delete;
5483  LeafData& operator=(const LeafData&) = delete;
5484  ~LeafData() = delete;
5485 }; // LeafData<ValueT>
5486 
5487 // --------------------------> LeafFnBase <------------------------------------
5488 
5489 /// @brief Base-class for quantized float leaf nodes
5490 template<typename CoordT, template<uint32_t> class MaskT, uint32_t LOG2DIM>
5491 struct NANOVDB_ALIGN(NANOVDB_DATA_ALIGNMENT) LeafFnBase
5492 {
5493  static_assert(sizeof(CoordT) == sizeof(Coord), "Mismatching sizeof");
5494  static_assert(sizeof(MaskT<LOG2DIM>) == sizeof(Mask<LOG2DIM>), "Mismatching sizeof");
5495  using ValueType = float;
5496  using FloatType = float;
5497 
5498  CoordT mBBoxMin; // 12B.
5499  uint8_t mBBoxDif[3]; // 3B.
5500  uint8_t mFlags; // 1B. bit0: skip render?, bit1: has bbox?, bit3: unused, bit4: has stats, bits5,6,7: bit-width for FpN
5501  MaskT<LOG2DIM> mValueMask; // LOG2DIM(3): 64B.
5502 
5503  float mMinimum; // 4B - minimum of ALL values in this node
5504  float mQuantum; // = (max - min)/15 4B
5505  uint16_t mMin, mMax, mAvg, mDev; // quantized representations of statistics of active values
5506  // no padding since it's always 32B aligned
5507  __hostdev__ static uint64_t memUsage() { return sizeof(LeafFnBase); }
5508 
5509  __hostdev__ static bool hasStats() { return true; }
5510 
5511  /// @brief Return padding of this class in bytes, due to aliasing and 32B alignment
5512  ///
5513  /// @note The extra bytes are not necessarily at the end, but can come from aliasing of individual data members.
5514  __hostdev__ static constexpr uint32_t padding()
5515  {
5516  return sizeof(LeafFnBase) - (12 + 3 + 1 + sizeof(MaskT<LOG2DIM>) + 2 * 4 + 4 * 2);
5517  }
5518  __hostdev__ void init(float min, float max, uint8_t bitWidth)
5519  {
5520  mMinimum = min;
5521  mQuantum = (max - min) / float((1 << bitWidth) - 1);
5522  }
5523 
5524  __hostdev__ void setOn(uint32_t offset) { mValueMask.setOn(offset); }
5525 
5526  /// @brief return the quantized minimum of the active values in this node
5527  __hostdev__ float getMin() const { return mMin * mQuantum + mMinimum; }
5528 
5529  /// @brief return the quantized maximum of the active values in this node
5530  __hostdev__ float getMax() const { return mMax * mQuantum + mMinimum; }
5531 
5532  /// @brief return the quantized average of the active values in this node
5533  __hostdev__ float getAvg() const { return mAvg * mQuantum + mMinimum; }
5534  /// @brief return the quantized standard deviation of the active values in this node
5535 
5536  /// @note 0 <= StdDev <= max-min or 0 <= StdDev/(max-min) <= 1
5537  __hostdev__ float getDev() const { return mDev * mQuantum; }
5538 
5539  /// @note min <= X <= max or 0 <= (X-min)/(max-min) <= 1
5540  __hostdev__ void setMin(float min) { mMin = uint16_t((min - mMinimum) / mQuantum + 0.5f); }
5541 
5542  /// @note min <= X <= max or 0 <= (X-min)/(max-min) <= 1
5543  __hostdev__ void setMax(float max) { mMax = uint16_t((max - mMinimum) / mQuantum + 0.5f); }
5544 
5545  /// @note min <= avg <= max or 0 <= (avg-min)/(max-min) <= 1
5546  __hostdev__ void setAvg(float avg) { mAvg = uint16_t((avg - mMinimum) / mQuantum + 0.5f); }
5547 
5548  /// @note 0 <= StdDev <= max-min or 0 <= StdDev/(max-min) <= 1
5549  __hostdev__ void setDev(float dev) { mDev = uint16_t(dev / mQuantum + 0.5f); }
5550 
5551  template<typename T>
5552  __hostdev__ void setOrigin(const T& ijk) { mBBoxMin = ijk; }
5553 }; // LeafFnBase
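`LeafFnBase` stores values and statistics as linear quantization codes: `init` fixes `mMinimum` and `mQuantum = (max - min) / (2^bitWidth - 1)`, encoding maps a float to the nearest code, and decoding computes `code * mQuantum + mMinimum`. A minimal standalone sketch of this round trip (the `QuantSketch` type is illustrative, not a NanoVDB class; 8-bit codes assumed in the example):

```cpp
#include <cmath>
#include <cstdint>

// Sketch of LeafFnBase-style linear quantization over [min, max].
struct QuantSketch
{
    float mMinimum, mQuantum;

    void init(float min, float max, uint8_t bitWidth)
    {
        mMinimum = min;
        mQuantum = (max - min) / float((1 << bitWidth) - 1);
    }
    // round-to-nearest code, as in setMin/setMax/setAvg above
    uint16_t encode(float v) const { return uint16_t((v - mMinimum) / mQuantum + 0.5f); }
    // decode exactly as getMin/getMax/getAvg above
    float decode(uint16_t code) const { return code * mQuantum + mMinimum; }
};
```

The decode error of any in-range value is at most half a quantum, which is why the node-level min/max/avg survive 16-bit storage with negligible loss.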
5554 
5555 // --------------------------> LeafData<Fp4> <------------------------------------
5556 
5557 /// @brief Struct with all the member data of the LeafNode (useful during serialization of an openvdb LeafNode)
5558 ///
5559 /// @note No client code should (or can) interface with this struct so it can safely be ignored!
5560 template<typename CoordT, template<uint32_t> class MaskT, uint32_t LOG2DIM>
5561 struct NANOVDB_ALIGN(NANOVDB_DATA_ALIGNMENT) LeafData<Fp4, CoordT, MaskT, LOG2DIM>
5562  : public LeafFnBase<CoordT, MaskT, LOG2DIM>
5563 {
5564  using BaseT = LeafFnBase<CoordT, MaskT, LOG2DIM>;
5565  using BuildType = Fp4;
5566  using ArrayType = uint8_t; // type used for the internal mValue array
5567  static constexpr bool FIXED_SIZE = true;
5568  alignas(32) uint8_t mCode[1u << (3 * LOG2DIM - 1)]; // LeafFnBase is 32B aligned and so is mCode
5569 
5570  __hostdev__ static constexpr uint64_t memUsage() { return sizeof(LeafData); }
5571  __hostdev__ static constexpr uint32_t padding()
5572  {
5573  static_assert(BaseT::padding() == 0, "expected no padding in LeafFnBase");
5574  return sizeof(LeafData) - sizeof(BaseT) - (1u << (3 * LOG2DIM - 1));
5575  }
5576 
5577  __hostdev__ static constexpr uint8_t bitWidth() { return 4u; }
5578  __hostdev__ float getValue(uint32_t i) const
5579  {
5580 #if 0
5581  const uint8_t c = mCode[i>>1];
5582  return ( (i&1) ? c >> 4 : c & uint8_t(15) )*BaseT::mQuantum + BaseT::mMinimum;
5583 #else
5584  return ((mCode[i >> 1] >> ((i & 1) << 2)) & uint8_t(15)) * BaseT::mQuantum + BaseT::mMinimum;
5585 #endif
5586  }
5587 
5588  /// @brief This class cannot be constructed or deleted
5589  LeafData() = delete;
5590  LeafData(const LeafData&) = delete;
5591  LeafData& operator=(const LeafData&) = delete;
5592  ~LeafData() = delete;
5593 }; // LeafData<Fp4>
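In `LeafData<Fp4>::getValue` two 4-bit codes share each byte of `mCode`, and the branchless form selects the high or low nibble with a shift of `(i & 1) << 2` (0 or 4). A standalone sketch of just that extraction (the `nibble` helper is illustrative):

```cpp
#include <cstdint>

// Mirrors the branchless nibble extraction in LeafData<Fp4>::getValue:
// even indices read the low nibble of mCode[i/2], odd indices the high one.
inline uint8_t nibble(const uint8_t* code, uint32_t i)
{
    return (code[i >> 1] >> ((i & 1) << 2)) & uint8_t(15);
}
```

The extracted code is then dequantized exactly as in `LeafFnBase`, i.e. `nibble * mQuantum + mMinimum`.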
5594 
5595 // --------------------------> LeafBase<Fp8> <------------------------------------
5596 
5597 template<typename CoordT, template<uint32_t> class MaskT, uint32_t LOG2DIM>
5598 struct NANOVDB_ALIGN(NANOVDB_DATA_ALIGNMENT) LeafData<Fp8, CoordT, MaskT, LOG2DIM>
5599  : public LeafFnBase<CoordT, MaskT, LOG2DIM>
5600 {
5601  using BaseT = LeafFnBase<CoordT, MaskT, LOG2DIM>;
5602  using BuildType = Fp8;
5603  using ArrayType = uint8_t; // type used for the internal mValue array
5604  static constexpr bool FIXED_SIZE = true;
5605  alignas(32) uint8_t mCode[1u << 3 * LOG2DIM];
5606  __hostdev__ static constexpr int64_t memUsage() { return sizeof(LeafData); }
5607  __hostdev__ static constexpr uint32_t padding()
5608  {
5609  static_assert(BaseT::padding() == 0, "expected no padding in LeafFnBase");
5610  return sizeof(LeafData) - sizeof(BaseT) - (1u << 3 * LOG2DIM);
5611  }
5612 
5613  __hostdev__ static constexpr uint8_t bitWidth() { return 8u; }
5614  __hostdev__ float getValue(uint32_t i) const
5615  {
5616  return mCode[i] * BaseT::mQuantum + BaseT::mMinimum; // code * (max-min)/255 + min
5617  }
5618  /// @brief This class cannot be constructed or deleted
5619  LeafData() = delete;
5620  LeafData(const LeafData&) = delete;
5621  LeafData& operator=(const LeafData&) = delete;
5622  ~LeafData() = delete;
5623 }; // LeafData<Fp8>

// --------------------------> LeafData<Fp16> <------------------------------------

template<typename CoordT, template<uint32_t> class MaskT, uint32_t LOG2DIM>
struct NANOVDB_ALIGN(NANOVDB_DATA_ALIGNMENT) LeafData<Fp16, CoordT, MaskT, LOG2DIM>
    : public LeafFnBase<CoordT, MaskT, LOG2DIM>
{
    using BaseT = LeafFnBase<CoordT, MaskT, LOG2DIM>;
    using BuildType = Fp16;
    using ArrayType = uint16_t; // type used for the internal mValue array
    static constexpr bool FIXED_SIZE = true;
    alignas(32) uint16_t mCode[1u << 3 * LOG2DIM];

    __hostdev__ static constexpr uint64_t memUsage() { return sizeof(LeafData); }
    __hostdev__ static constexpr uint32_t padding()
    {
        static_assert(BaseT::padding() == 0, "expected no padding in LeafFnBase");
        return sizeof(LeafData) - sizeof(BaseT) - 2 * (1u << 3 * LOG2DIM);
    }

    __hostdev__ static constexpr uint8_t bitWidth() { return 16u; }
    __hostdev__ float getValue(uint32_t i) const
    {
        return mCode[i] * BaseT::mQuantum + BaseT::mMinimum; // code * (max-min)/65535 + min
    }

    /// @brief This class cannot be constructed or deleted
    LeafData() = delete;
    LeafData(const LeafData&) = delete;
    LeafData& operator=(const LeafData&) = delete;
    ~LeafData() = delete;
}; // LeafData<Fp16>

// --------------------------> LeafData<FpN> <------------------------------------

template<typename CoordT, template<uint32_t> class MaskT, uint32_t LOG2DIM>
struct NANOVDB_ALIGN(NANOVDB_DATA_ALIGNMENT) LeafData<FpN, CoordT, MaskT, LOG2DIM>
    : public LeafFnBase<CoordT, MaskT, LOG2DIM>
{ // this class has no additional data members; however, every instance is immediately followed
  // by bitWidth*64 bytes. Since its base class is 32B aligned, so are those trailing bitWidth*64 bytes
    using BaseT = LeafFnBase<CoordT, MaskT, LOG2DIM>;
    using BuildType = FpN;
    static constexpr bool FIXED_SIZE = false;
    __hostdev__ static constexpr uint32_t padding()
    {
        static_assert(BaseT::padding() == 0, "expected no padding in LeafFnBase");
        return 0;
    }

    __hostdev__ uint8_t bitWidth() const { return 1 << (BaseT::mFlags >> 5); } // 1,2,4,8,16 = 2^(0,1,2,3,4)
    __hostdev__ size_t memUsage() const { return sizeof(*this) + this->bitWidth() * 64; }
    __hostdev__ static size_t memUsage(uint32_t bitWidth) { return 96u + bitWidth * 64; }
    __hostdev__ float getValue(uint32_t i) const
    {
#ifdef NANOVDB_FPN_BRANCHLESS // faster
        const int b = BaseT::mFlags >> 5; // b = 0, 1, 2, 3, 4 corresponding to 1, 2, 4, 8, 16 bits
#if 0 // use LUT
        uint16_t code = reinterpret_cast<const uint16_t*>(this + 1)[i >> (4 - b)];
        const static uint8_t shift[5] = {15, 7, 3, 1, 0};
        const static uint16_t mask[5] = {1, 3, 15, 255, 65535};
        code >>= (i & shift[b]) << b;
        code &= mask[b];
#else // no LUT
        uint32_t code = reinterpret_cast<const uint32_t*>(this + 1)[i >> (5 - b)];
        code >>= (i & ((32 >> b) - 1)) << b;
        code &= (1 << (1 << b)) - 1;
#endif
#else // use branched version (slower)
        float code;
        auto* values = reinterpret_cast<const uint8_t*>(this + 1);
        switch (BaseT::mFlags >> 5) {
        case 0u: // 1 bit float
            code = float((values[i >> 3] >> (i & 7)) & uint8_t(1));
            break;
        case 1u: // 2 bits float
            code = float((values[i >> 2] >> ((i & 3) << 1)) & uint8_t(3));
            break;
        case 2u: // 4 bits float
            code = float((values[i >> 1] >> ((i & 1) << 2)) & uint8_t(15));
            break;
        case 3u: // 8 bits float
            code = float(values[i]);
            break;
        default: // 16 bits float
            code = float(reinterpret_cast<const uint16_t*>(values)[i]);
        }
#endif
        return float(code) * BaseT::mQuantum + BaseT::mMinimum; // code * (max-min)/UNITS + min
    }

    /// @brief This class cannot be constructed or deleted
    LeafData() = delete;
    LeafData(const LeafData&) = delete;
    LeafData& operator=(const LeafData&) = delete;
    ~LeafData() = delete;
}; // LeafData<FpN>

// --------------------------> LeafData<bool> <------------------------------------

// Partial template specialization of LeafData with bool
template<typename CoordT, template<uint32_t> class MaskT, uint32_t LOG2DIM>
struct NANOVDB_ALIGN(NANOVDB_DATA_ALIGNMENT) LeafData<bool, CoordT, MaskT, LOG2DIM>
{
    static_assert(sizeof(CoordT) == sizeof(Coord), "Mismatching sizeof");
    static_assert(sizeof(MaskT<LOG2DIM>) == sizeof(Mask<LOG2DIM>), "Mismatching sizeof");
    using ValueType = bool;
    using BuildType = bool;
    using FloatType = bool; // dummy value type
    using ArrayType = MaskT<LOG2DIM>; // type used for the internal mValue array
    static constexpr bool FIXED_SIZE = true;

    CoordT mBBoxMin; // 12B.
    uint8_t mBBoxDif[3]; // 3B.
    uint8_t mFlags; // 1B. bit0: skip render?, bit1: has bbox?, bit3: unused, bit4: has stats, bits5,6,7: bit-width for FpN
    MaskT<LOG2DIM> mValueMask; // LOG2DIM(3): 64B.
    MaskT<LOG2DIM> mValues; // LOG2DIM(3): 64B.
    uint64_t mPadding[2]; // 16B padding to 32B alignment

    __hostdev__ static constexpr uint32_t padding() { return sizeof(LeafData) - 12u - 3u - 1u - 2 * sizeof(MaskT<LOG2DIM>) - 16u; }
    __hostdev__ static uint64_t memUsage() { return sizeof(LeafData); }
    __hostdev__ static bool hasStats() { return false; }
    __hostdev__ bool getValue(uint32_t i) const { return mValues.isOn(i); }
    __hostdev__ bool getMin() const { return false; } // dummy
    __hostdev__ bool getMax() const { return false; } // dummy
    __hostdev__ bool getAvg() const { return false; } // dummy
    __hostdev__ bool getDev() const { return false; } // dummy
    __hostdev__ void setValue(uint32_t offset, bool v)
    {
        mValueMask.setOn(offset);
        mValues.set(offset, v);
    }
    __hostdev__ void setOn(uint32_t offset) { mValueMask.setOn(offset); }
    __hostdev__ void setMin(const bool&) {} // no-op
    __hostdev__ void setMax(const bool&) {} // no-op
    __hostdev__ void setAvg(const bool&) {} // no-op
    __hostdev__ void setDev(const bool&) {} // no-op

    template<typename T>
    __hostdev__ void setOrigin(const T& ijk) { mBBoxMin = ijk; }

    /// @brief This class cannot be constructed or deleted
    LeafData() = delete;
    LeafData(const LeafData&) = delete;
    LeafData& operator=(const LeafData&) = delete;
    ~LeafData() = delete;
}; // LeafData<bool>

// --------------------------> LeafData<ValueMask> <------------------------------------

// Partial template specialization of LeafData with ValueMask
template<typename CoordT, template<uint32_t> class MaskT, uint32_t LOG2DIM>
struct NANOVDB_ALIGN(NANOVDB_DATA_ALIGNMENT) LeafData<ValueMask, CoordT, MaskT, LOG2DIM>
{
    static_assert(sizeof(CoordT) == sizeof(Coord), "Mismatching sizeof");
    static_assert(sizeof(MaskT<LOG2DIM>) == sizeof(Mask<LOG2DIM>), "Mismatching sizeof");
    using ValueType = bool;
    using BuildType = ValueMask;
    using FloatType = bool; // dummy value type
    using ArrayType = void; // type used for the internal mValue array - void means missing
    static constexpr bool FIXED_SIZE = true;

    CoordT mBBoxMin; // 12B.
    uint8_t mBBoxDif[3]; // 3B.
    uint8_t mFlags; // 1B. bit0: skip render?, bit1: has bbox?, bit3: unused, bit4: has stats, bits5,6,7: bit-width for FpN
    MaskT<LOG2DIM> mValueMask; // LOG2DIM(3): 64B.
    uint64_t mPadding[2]; // 16B padding to 32B alignment

    __hostdev__ static uint64_t memUsage() { return sizeof(LeafData); }
    __hostdev__ static bool hasStats() { return false; }
    __hostdev__ static constexpr uint32_t padding()
    {
        return sizeof(LeafData) - (12u + 3u + 1u + sizeof(MaskT<LOG2DIM>) + 2 * 8u);
    }

    __hostdev__ bool getValue(uint32_t i) const { return mValueMask.isOn(i); }
    __hostdev__ bool getMin() const { return false; } // dummy
    __hostdev__ bool getMax() const { return false; } // dummy
    __hostdev__ bool getAvg() const { return false; } // dummy
    __hostdev__ bool getDev() const { return false; } // dummy
    __hostdev__ void setValue(uint32_t offset, bool) { mValueMask.setOn(offset); }
    __hostdev__ void setOn(uint32_t offset) { mValueMask.setOn(offset); }
    __hostdev__ void setMin(const ValueType&) {} // no-op
    __hostdev__ void setMax(const ValueType&) {} // no-op
    __hostdev__ void setAvg(const FloatType&) {} // no-op
    __hostdev__ void setDev(const FloatType&) {} // no-op

    template<typename T>
    __hostdev__ void setOrigin(const T& ijk) { mBBoxMin = ijk; }

    /// @brief This class cannot be constructed or deleted
    LeafData() = delete;
    LeafData(const LeafData&) = delete;
    LeafData& operator=(const LeafData&) = delete;
    ~LeafData() = delete;
}; // LeafData<ValueMask>

// --------------------------> LeafIndexBase <------------------------------------

// Common base class for the LeafData specializations of ValueIndex and ValueOnIndex
template<typename CoordT, template<uint32_t> class MaskT, uint32_t LOG2DIM>
struct NANOVDB_ALIGN(NANOVDB_DATA_ALIGNMENT) LeafIndexBase
{
    static_assert(sizeof(CoordT) == sizeof(Coord), "Mismatching sizeof");
    static_assert(sizeof(MaskT<LOG2DIM>) == sizeof(Mask<LOG2DIM>), "Mismatching sizeof");
    using ValueType = uint64_t;
    using FloatType = uint64_t;
    using ArrayType = void; // type used for the internal mValue array - void means missing
    static constexpr bool FIXED_SIZE = true;

    CoordT mBBoxMin; // 12B.
    uint8_t mBBoxDif[3]; // 3B.
    uint8_t mFlags; // 1B. bit0: skip render?, bit1: has bbox?, bit3: unused, bit4: has stats, bits5,6,7: bit-width for FpN
    MaskT<LOG2DIM> mValueMask; // LOG2DIM(3): 64B.
    uint64_t mOffset, mPrefixSum; // 8B offset to the first value in this leaf node, and 8B of packed 9-bit prefix sums over the value-mask words
    __hostdev__ static constexpr uint32_t padding()
    {
        return sizeof(LeafIndexBase) - (12u + 3u + 1u + sizeof(MaskT<LOG2DIM>) + 2 * 8u);
    }
    __hostdev__ static uint64_t memUsage() { return sizeof(LeafIndexBase); }
    __hostdev__ bool hasStats() const { return mFlags & (uint8_t(1) << 4); }
    // return the offset to the first value indexed by this leaf node
    __hostdev__ const uint64_t& firstOffset() const { return mOffset; }
    __hostdev__ void setMin(const ValueType&) {} // no-op
    __hostdev__ void setMax(const ValueType&) {} // no-op
    __hostdev__ void setAvg(const FloatType&) {} // no-op
    __hostdev__ void setDev(const FloatType&) {} // no-op
    __hostdev__ void setOn(uint32_t offset) { mValueMask.setOn(offset); }
    template<typename T>
    __hostdev__ void setOrigin(const T& ijk) { mBBoxMin = ijk; }
}; // LeafIndexBase

// --------------------------> LeafData<ValueIndex> <------------------------------------

// Partial template specialization of LeafData with ValueIndex
template<typename CoordT, template<uint32_t> class MaskT, uint32_t LOG2DIM>
struct NANOVDB_ALIGN(NANOVDB_DATA_ALIGNMENT) LeafData<ValueIndex, CoordT, MaskT, LOG2DIM>
    : public LeafIndexBase<CoordT, MaskT, LOG2DIM>
{
    using BaseT = LeafIndexBase<CoordT, MaskT, LOG2DIM>;
    using BuildType = ValueIndex;
    // return the total number of values indexed by this leaf node, excluding the optional 4 stats
    __hostdev__ static uint32_t valueCount() { return uint32_t(512); } // 8^3 = 2^9
    // return the offset to the last value indexed by this leaf node (disregarding optional stats)
    __hostdev__ uint64_t lastOffset() const { return BaseT::mOffset + 511u; } // 2^9 - 1
    // if stats are available, they are always placed after the last voxel value in this leaf node
    __hostdev__ uint64_t getMin() const { return this->hasStats() ? BaseT::mOffset + 512u : 0u; }
    __hostdev__ uint64_t getMax() const { return this->hasStats() ? BaseT::mOffset + 513u : 0u; }
    __hostdev__ uint64_t getAvg() const { return this->hasStats() ? BaseT::mOffset + 514u : 0u; }
    __hostdev__ uint64_t getDev() const { return this->hasStats() ? BaseT::mOffset + 515u : 0u; }
    __hostdev__ uint64_t getValue(uint32_t i) const { return BaseT::mOffset + i; } // dense leaf node with active and inactive voxels

    /// @brief This class cannot be constructed or deleted
    LeafData() = delete;
    LeafData(const LeafData&) = delete;
    LeafData& operator=(const LeafData&) = delete;
    ~LeafData() = delete;
}; // LeafData<ValueIndex>

// --------------------------> LeafData<ValueOnIndex> <------------------------------------

template<typename CoordT, template<uint32_t> class MaskT, uint32_t LOG2DIM>
struct NANOVDB_ALIGN(NANOVDB_DATA_ALIGNMENT) LeafData<ValueOnIndex, CoordT, MaskT, LOG2DIM>
    : public LeafIndexBase<CoordT, MaskT, LOG2DIM>
{
    using BaseT = LeafIndexBase<CoordT, MaskT, LOG2DIM>;
    using BuildType = ValueOnIndex;
    __hostdev__ uint32_t valueCount() const
    {
        return CountOn(BaseT::mValueMask.words()[7]) + (BaseT::mPrefixSum >> 54u & 511u); // the 9-bit fields of mPrefixSum do not cover the last word in mValueMask
    }
    __hostdev__ uint64_t lastOffset() const { return BaseT::mOffset + this->valueCount() - 1u; }
    __hostdev__ uint64_t getMin() const { return this->hasStats() ? this->lastOffset() + 1u : 0u; }
    __hostdev__ uint64_t getMax() const { return this->hasStats() ? this->lastOffset() + 2u : 0u; }
    __hostdev__ uint64_t getAvg() const { return this->hasStats() ? this->lastOffset() + 3u : 0u; }
    __hostdev__ uint64_t getDev() const { return this->hasStats() ? this->lastOffset() + 4u : 0u; }
    __hostdev__ uint64_t getValue(uint32_t i) const
    {
        //return mValueMask.isOn(i) ? mOffset + mValueMask.countOn(i) : 0u;// for debugging
        uint32_t n = i >> 6;
        const uint64_t w = BaseT::mValueMask.words()[n], mask = uint64_t(1) << (i & 63u);
        if (!(w & mask)) return uint64_t(0); // if i'th value is inactive return offset to background value
        uint64_t sum = BaseT::mOffset + CountOn(w & (mask - 1u));
        if (n--) sum += BaseT::mPrefixSum >> (9u * n) & 511u;
        return sum;
    }

    /// @brief This class cannot be constructed or deleted
    LeafData() = delete;
    LeafData(const LeafData&) = delete;
    LeafData& operator=(const LeafData&) = delete;
    ~LeafData() = delete;
}; // LeafData<ValueOnIndex>

// --------------------------> LeafData<ValueIndexMask> <------------------------------------

template<typename CoordT, template<uint32_t> class MaskT, uint32_t LOG2DIM>
struct NANOVDB_ALIGN(NANOVDB_DATA_ALIGNMENT) LeafData<ValueIndexMask, CoordT, MaskT, LOG2DIM>
    : public LeafData<ValueIndex, CoordT, MaskT, LOG2DIM>
{
    using BuildType = ValueIndexMask;
    MaskT<LOG2DIM> mMask;
    __hostdev__ static uint64_t memUsage() { return sizeof(LeafData); }
    __hostdev__ bool isMaskOn(uint32_t offset) const { return mMask.isOn(offset); }
    __hostdev__ void setMask(uint32_t offset, bool v) { mMask.set(offset, v); }
}; // LeafData<ValueIndexMask>

template<typename CoordT, template<uint32_t> class MaskT, uint32_t LOG2DIM>
struct NANOVDB_ALIGN(NANOVDB_DATA_ALIGNMENT) LeafData<ValueOnIndexMask, CoordT, MaskT, LOG2DIM>
    : public LeafData<ValueOnIndex, CoordT, MaskT, LOG2DIM>
{
    using BuildType = ValueOnIndexMask;
    MaskT<LOG2DIM> mMask;
    __hostdev__ static uint64_t memUsage() { return sizeof(LeafData); }
    __hostdev__ bool isMaskOn(uint32_t offset) const { return mMask.isOn(offset); }
    __hostdev__ void setMask(uint32_t offset, bool v) { mMask.set(offset, v); }
}; // LeafData<ValueOnIndexMask>

// --------------------------> LeafData<Point> <------------------------------------

template<typename CoordT, template<uint32_t> class MaskT, uint32_t LOG2DIM>
struct NANOVDB_ALIGN(NANOVDB_DATA_ALIGNMENT) LeafData<Point, CoordT, MaskT, LOG2DIM>
{
    static_assert(sizeof(CoordT) == sizeof(Coord), "Mismatching sizeof");
    static_assert(sizeof(MaskT<LOG2DIM>) == sizeof(Mask<LOG2DIM>), "Mismatching sizeof");
    using ValueType = uint64_t;
    using BuildType = Point;
    using FloatType = typename FloatTraits<ValueType>::FloatType;
    using ArrayType = uint16_t; // type used for the internal mValue array
    static constexpr bool FIXED_SIZE = true;

    CoordT mBBoxMin; // 12B.
    uint8_t mBBoxDif[3]; // 3B.
    uint8_t mFlags; // 1B. bit0: skip render?, bit1: has bbox?, bit3: unused, bit4: has stats, bits5,6,7: bit-width for FpN
    MaskT<LOG2DIM> mValueMask; // LOG2DIM(3): 64B.

    uint64_t mOffset; // 8B
    uint64_t mPointCount; // 8B
    alignas(32) uint16_t mValues[1u << 3 * LOG2DIM]; // 1KB
    // no padding

    /// @brief Return padding of this class in bytes, due to aliasing and 32B alignment
    ///
    /// @note The extra bytes are not necessarily at the end, but can come from aliasing of individual data members.
    __hostdev__ static constexpr uint32_t padding()
    {
        return sizeof(LeafData) - (12u + 3u + 1u + sizeof(MaskT<LOG2DIM>) + 2 * 8u + (1u << 3 * LOG2DIM) * 2u);
    }
    __hostdev__ static uint64_t memUsage() { return sizeof(LeafData); }

    __hostdev__ uint64_t offset() const { return mOffset; }
    __hostdev__ uint64_t pointCount() const { return mPointCount; }
    __hostdev__ uint64_t first(uint32_t i) const { return i ? uint64_t(mValues[i - 1u]) + mOffset : mOffset; }
    __hostdev__ uint64_t last(uint32_t i) const { return uint64_t(mValues[i]) + mOffset; }
    __hostdev__ uint64_t getValue(uint32_t i) const { return uint64_t(mValues[i]); }
    __hostdev__ void setValueOnly(uint32_t offset, uint16_t value) { mValues[offset] = value; }
    __hostdev__ void setValue(uint32_t offset, uint16_t value)
    {
        mValueMask.setOn(offset);
        mValues[offset] = value;
    }
    __hostdev__ void setOn(uint32_t offset) { mValueMask.setOn(offset); }

    __hostdev__ ValueType getMin() const { return mOffset; }
    __hostdev__ ValueType getMax() const { return mPointCount; }
    __hostdev__ FloatType getAvg() const { return 0.0f; }
    __hostdev__ FloatType getDev() const { return 0.0f; }

    __hostdev__ void setMin(const ValueType&) {}
    __hostdev__ void setMax(const ValueType&) {}
    __hostdev__ void setAvg(const FloatType&) {}
    __hostdev__ void setDev(const FloatType&) {}

    template<typename T>
    __hostdev__ void setOrigin(const T& ijk) { mBBoxMin = ijk; }

    /// @brief This class cannot be constructed or deleted
    LeafData() = delete;
    LeafData(const LeafData&) = delete;
    LeafData& operator=(const LeafData&) = delete;
    ~LeafData() = delete;
}; // LeafData<Point>

// --------------------------> LeafNode<T> <------------------------------------

/// @brief Leaf nodes of the VDB tree. (defaults to 8x8x8 = 512 voxels)
template<typename BuildT,
         typename CoordT = Coord,
         template<uint32_t> class MaskT = Mask,
         uint32_t Log2Dim = 3>
class LeafNode : public LeafData<BuildT, CoordT, MaskT, Log2Dim>
{
public:
    struct ChildNodeType
    {
        static constexpr uint32_t TOTAL = 0;
        static constexpr uint32_t DIM = 1;
        __hostdev__ static uint32_t dim() { return 1u; }
    }; // Voxel
    using DataType = LeafData<BuildT, CoordT, MaskT, Log2Dim>;
    using ValueType = typename DataType::ValueType;
    using FloatType = typename DataType::FloatType;
    using BuildType = typename DataType::BuildType;
    using CoordType = CoordT;
    static constexpr bool FIXED_SIZE = DataType::FIXED_SIZE;
    template<uint32_t LOG2>
    using MaskType = MaskT<LOG2>;
    template<bool ON>
    using MaskIterT = typename Mask<Log2Dim>::template Iterator<ON>;

    /// @brief Visits all active values in a leaf node
    class ValueOnIterator : public MaskIterT<true>
    {
        using BaseT = MaskIterT<true>;
        const LeafNode* mParent;

    public:
        __hostdev__ ValueOnIterator()
            : BaseT()
            , mParent(nullptr)
        {
        }
        __hostdev__ ValueOnIterator(const LeafNode* parent)
            : BaseT(parent->data()->mValueMask.beginOn())
            , mParent(parent)
        {
        }
        ValueOnIterator& operator=(const ValueOnIterator&) = default;
        __hostdev__ ValueType operator*() const
        {
            NANOVDB_ASSERT(*this);
            return mParent->getValue(BaseT::pos());
        }
        __hostdev__ CoordT getCoord() const
        {
            NANOVDB_ASSERT(*this);
            return mParent->offsetToGlobalCoord(BaseT::pos());
        }
    }; // Member class ValueOnIterator

    __hostdev__ ValueOnIterator beginValueOn() const { return ValueOnIterator(this); }
    __hostdev__ ValueOnIterator cbeginValueOn() const { return ValueOnIterator(this); }

    /// @brief Visits all inactive values in a leaf node
    class ValueOffIterator : public MaskIterT<false>
    {
        using BaseT = MaskIterT<false>;
        const LeafNode* mParent;

    public:
        __hostdev__ ValueOffIterator()
            : BaseT()
            , mParent(nullptr)
        {
        }
        __hostdev__ ValueOffIterator(const LeafNode* parent)
            : BaseT(parent->data()->mValueMask.beginOff())
            , mParent(parent)
        {
        }
        ValueOffIterator& operator=(const ValueOffIterator&) = default;
        __hostdev__ ValueType operator*() const
        {
            NANOVDB_ASSERT(*this);
            return mParent->getValue(BaseT::pos());
        }
        __hostdev__ CoordT getCoord() const
        {
            NANOVDB_ASSERT(*this);
            return mParent->offsetToGlobalCoord(BaseT::pos());
        }
    }; // Member class ValueOffIterator

    __hostdev__ ValueOffIterator beginValueOff() const { return ValueOffIterator(this); }
    __hostdev__ ValueOffIterator cbeginValueOff() const { return ValueOffIterator(this); }

    /// @brief Visits all values in a leaf node, i.e. both active and inactive values
    class ValueIterator
    {
        const LeafNode* mParent;
        uint32_t mPos;

    public:
        __hostdev__ ValueIterator()
            : mParent(nullptr)
            , mPos(1u << 3 * Log2Dim)
        {
        }
        __hostdev__ ValueIterator(const LeafNode* parent)
            : mParent(parent)
            , mPos(0)
        {
            NANOVDB_ASSERT(parent);
        }
        ValueIterator& operator=(const ValueIterator&) = default;
        __hostdev__ ValueType operator*() const
        {
            NANOVDB_ASSERT(*this);
            return mParent->getValue(mPos);
        }
        __hostdev__ CoordT getCoord() const
        {
            NANOVDB_ASSERT(*this);
            return mParent->offsetToGlobalCoord(mPos);
        }
        __hostdev__ bool isActive() const
        {
            NANOVDB_ASSERT(*this);
            return mParent->isActive(mPos);
        }
        __hostdev__ operator bool() const { return mPos < (1u << 3 * Log2Dim); }
        __hostdev__ ValueIterator& operator++()
        {
            ++mPos;
            return *this;
        }
        __hostdev__ ValueIterator operator++(int)
        {
            auto tmp = *this;
            ++(*this);
            return tmp;
        }
    }; // Member class ValueIterator

    __hostdev__ ValueIterator beginValue() const { return ValueIterator(this); }
    __hostdev__ ValueIterator cbeginValueAll() const { return ValueIterator(this); }

    static_assert(is_same<ValueType, typename BuildToValueMap<BuildType>::Type>::value, "Mismatching BuildType");
    static constexpr uint32_t LOG2DIM = Log2Dim;
    static constexpr uint32_t TOTAL = LOG2DIM; // needed by parent nodes
    static constexpr uint32_t DIM = 1u << TOTAL; // number of voxels along each axis of this node
    static constexpr uint32_t SIZE = 1u << 3 * LOG2DIM; // total number of voxels represented by this node
    static constexpr uint32_t MASK = (1u << LOG2DIM) - 1u; // mask for bit operations
    static constexpr uint32_t LEVEL = 0; // level 0 = leaf
    static constexpr uint64_t NUM_VALUES = uint64_t(1) << (3 * TOTAL); // total voxel count represented by this node

    __hostdev__ DataType* data() { return reinterpret_cast<DataType*>(this); }

    __hostdev__ const DataType* data() const { return reinterpret_cast<const DataType*>(this); }

    /// @brief Return a const reference to the bit mask of active voxels in this leaf node
    __hostdev__ const MaskType<LOG2DIM>& valueMask() const { return DataType::mValueMask; }

    /// @brief Return the minimum active value encoded in this leaf node
    __hostdev__ ValueType minimum() const { return DataType::getMin(); }

    /// @brief Return the maximum active value encoded in this leaf node
    __hostdev__ ValueType maximum() const { return DataType::getMax(); }

    /// @brief Return the average of all the active values encoded in this leaf node
    __hostdev__ FloatType average() const { return DataType::getAvg(); }

    /// @brief Return the variance of all the active values encoded in this leaf node
    __hostdev__ FloatType variance() const { return Pow2(DataType::getDev()); }

    /// @brief Return the standard deviation of all the active values encoded in this leaf node
    __hostdev__ FloatType stdDeviation() const { return DataType::getDev(); }

    __hostdev__ uint8_t flags() const { return DataType::mFlags; }

    /// @brief Return the origin in index space of this leaf node
    __hostdev__ CoordT origin() const { return DataType::mBBoxMin & ~MASK; }

    /// @brief Compute the local coordinates from a linear offset
    /// @param n Linear offset into this node's dense table
    /// @return Local (vs global) 3D coordinates
    __hostdev__ static CoordT OffsetToLocalCoord(uint32_t n)
    {
        NANOVDB_ASSERT(n < SIZE);
        const uint32_t m = n & ((1 << 2 * LOG2DIM) - 1);
        return CoordT(n >> 2 * LOG2DIM, m >> LOG2DIM, m & MASK);
    }

    /// @brief Converts (in place) a local index coordinate to a global index coordinate
    __hostdev__ void localToGlobalCoord(Coord& ijk) const { ijk += this->origin(); }

    __hostdev__ CoordT offsetToGlobalCoord(uint32_t n) const
    {
        return OffsetToLocalCoord(n) + this->origin();
    }

    /// @brief Return the dimension, in index space, of this leaf node (typically 8, as for OpenVDB leaf nodes)
    __hostdev__ static uint32_t dim() { return 1u << LOG2DIM; }

    /// @brief Return the bounding box in index space of active values in this leaf node
    __hostdev__ BBox<CoordT> bbox() const
    {
        BBox<CoordT> bbox(DataType::mBBoxMin, DataType::mBBoxMin);
        if (this->hasBBox()) {
            bbox.max()[0] += DataType::mBBoxDif[0];
            bbox.max()[1] += DataType::mBBoxDif[1];
            bbox.max()[2] += DataType::mBBoxDif[2];
        } else { // very rare case
            bbox = BBox<CoordT>(); // invalid
        }
        return bbox;
    }

    /// @brief Return the total number of voxels (e.g. values) encoded in this leaf node
    __hostdev__ static uint32_t voxelCount() { return 1u << (3 * LOG2DIM); }

    __hostdev__ static uint32_t padding() { return DataType::padding(); }

    /// @brief Return memory usage in bytes for the leaf node
    __hostdev__ uint64_t memUsage() const { return DataType::memUsage(); }

    /// @brief This class cannot be constructed or deleted
    LeafNode() = delete;
    LeafNode(const LeafNode&) = delete;
    LeafNode& operator=(const LeafNode&) = delete;
    ~LeafNode() = delete;

    /// @brief Return the voxel value at the given offset.
    __hostdev__ ValueType getValue(uint32_t offset) const { return DataType::getValue(offset); }

    /// @brief Return the voxel value at the given coordinate.
    __hostdev__ ValueType getValue(const CoordT& ijk) const { return DataType::getValue(CoordToOffset(ijk)); }

    /// @brief Return the first value in this leaf node.
    __hostdev__ ValueType getFirstValue() const { return this->getValue(0); }
    /// @brief Return the last value in this leaf node.
    __hostdev__ ValueType getLastValue() const { return this->getValue(SIZE - 1); }

    /// @brief Sets the value at the specified location and activates its state.
    ///
    /// @note This is safe since it does not change the topology of the tree (unlike setValue methods on the other nodes)
    __hostdev__ void setValue(const CoordT& ijk, const ValueType& v) { DataType::setValue(CoordToOffset(ijk), v); }

    /// @brief Sets the value at the specified location but leaves its state unchanged.
    ///
    /// @note This is safe since it does not change the topology of the tree (unlike setValue methods on the other nodes)
    __hostdev__ void setValueOnly(uint32_t offset, const ValueType& v) { DataType::setValueOnly(offset, v); }
    __hostdev__ void setValueOnly(const CoordT& ijk, const ValueType& v) { DataType::setValueOnly(CoordToOffset(ijk), v); }

    /// @brief Return @c true if the voxel value at the given coordinate is active.
    __hostdev__ bool isActive(const CoordT& ijk) const { return DataType::mValueMask.isOn(CoordToOffset(ijk)); }
    __hostdev__ bool isActive(uint32_t n) const { return DataType::mValueMask.isOn(n); }

    /// @brief Return @c true if any of the voxel values are active in this leaf node.
    __hostdev__ bool isActive() const
    {
        //NANOVDB_ASSERT( bool(DataType::mFlags & uint8_t(2)) != DataType::mValueMask.isOff() );
        //return DataType::mFlags & uint8_t(2);
        return !DataType::mValueMask.isOff();
    }

    __hostdev__ bool hasBBox() const { return DataType::mFlags & uint8_t(2); }

    /// @brief Return @c true if the voxel value at the given coordinate is active, and update @c v with the value.
    __hostdev__ bool probeValue(const CoordT& ijk, ValueType& v) const
    {
        const uint32_t n = CoordToOffset(ijk);
        v = DataType::getValue(n);
        return DataType::mValueMask.isOn(n);
    }

    __hostdev__ const LeafNode* probeLeaf(const CoordT&) const { return this; }

    /// @brief Return the linear offset corresponding to the given coordinate
    __hostdev__ static uint32_t CoordToOffset(const CoordT& ijk)
    {
        return ((ijk[0] & MASK) << (2 * LOG2DIM)) | ((ijk[1] & MASK) << LOG2DIM) | (ijk[2] & MASK);
    }

    /// @brief Updates the local bounding box of active voxels in this node. Returns true if the bbox was updated.
    ///
    /// @warning It assumes that the origin and value mask have already been set.
    ///
    /// @details This method is based on a few (intrinsic) bit operations and hence is relatively fast.
    /// However, it should only be called if either the value mask has changed or if the
    /// active bounding box is still undefined, e.g. during construction of this node.
    __hostdev__ bool updateBBox();

    template<typename OpT, typename... ArgsT>
    __hostdev__ auto get(const CoordType& ijk, ArgsT&&... args) const
    {
        return OpT::get(*this, CoordToOffset(ijk), args...);
    }

    template<typename OpT, typename... ArgsT>
    __hostdev__ auto get(const uint32_t n, ArgsT&&... args) const
    {
        return OpT::get(*this, n, args...);
    }

    template<typename OpT, typename... ArgsT>
    __hostdev__ auto set(const CoordType& ijk, ArgsT&&... args)
    {
        return OpT::set(*this, CoordToOffset(ijk), args...);
    }

    template<typename OpT, typename... ArgsT>
    __hostdev__ auto set(const uint32_t n, ArgsT&&... args)
    {
        return OpT::set(*this, n, args...);
    }

private:
    static_assert(sizeof(DataType) % NANOVDB_DATA_ALIGNMENT == 0, "sizeof(LeafData) is misaligned");

    template<typename, int, int, int>
    friend class ReadAccessor;

    template<typename>
    friend class RootNode;
    template<typename, uint32_t>
    friend class InternalNode;

#ifndef NANOVDB_NEW_ACCESSOR_METHODS
    /// @brief Private method to return a voxel value and update a (dummy) ReadAccessor
    template<typename AccT>
    __hostdev__ ValueType getValueAndCache(const CoordT& ijk, const AccT&) const { return this->getValue(ijk); }

    /// @brief Return the node information.
    template<typename AccT>
    __hostdev__ typename AccT::NodeInfo getNodeInfoAndCache(const CoordType& /*ijk*/, const AccT& /*acc*/) const
    {
        using NodeInfoT = typename AccT::NodeInfo;
        return NodeInfoT{LEVEL, this->dim(), this->minimum(), this->maximum(), this->average(), this->stdDeviation(), this->bbox()[0], this->bbox()[1]};
    }

    template<typename AccT>
    __hostdev__ bool isActiveAndCache(const CoordT& ijk, const AccT&) const { return this->isActive(ijk); }

    template<typename AccT>
    __hostdev__ bool probeValueAndCache(const CoordT& ijk, ValueType& v, const AccT&) const { return this->probeValue(ijk, v); }

    template<typename AccT>
    __hostdev__ const LeafNode* probeLeafAndCache(const CoordT&, const AccT&) const { return this; }
#endif

    template<typename RayT, typename AccT>
    __hostdev__ uint32_t getDimAndCache(const CoordT&, const RayT& /*ray*/, const AccT&) const
    {
        if (DataType::mFlags & uint8_t(1u))
            return this->dim(); // skip this node if the 1st bit is set

        //if (!ray.intersects( this->bbox() )) return 1 << LOG2DIM;
        return ChildNodeType::dim();
    }

    template<typename OpT, typename AccT, typename... ArgsT>
    __hostdev__ auto
    //__hostdev__ decltype(OpT::get(std::declval<const LeafNode&>(), std::declval<uint32_t>(), std::declval<ArgsT>()...))
    getAndCache(const CoordType& ijk, const AccT&, ArgsT&&... args) const
    {
        return OpT::get(*this, CoordToOffset(ijk), args...);
    }

    template<typename OpT, typename AccT, typename... ArgsT>
    __hostdev__ auto // occasionally fails with NVCC
    // __hostdev__ decltype(OpT::set(std::declval<LeafNode&>(), std::declval<uint32_t>(), std::declval<ArgsT>()...))
    setAndCache(const CoordType& ijk, const AccT&, ArgsT&&... args)
    {
        return OpT::set(*this, CoordToOffset(ijk), args...);
    }

}; // LeafNode class
6383 
6384 // --------------------------> LeafNode<T>::updateBBox <------------------------------------
6385 
6386 template<typename ValueT, typename CoordT, template<uint32_t> class MaskT, uint32_t LOG2DIM>
6387 __hostdev__ inline bool LeafNode<ValueT, CoordT, MaskT, LOG2DIM>::updateBBox()
6388 {
6389  static_assert(LOG2DIM == 3, "LeafNode::updateBBox: only supports LOG2DIM = 3!");
6390  if (DataType::mValueMask.isOff()) {
6391  DataType::mFlags &= ~uint8_t(2); // set 2nd bit off, indicating that this node has no bbox
6392  return false;
6393  }
6394  auto update = [&](uint32_t min, uint32_t max, int axis) {
6395  NANOVDB_ASSERT(min <= max && max < 8);
6396  DataType::mBBoxMin[axis] = (DataType::mBBoxMin[axis] & ~MASK) + int(min);
6397  DataType::mBBoxDif[axis] = uint8_t(max - min);
6398  };
6399  uint64_t *w = DataType::mValueMask.words(), word64 = *w;
6400  uint32_t Xmin = word64 ? 0u : 8u, Xmax = Xmin;
6401  for (int i = 1; i < 8; ++i) { // loop over the remaining 7 of the 8 64-bit words
6402  if (w[i]) { // skip if word has no set bits
6403  word64 |= w[i]; // union 8 x 64 bits words into one 64 bit word
6404  if (Xmin == 8)
6405  Xmin = i; // only set once
6406  Xmax = i;
6407  }
6408  }
6409  NANOVDB_ASSERT(word64);
6410  update(Xmin, Xmax, 0);
6411  update(FindLowestOn(word64) >> 3, FindHighestOn(word64) >> 3, 1);
6412  const uint32_t *p = reinterpret_cast<const uint32_t*>(&word64), word32 = p[0] | p[1];
6413  const uint16_t *q = reinterpret_cast<const uint16_t*>(&word32), word16 = q[0] | q[1];
6414  const uint8_t * b = reinterpret_cast<const uint8_t*>(&word16), byte = b[0] | b[1];
6416  update(FindLowestOn(static_cast<uint32_t>(byte)), FindHighestOn(static_cast<uint32_t>(byte)), 2);
6417  DataType::mFlags |= uint8_t(2); // set 2nd bit on, indicating that this node has a bbox
6418  return true;
6419 } // LeafNode::updateBBox
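The OR-folding above can be sketched in isolation. The following standalone snippet (`Range`, `yRange`, and `zRange` are illustrative names, not NanoVDB API) assumes the leaf's 8x8x8 value mask packs one 64-bit word per x-slice with bit index `y*8 + z`, as updateBBox does, and recovers the occupied y- and z-ranges from a single (already x-folded) word:

```cpp
#include <cassert>
#include <cstdint>

// Standalone sketch of the OR-folding used by LeafNode::updateBBox, assuming
// an 8x8x8 leaf mask with one uint64_t word per x-slice and bit index y*8 + z.
struct Range { uint32_t min, max; };

static Range yRange(uint64_t word) // word = union of all 8 x-slice words
{
    const uint32_t lo = uint32_t(__builtin_ctzll(word));      // lowest set bit
    const uint32_t hi = uint32_t(63 - __builtin_clzll(word)); // highest set bit
    return {lo >> 3, hi >> 3}; // bit/8 gives the y coordinate
}

static Range zRange(uint64_t word)
{
    // Folding 64 -> 32 -> 16 -> 8 bits unions the 8 rows, so the set bits of
    // the resulting byte mark exactly the occupied z columns.
    const uint32_t w32 = uint32_t(word) | uint32_t(word >> 32);
    const uint16_t w16 = uint16_t(w32 | (w32 >> 16));
    const uint8_t  b   = uint8_t(w16 | (w16 >> 8));
    return {uint32_t(__builtin_ctz(b)), uint32_t(31 - __builtin_clz(b))};
}
```

With bits set at (y=2, z=3) and (y=5, z=6), i.e. bits 19 and 46, the y-range is [2, 5] and the z-range is [3, 6], matching what the `update` lambda receives for axes 1 and 2.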
6420 
6421 // --------------------------> Template specializations and traits <------------------------------------
6422 
6423 /// @brief Template specializations for the default configuration used in OpenVDB:
6424 /// Root -> 32^3 -> 16^3 -> 8^3
6425 template<typename BuildT>
6426 using NanoLeaf = LeafNode<BuildT, Coord, Mask, 3>;
6427 template<typename BuildT>
6428 using NanoLower = InternalNode<NanoLeaf<BuildT>, 4>;
6429 template<typename BuildT>
6430 using NanoUpper = InternalNode<NanoLower<BuildT>, 5>;
6431 template<typename BuildT>
6432 using NanoRoot = RootNode<NanoUpper<BuildT>>;
6433 template<typename BuildT>
6434 using NanoTree = Tree<NanoRoot<BuildT>>;
6435 template<typename BuildT>
6436 using NanoGrid = Grid<NanoTree<BuildT>>;
6437 
6438 /// @brief Trait to map from LEVEL to node type
6439 template<typename BuildT, int LEVEL>
6440 struct NanoNode;
6441 
6442 // Partial template specialization of above Node struct
6443 template<typename BuildT>
6444 struct NanoNode<BuildT, 0>
6445 {
6446  using Type = NanoLeaf<BuildT>;
6447  using type = NanoLeaf<BuildT>;
6448 };
6449 template<typename BuildT>
6450 struct NanoNode<BuildT, 1>
6451 {
6452  using Type = NanoLower<BuildT>;
6453  using type = NanoLower<BuildT>;
6454 };
6455 template<typename BuildT>
6456 struct NanoNode<BuildT, 2>
6457 {
6458  using Type = NanoUpper<BuildT>;
6459  using type = NanoUpper<BuildT>;
6460 };
6461 template<typename BuildT>
6462 struct NanoNode<BuildT, 3>
6463 {
6464  using Type = NanoRoot<BuildT>;
6465  using type = NanoRoot<BuildT>;
6466 };
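The NanoNode trait above is a standard level-to-type mapping via partial template specialization. A minimal self-contained analogue, with stand-in node types rather than the real NanoVDB classes:

```cpp
#include <cassert>
#include <type_traits>

// Illustrative stand-ins for the four node types of a NanoVDB-style tree.
struct Leaf {};
struct Lower {};
struct Upper {};
struct Root {};

template<int LEVEL> struct NodeAt; // primary template left undefined
template<> struct NodeAt<0> { using type = Leaf;  };
template<> struct NodeAt<1> { using type = Lower; };
template<> struct NodeAt<2> { using type = Upper; };
template<> struct NodeAt<3> { using type = Root;  };

// Any code templated on a level can now recover the node type at compile time
template<int LEVEL>
using NodeTypeAt = typename NodeAt<LEVEL>::type;
```

Leaving the primary template undefined makes an out-of-range level a compile-time error rather than a silent fallback, which is the same design choice the NanoNode trait makes.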
6467 
6488 
6510 
6511 // --------------------------> ReadAccessor <------------------------------------
6512 
6513 /// @brief A read-only value accessor with three levels of node caching. This allows for
6514 /// inverse tree traversal during lookup, which is on average significantly faster
6515 /// than calling the equivalent method on the tree (i.e. top-down traversal).
6516 ///
6517 /// @note Since a value accessor accelerates random access operations by re-using
6518 /// cached access patterns, the same accessor should be reused for multiple access
6519 /// operations. In other words, never create an instance of this accessor for a single
6520 /// access only. In general avoid single access operations with this accessor, and
6521 /// if that is not possible call the corresponding method on the tree instead.
6522 ///
6523 /// @warning Since this ReadAccessor internally caches raw pointers to the nodes of the tree
6524 /// structure, it is not safe to copy between host and device, or even to share among
6525 /// multiple threads on the same host or device. However, it is light-weight, so simply
6526 /// instantiate one per thread (on the host and/or device).
6527 ///
6528 /// @details Used to accelerate random access into a VDB tree. Provides on average
6529 /// O(1) random access operations by means of inverse tree traversal,
6530 /// which amortizes the non-constant time complexity of the root node.
6531 
6532 template<typename BuildT>
6533 class ReadAccessor<BuildT, -1, -1, -1>
6534 {
6535  using GridT = NanoGrid<BuildT>; // grid
6536  using TreeT = NanoTree<BuildT>; // tree
6537  using RootT = NanoRoot<BuildT>; // root node
6538  using LeafT = NanoLeaf<BuildT>; // Leaf node
6539  using FloatType = typename RootT::FloatType;
6540  using CoordValueType = typename RootT::CoordType::ValueType;
6541 
6542  mutable const RootT* mRoot; // 8 bytes (mutable to allow for access methods to be const)
6543 public:
6544  using BuildType = BuildT;
6545  using ValueType = typename RootT::ValueType;
6546  using CoordType = typename RootT::CoordType;
6547 
6548  static const int CacheLevels = 0;
6549 #ifndef NANOVDB_NEW_ACCESSOR_METHODS
6550  struct NodeInfo
6551  {
6552  uint32_t mLevel; // 4B
6553  uint32_t mDim; // 4B
6554  ValueType mMinimum; // typically 4B
6555  ValueType mMaximum; // typically 4B
6556  FloatType mAverage; // typically 4B
6557  FloatType mStdDevi; // typically 4B
6558  CoordType mBBoxMin; // 3*4B
6559  CoordType mBBoxMax; // 3*4B
6560  };
6561 #endif
6562  /// @brief Constructor from a root node
6563  __hostdev__ ReadAccessor(const RootT& root)
6564  : mRoot{&root}
6565  {
6566  }
6567 
6568  /// @brief Constructor from a grid
6569  __hostdev__ ReadAccessor(const GridT& grid)
6570  : ReadAccessor(grid.tree().root())
6571  {
6572  }
6573 
6574  /// @brief Constructor from a tree
6575  __hostdev__ ReadAccessor(const TreeT& tree)
6576  : ReadAccessor(tree.root())
6577  {
6578  }
6579 
6580  /// @brief Reset this accessor to its initial state, i.e. with an empty cache
6581  /// @note No-op since this template specialization has no cache
6582  __hostdev__ void clear() {}
6583 
6584  __hostdev__ const RootT& root() const { return *mRoot; }
6585 
6586  /// @brief Defaulted copy constructor, destructor, and assignment operator
6587  ReadAccessor(const ReadAccessor&) = default;
6588  ~ReadAccessor() = default;
6589  ReadAccessor& operator=(const ReadAccessor&) = default;
6590 #ifdef NANOVDB_NEW_ACCESSOR_METHODS
6591  __hostdev__ ValueType getValue(const CoordType& ijk) const
6592  {
6593  return this->template get<GetValue<BuildT>>(ijk);
6594  }
6595  __hostdev__ ValueType getValue(int i, int j, int k) const { return this->template get<GetValue<BuildT>>(CoordType(i, j, k)); }
6596  __hostdev__ ValueType operator()(const CoordType& ijk) const { return this->template get<GetValue<BuildT>>(ijk); }
6597  __hostdev__ ValueType operator()(int i, int j, int k) const { return this->template get<GetValue<BuildT>>(CoordType(i, j, k)); }
6598  __hostdev__ auto getNodeInfo(const CoordType& ijk) const { return this->template get<GetNodeInfo<BuildT>>(ijk); }
6599  __hostdev__ bool isActive(const CoordType& ijk) const { return this->template get<GetState<BuildT>>(ijk); }
6600  __hostdev__ bool probeValue(const CoordType& ijk, ValueType& v) const { return this->template get<ProbeValue<BuildT>>(ijk, v); }
6601  __hostdev__ const LeafT* probeLeaf(const CoordType& ijk) const { return this->template get<GetLeaf<BuildT>>(ijk); }
6602 #else // NANOVDB_NEW_ACCESSOR_METHODS
6603  __hostdev__ ValueType getValue(const CoordType& ijk) const
6604  {
6605  return mRoot->getValueAndCache(ijk, *this);
6606  }
6607  __hostdev__ ValueType getValue(int i, int j, int k) const
6608  {
6609  return this->getValue(CoordType(i, j, k));
6610  }
6611  __hostdev__ ValueType operator()(const CoordType& ijk) const
6612  {
6613  return this->getValue(ijk);
6614  }
6615  __hostdev__ ValueType operator()(int i, int j, int k) const
6616  {
6617  return this->getValue(CoordType(i, j, k));
6618  }
6619 
6620  __hostdev__ NodeInfo getNodeInfo(const CoordType& ijk) const
6621  {
6622  return mRoot->getNodeInfoAndCache(ijk, *this);
6623  }
6624 
6625  __hostdev__ bool isActive(const CoordType& ijk) const
6626  {
6627  return mRoot->isActiveAndCache(ijk, *this);
6628  }
6629 
6630  __hostdev__ bool probeValue(const CoordType& ijk, ValueType& v) const
6631  {
6632  return mRoot->probeValueAndCache(ijk, v, *this);
6633  }
6634 
6635  __hostdev__ const LeafT* probeLeaf(const CoordType& ijk) const
6636  {
6637  return mRoot->probeLeafAndCache(ijk, *this);
6638  }
6639 #endif // NANOVDB_NEW_ACCESSOR_METHODS
6640  template<typename RayT>
6641  __hostdev__ uint32_t getDim(const CoordType& ijk, const RayT& ray) const
6642  {
6643  return mRoot->getDimAndCache(ijk, ray, *this);
6644  }
6645  template<typename OpT, typename... ArgsT>
6646  __hostdev__ auto get(const CoordType& ijk, ArgsT&&... args) const
6647  {
6648  return mRoot->template get<OpT>(ijk, args...);
6649  }
6650 
6651  template<typename OpT, typename... ArgsT>
6652  __hostdev__ auto set(const CoordType& ijk, ArgsT&&... args) const
6653  {
6654  return const_cast<RootT*>(mRoot)->template set<OpT>(ijk, args...);
6655  }
6656 
6657 private:
6658  /// @brief Allow nodes to insert themselves into the cache.
6659  template<typename>
6660  friend class RootNode;
6661  template<typename, uint32_t>
6662  friend class InternalNode;
6663  template<typename, typename, template<uint32_t> class, uint32_t>
6664  friend class LeafNode;
6665 
6666  /// @brief No-op
6667  template<typename NodeT>
6668  __hostdev__ void insert(const CoordType&, const NodeT*) const {}
6669 }; // ReadAccessor<ValueT, -1, -1, -1> class
6670 
6671 /// @brief Node caching at a single tree level
6672 template<typename BuildT, int LEVEL0>
6673 class ReadAccessor<BuildT, LEVEL0, -1, -1> //e.g. 0, 1, 2
6674 {
6675  static_assert(LEVEL0 >= 0 && LEVEL0 <= 2, "LEVEL0 should be 0, 1, or 2");
6676 
6677  using GridT = NanoGrid<BuildT>; // grid
6678  using TreeT = NanoTree<BuildT>;
6679  using RootT = NanoRoot<BuildT>; // root node
6680  using LeafT = NanoLeaf<BuildT>; // Leaf node
6681  using NodeT = typename NodeTrait<TreeT, LEVEL0>::type;
6682  using CoordT = typename RootT::CoordType;
6683  using ValueT = typename RootT::ValueType;
6684 
6685  using FloatType = typename RootT::FloatType;
6686  using CoordValueType = typename RootT::CoordT::ValueType;
6687 
6688  // All member data are mutable to allow for access methods to be const
6689  mutable CoordT mKey; // 3*4 = 12 bytes
6690  mutable const RootT* mRoot; // 8 bytes
6691  mutable const NodeT* mNode; // 8 bytes
6692 
6693 public:
6694  using BuildType = BuildT;
6695  using ValueType = ValueT;
6696  using CoordType = CoordT;
6697 
6698  static const int CacheLevels = 1;
6699 #ifndef NANOVDB_NEW_ACCESSOR_METHODS
6700  using NodeInfo = typename ReadAccessor<BuildT, -1, -1, -1>::NodeInfo;
6701 #endif
6702  /// @brief Constructor from a root node
6703  __hostdev__ ReadAccessor(const RootT& root)
6704  : mKey(CoordType::max())
6705  , mRoot(&root)
6706  , mNode(nullptr)
6707  {
6708  }
6709 
6710  /// @brief Constructor from a grid
6711  __hostdev__ ReadAccessor(const GridT& grid)
6712  : ReadAccessor(grid.tree().root())
6713  {
6714  }
6715 
6716  /// @brief Constructor from a tree
6717  __hostdev__ ReadAccessor(const TreeT& tree)
6718  : ReadAccessor(tree.root())
6719  {
6720  }
6721 
6722  /// @brief Reset this accessor to its initial state, i.e. with an empty cache
6723  __hostdev__ void clear()
6724  {
6725  mKey = CoordType::max();
6726  mNode = nullptr;
6727  }
6728 
6729  __hostdev__ const RootT& root() const { return *mRoot; }
6730 
6731  /// @brief Defaulted copy constructor, destructor, and assignment operator
6732  ReadAccessor(const ReadAccessor&) = default;
6733  ~ReadAccessor() = default;
6734  ReadAccessor& operator=(const ReadAccessor&) = default;
6735 
6736  __hostdev__ bool isCached(const CoordType& ijk) const
6737  {
6738  return (ijk[0] & int32_t(~NodeT::MASK)) == mKey[0] &&
6739  (ijk[1] & int32_t(~NodeT::MASK)) == mKey[1] &&
6740  (ijk[2] & int32_t(~NodeT::MASK)) == mKey[2];
6741  }
6742 
6743 #ifdef NANOVDB_NEW_ACCESSOR_METHODS
6744  __hostdev__ ValueType getValue(const CoordType& ijk) const
6745  {
6746  return this->template get<GetValue<BuildT>>(ijk);
6747  }
6748  __hostdev__ ValueType getValue(int i, int j, int k) const { return this->template get<GetValue<BuildT>>(CoordType(i, j, k)); }
6749  __hostdev__ ValueType operator()(const CoordType& ijk) const { return this->template get<GetValue<BuildT>>(ijk); }
6750  __hostdev__ ValueType operator()(int i, int j, int k) const { return this->template get<GetValue<BuildT>>(CoordType(i, j, k)); }
6751  __hostdev__ auto getNodeInfo(const CoordType& ijk) const { return this->template get<GetNodeInfo<BuildT>>(ijk); }
6752  __hostdev__ bool isActive(const CoordType& ijk) const { return this->template get<GetState<BuildT>>(ijk); }
6753  __hostdev__ bool probeValue(const CoordType& ijk, ValueType& v) const { return this->template get<ProbeValue<BuildT>>(ijk, v); }
6754  __hostdev__ const LeafT* probeLeaf(const CoordType& ijk) const { return this->template get<GetLeaf<BuildT>>(ijk); }
6755 #else // NANOVDB_NEW_ACCESSOR_METHODS
6756  __hostdev__ ValueType getValue(const CoordType& ijk) const
6757  {
6758  if (this->isCached(ijk))
6759  return mNode->getValueAndCache(ijk, *this);
6760  return mRoot->getValueAndCache(ijk, *this);
6761  }
6762  __hostdev__ ValueType getValue(int i, int j, int k) const
6763  {
6764  return this->getValue(CoordType(i, j, k));
6765  }
6766  __hostdev__ ValueType operator()(const CoordType& ijk) const
6767  {
6768  return this->getValue(ijk);
6769  }
6770  __hostdev__ ValueType operator()(int i, int j, int k) const
6771  {
6772  return this->getValue(CoordType(i, j, k));
6773  }
6774 
6775  __hostdev__ NodeInfo getNodeInfo(const CoordType& ijk) const
6776  {
6777  if (this->isCached(ijk))
6778  return mNode->getNodeInfoAndCache(ijk, *this);
6779  return mRoot->getNodeInfoAndCache(ijk, *this);
6780  }
6781 
6782  __hostdev__ bool isActive(const CoordType& ijk) const
6783  {
6784  if (this->isCached(ijk))
6785  return mNode->isActiveAndCache(ijk, *this);
6786  return mRoot->isActiveAndCache(ijk, *this);
6787  }
6788 
6789  __hostdev__ bool probeValue(const CoordType& ijk, ValueType& v) const
6790  {
6791  if (this->isCached(ijk))
6792  return mNode->probeValueAndCache(ijk, v, *this);
6793  return mRoot->probeValueAndCache(ijk, v, *this);
6794  }
6795 
6796  __hostdev__ const LeafT* probeLeaf(const CoordType& ijk) const
6797  {
6798  if (this->isCached(ijk))
6799  return mNode->probeLeafAndCache(ijk, *this);
6800  return mRoot->probeLeafAndCache(ijk, *this);
6801  }
6802 #endif // NANOVDB_NEW_ACCESSOR_METHODS
6803  template<typename RayT>
6804  __hostdev__ uint32_t getDim(const CoordType& ijk, const RayT& ray) const
6805  {
6806  if (this->isCached(ijk))
6807  return mNode->getDimAndCache(ijk, ray, *this);
6808  return mRoot->getDimAndCache(ijk, ray, *this);
6809  }
6810 
6811  template<typename OpT, typename... ArgsT>
6812  __hostdev__ auto get(const CoordType& ijk, ArgsT&&... args) const
6813  {
6814  if (this->isCached(ijk))
6815  return mNode->template getAndCache<OpT>(ijk, *this, args...);
6816  return mRoot->template getAndCache<OpT>(ijk, *this, args...);
6817  }
6818 
6819  template<typename OpT, typename... ArgsT>
6820  __hostdev__ auto set(const CoordType& ijk, ArgsT&&... args) const
6821  {
6822  if (this->isCached(ijk))
6823  return const_cast<NodeT*>(mNode)->template setAndCache<OpT>(ijk, *this, args...);
6824  return const_cast<RootT*>(mRoot)->template setAndCache<OpT>(ijk, *this, args...);
6825  }
6826 
6827 private:
6828  /// @brief Allow nodes to insert themselves into the cache.
6829  template<typename>
6830  friend class RootNode;
6831  template<typename, uint32_t>
6832  friend class InternalNode;
6833  template<typename, typename, template<uint32_t> class, uint32_t>
6834  friend class LeafNode;
6835 
6836  /// @brief Inserts a node and its key into this ReadAccessor's cache
6837  __hostdev__ void insert(const CoordType& ijk, const NodeT* node) const
6838  {
6839  mKey = ijk & ~NodeT::MASK;
6840  mNode = node;
6841  }
6842 
6843  // no-op
6844  template<typename OtherNodeT>
6845  __hostdev__ void insert(const CoordType&, const OtherNodeT*) const {}
6846 
6847 }; // ReadAccessor<ValueT, LEVEL0>
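The `isCached` test above relies on `ijk & ~MASK` identifying the origin of the node containing `ijk`. A self-contained sketch of that mechanism (`Coord`, `cacheKey`, and `isCached` here are illustrative stand-ins, with `LOG2 = 3` modeling an 8^3 leaf node):

```cpp
#include <cassert>
#include <cstdint>

constexpr uint32_t LOG2 = 3;               // node spans 2^LOG2 voxels per axis
constexpr int32_t  MASK = (1 << LOG2) - 1; // 7 for an 8^3 node

struct Coord { int32_t x, y, z; };

// Key stored when a node is inserted into the cache: the node's origin.
inline Coord cacheKey(const Coord& ijk)
{
    return {ijk.x & ~MASK, ijk.y & ~MASK, ijk.z & ~MASK};
}

// True iff ijk falls inside the cached node. Note that & ~MASK floors toward
// the node origin for negative coordinates too (two's complement).
inline bool isCached(const Coord& key, const Coord& ijk)
{
    return (ijk.x & ~MASK) == key.x &&
           (ijk.y & ~MASK) == key.y &&
           (ijk.z & ~MASK) == key.z;
}
```

For example, caching a node at query (9, -1, 17) stores the key (8, -8, 16); a subsequent query at (15, -8, 23) hits the cache, while (16, -8, 23) crosses the node boundary on x and falls through to the root.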
6848 
6849 template<typename BuildT, int LEVEL0, int LEVEL1>
6850 class ReadAccessor<BuildT, LEVEL0, LEVEL1, -1> //e.g. (0,1), (1,2), (0,2)
6851 {
6852  static_assert(LEVEL0 >= 0 && LEVEL0 <= 2, "LEVEL0 must be 0, 1, 2");
6853  static_assert(LEVEL1 >= 0 && LEVEL1 <= 2, "LEVEL1 must be 0, 1, 2");
6854  static_assert(LEVEL0 < LEVEL1, "Level 0 must be lower than level 1");
6855  using GridT = NanoGrid<BuildT>; // grid
6856  using TreeT = NanoTree<BuildT>;
6857  using RootT = NanoRoot<BuildT>;
6858  using LeafT = NanoLeaf<BuildT>;
6859  using Node1T = typename NodeTrait<TreeT, LEVEL0>::type;
6860  using Node2T = typename NodeTrait<TreeT, LEVEL1>::type;
6861  using CoordT = typename RootT::CoordType;
6862  using ValueT = typename RootT::ValueType;
6863  using FloatType = typename RootT::FloatType;
6864  using CoordValueType = typename RootT::CoordT::ValueType;
6865 
6866  // All member data are mutable to allow for access methods to be const
6867 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY // 3*4 + 3*8 = 36 bytes, excluding padding
6868  mutable CoordT mKey; // 3*4 = 12 bytes
6869 #else // 2*3*4 + 3*8 = 48 bytes, excluding padding
6870  mutable CoordT mKeys[2]; // 2*3*4 = 24 bytes
6871 #endif
6872  mutable const RootT* mRoot;
6873  mutable const Node1T* mNode1;
6874  mutable const Node2T* mNode2;
6875 
6876 public:
6877  using BuildType = BuildT;
6878  using ValueType = ValueT;
6879  using CoordType = CoordT;
6880 
6881  static const int CacheLevels = 2;
6882 #ifndef NANOVDB_NEW_ACCESSOR_METHODS
6883  using NodeInfo = typename ReadAccessor<BuildT, -1, -1, -1>::NodeInfo;
6884 #endif
6885  /// @brief Constructor from a root node
6886  __hostdev__ ReadAccessor(const RootT& root)
6887 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY
6888  : mKey(CoordType::max())
6889 #else
6890  : mKeys{CoordType::max(), CoordType::max()}
6891 #endif
6892  , mRoot(&root)
6893  , mNode1(nullptr)
6894  , mNode2(nullptr)
6895  {
6896  }
6897 
6898  /// @brief Constructor from a grid
6899  __hostdev__ ReadAccessor(const GridT& grid)
6900  : ReadAccessor(grid.tree().root())
6901  {
6902  }
6903 
6904  /// @brief Constructor from a tree
6905  __hostdev__ ReadAccessor(const TreeT& tree)
6906  : ReadAccessor(tree.root())
6907  {
6908  }
6909 
6910  /// @brief Reset this accessor to its initial state, i.e. with an empty cache
6911  __hostdev__ void clear()
6912  {
6913 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY
6914  mKey = CoordType::max();
6915 #else
6916  mKeys[0] = mKeys[1] = CoordType::max();
6917 #endif
6918  mNode1 = nullptr;
6919  mNode2 = nullptr;
6920  }
6921 
6922  __hostdev__ const RootT& root() const { return *mRoot; }
6923 
6924  /// @brief Defaulted copy constructor, destructor, and assignment operator
6925  ReadAccessor(const ReadAccessor&) = default;
6926  ~ReadAccessor() = default;
6927  ReadAccessor& operator=(const ReadAccessor&) = default;
6928 
6929 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY
6930  __hostdev__ bool isCached1(CoordValueType dirty) const
6931  {
6932  if (!mNode1)
6933  return false;
6934  if (dirty & int32_t(~Node1T::MASK)) {
6935  mNode1 = nullptr;
6936  return false;
6937  }
6938  return true;
6939  }
6940  __hostdev__ bool isCached2(CoordValueType dirty) const
6941  {
6942  if (!mNode2)
6943  return false;
6944  if (dirty & int32_t(~Node2T::MASK)) {
6945  mNode2 = nullptr;
6946  return false;
6947  }
6948  return true;
6949  }
6950  __hostdev__ CoordValueType computeDirty(const CoordType& ijk) const
6951  {
6952  return (ijk[0] ^ mKey[0]) | (ijk[1] ^ mKey[1]) | (ijk[2] ^ mKey[2]);
6953  }
6954 #else
6955  __hostdev__ bool isCached1(const CoordType& ijk) const
6956  {
6957  return (ijk[0] & int32_t(~Node1T::MASK)) == mKeys[0][0] &&
6958  (ijk[1] & int32_t(~Node1T::MASK)) == mKeys[0][1] &&
6959  (ijk[2] & int32_t(~Node1T::MASK)) == mKeys[0][2];
6960  }
6961  __hostdev__ bool isCached2(const CoordType& ijk) const
6962  {
6963  return (ijk[0] & int32_t(~Node2T::MASK)) == mKeys[1][0] &&
6964  (ijk[1] & int32_t(~Node2T::MASK)) == mKeys[1][1] &&
6965  (ijk[2] & int32_t(~Node2T::MASK)) == mKeys[1][2];
6966  }
6967 #endif
6968 
6969 #ifdef NANOVDB_NEW_ACCESSOR_METHODS
6970  __hostdev__ ValueType getValue(const CoordType& ijk) const
6971  {
6972  return this->template get<GetValue<BuildT>>(ijk);
6973  }
6974  __hostdev__ ValueType getValue(int i, int j, int k) const { return this->template get<GetValue<BuildT>>(CoordType(i, j, k)); }
6975  __hostdev__ ValueType operator()(const CoordType& ijk) const { return this->template get<GetValue<BuildT>>(ijk); }
6976  __hostdev__ ValueType operator()(int i, int j, int k) const { return this->template get<GetValue<BuildT>>(CoordType(i, j, k)); }
6977  __hostdev__ auto getNodeInfo(const CoordType& ijk) const { return this->template get<GetNodeInfo<BuildT>>(ijk); }
6978  __hostdev__ bool isActive(const CoordType& ijk) const { return this->template get<GetState<BuildT>>(ijk); }
6979  __hostdev__ bool probeValue(const CoordType& ijk, ValueType& v) const { return this->template get<ProbeValue<BuildT>>(ijk, v); }
6980  __hostdev__ const LeafT* probeLeaf(const CoordType& ijk) const { return this->template get<GetLeaf<BuildT>>(ijk); }
6981 #else // NANOVDB_NEW_ACCESSOR_METHODS
6982 
6983  __hostdev__ ValueType getValue(const CoordType& ijk) const
6984  {
6985 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY
6986  const CoordValueType dirty = this->computeDirty(ijk);
6987 #else
6988  auto&& dirty = ijk;
6989 #endif
6990  if (this->isCached1(dirty)) {
6991  return mNode1->getValueAndCache(ijk, *this);
6992  } else if (this->isCached2(dirty)) {
6993  return mNode2->getValueAndCache(ijk, *this);
6994  }
6995  return mRoot->getValueAndCache(ijk, *this);
6996  }
6997  __hostdev__ ValueType operator()(const CoordType& ijk) const
6998  {
6999  return this->getValue(ijk);
7000  }
7001  __hostdev__ ValueType operator()(int i, int j, int k) const
7002  {
7003  return this->getValue(CoordType(i, j, k));
7004  }
7005  __hostdev__ ValueType getValue(int i, int j, int k) const
7006  {
7007  return this->getValue(CoordType(i, j, k));
7008  }
7009  __hostdev__ NodeInfo getNodeInfo(const CoordType& ijk) const
7010  {
7011 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY
7012  const CoordValueType dirty = this->computeDirty(ijk);
7013 #else
7014  auto&& dirty = ijk;
7015 #endif
7016  if (this->isCached1(dirty)) {
7017  return mNode1->getNodeInfoAndCache(ijk, *this);
7018  } else if (this->isCached2(dirty)) {
7019  return mNode2->getNodeInfoAndCache(ijk, *this);
7020  }
7021  return mRoot->getNodeInfoAndCache(ijk, *this);
7022  }
7023 
7024  __hostdev__ bool isActive(const CoordType& ijk) const
7025  {
7026 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY
7027  const CoordValueType dirty = this->computeDirty(ijk);
7028 #else
7029  auto&& dirty = ijk;
7030 #endif
7031  if (this->isCached1(dirty)) {
7032  return mNode1->isActiveAndCache(ijk, *this);
7033  } else if (this->isCached2(dirty)) {
7034  return mNode2->isActiveAndCache(ijk, *this);
7035  }
7036  return mRoot->isActiveAndCache(ijk, *this);
7037  }
7038 
7039  __hostdev__ bool probeValue(const CoordType& ijk, ValueType& v) const
7040  {
7041 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY
7042  const CoordValueType dirty = this->computeDirty(ijk);
7043 #else
7044  auto&& dirty = ijk;
7045 #endif
7046  if (this->isCached1(dirty)) {
7047  return mNode1->probeValueAndCache(ijk, v, *this);
7048  } else if (this->isCached2(dirty)) {
7049  return mNode2->probeValueAndCache(ijk, v, *this);
7050  }
7051  return mRoot->probeValueAndCache(ijk, v, *this);
7052  }
7053 
7054  __hostdev__ const LeafT* probeLeaf(const CoordType& ijk) const
7055  {
7056 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY
7057  const CoordValueType dirty = this->computeDirty(ijk);
7058 #else
7059  auto&& dirty = ijk;
7060 #endif
7061  if (this->isCached1(dirty)) {
7062  return mNode1->probeLeafAndCache(ijk, *this);
7063  } else if (this->isCached2(dirty)) {
7064  return mNode2->probeLeafAndCache(ijk, *this);
7065  }
7066  return mRoot->probeLeafAndCache(ijk, *this);
7067  }
7068 #endif // NANOVDB_NEW_ACCESSOR_METHODS
7069 
7070  template<typename RayT>
7071  __hostdev__ uint32_t getDim(const CoordType& ijk, const RayT& ray) const
7072  {
7073 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY
7074  const CoordValueType dirty = this->computeDirty(ijk);
7075 #else
7076  auto&& dirty = ijk;
7077 #endif
7078  if (this->isCached1(dirty)) {
7079  return mNode1->getDimAndCache(ijk, ray, *this);
7080  } else if (this->isCached2(dirty)) {
7081  return mNode2->getDimAndCache(ijk, ray, *this);
7082  }
7083  return mRoot->getDimAndCache(ijk, ray, *this);
7084  }
7085 
7086  template<typename OpT, typename... ArgsT>
7087  __hostdev__ auto get(const CoordType& ijk, ArgsT&&... args) const
7088  {
7089 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY
7090  const CoordValueType dirty = this->computeDirty(ijk);
7091 #else
7092  auto&& dirty = ijk;
7093 #endif
7094  if (this->isCached1(dirty)) {
7095  return mNode1->template getAndCache<OpT>(ijk, *this, args...);
7096  } else if (this->isCached2(dirty)) {
7097  return mNode2->template getAndCache<OpT>(ijk, *this, args...);
7098  }
7099  return mRoot->template getAndCache<OpT>(ijk, *this, args...);
7100  }
7101 
7102  template<typename OpT, typename... ArgsT>
7103  __hostdev__ auto set(const CoordType& ijk, ArgsT&&... args) const
7104  {
7105 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY
7106  const CoordValueType dirty = this->computeDirty(ijk);
7107 #else
7108  auto&& dirty = ijk;
7109 #endif
7110  if (this->isCached1(dirty)) {
7111  return const_cast<Node1T*>(mNode1)->template setAndCache<OpT>(ijk, *this, args...);
7112  } else if (this->isCached2(dirty)) {
7113  return const_cast<Node2T*>(mNode2)->template setAndCache<OpT>(ijk, *this, args...);
7114  }
7115  return const_cast<RootT*>(mRoot)->template setAndCache<OpT>(ijk, *this, args...);
7116  }
7117 
7118 private:
7119  /// @brief Allow nodes to insert themselves into the cache.
7120  template<typename>
7121  friend class RootNode;
7122  template<typename, uint32_t>
7123  friend class InternalNode;
7124  template<typename, typename, template<uint32_t> class, uint32_t>
7125  friend class LeafNode;
7126 
7127  /// @brief Inserts a node and its key into this ReadAccessor's cache
7128  __hostdev__ void insert(const CoordType& ijk, const Node1T* node) const
7129  {
7130 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY
7131  mKey = ijk;
7132 #else
7133  mKeys[0] = ijk & ~Node1T::MASK;
7134 #endif
7135  mNode1 = node;
7136  }
7137  __hostdev__ void insert(const CoordType& ijk, const Node2T* node) const
7138  {
7139 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY
7140  mKey = ijk;
7141 #else
7142  mKeys[1] = ijk & ~Node2T::MASK;
7143 #endif
7144  mNode2 = node;
7145  }
7146  template<typename OtherNodeT>
7147  __hostdev__ void insert(const CoordType&, const OtherNodeT*) const {}
7148 }; // ReadAccessor<BuildT, LEVEL0, LEVEL1>
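When NANOVDB_USE_SINGLE_ACCESSOR_KEY is defined, the class above replaces per-level keys with one stored coordinate plus an XOR-derived "dirty" word. A self-contained sketch of that test (`computeDirty` mirrors the method above; `stillCached` is an illustrative stand-in that omits the pointer invalidation done by isCached1/isCached2):

```cpp
#include <cassert>
#include <cstdint>

// The accessor stores only the last cached query coordinate. XORing it with a
// new query and ORing the three axes collects every differing bit in one word.
inline int32_t computeDirty(const int32_t ijk[3], const int32_t key[3])
{
    return (ijk[0] ^ key[0]) | (ijk[1] ^ key[1]) | (ijk[2] ^ key[2]);
}

// A node whose coordinate mask is `mask` is still cached iff no differing bit
// lies above the mask, so a single 12-byte key serves every cached level.
inline bool stillCached(int32_t dirty, int32_t mask)
{
    return (dirty & ~mask) == 0; // all differing bits are inside the node
}
```

For example, with key (8, -8, 16), a query at (40, -8, 16) differs only in bit 5, so it misses an 8^3 leaf (mask 7) but still hits a cached 128-wide internal node (mask 127); this per-level reuse of one key is the point of the single-key variant.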
7149 
7150 /// @brief Node caching at all (three) tree levels
7151 template<typename BuildT>
7152 class ReadAccessor<BuildT, 0, 1, 2>
7153 {
7154  using GridT = NanoGrid<BuildT>; // grid
7155  using TreeT = NanoTree<BuildT>;
7156  using RootT = NanoRoot<BuildT>; // root node
7157  using NodeT2 = NanoUpper<BuildT>; // upper internal node
7158  using NodeT1 = NanoLower<BuildT>; // lower internal node
7159  using LeafT = NanoLeaf<BuildT>; // Leaf node
7160  using CoordT = typename RootT::CoordType;
7161  using ValueT = typename RootT::ValueType;
7162 
7163  using FloatType = typename RootT::FloatType;
7164  using CoordValueType = typename RootT::CoordT::ValueType;
7165 
7166  // All member data are mutable to allow for access methods to be const
7167 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY // 44 bytes total
7168  mutable CoordT mKey; // 3*4 = 12 bytes
7169 #else // 68 bytes total
7170  mutable CoordT mKeys[3]; // 3*3*4 = 36 bytes
7171 #endif
7172  mutable const RootT* mRoot;
7173  mutable const void* mNode[3]; // 3*8 = 24 bytes
7174 
7175 public:
7176  using BuildType = BuildT;
7177  using ValueType = ValueT;
7178  using CoordType = CoordT;
7179 
7180  static const int CacheLevels = 3;
7181 #ifndef NANOVDB_NEW_ACCESSOR_METHODS
7182  using NodeInfo = typename ReadAccessor<BuildT, -1, -1, -1>::NodeInfo;
7183 #endif
7184  /// @brief Constructor from a root node
7185  __hostdev__ ReadAccessor(const RootT& root)
7186 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY
7187  : mKey(CoordType::max())
7188 #else
7189  : mKeys{CoordType::max(), CoordType::max(), CoordType::max()}
7190 #endif
7191  , mRoot(&root)
7192  , mNode{nullptr, nullptr, nullptr}
7193  {
7194  }
7195 
7196  /// @brief Constructor from a grid
7197  __hostdev__ ReadAccessor(const GridT& grid)
7198  : ReadAccessor(grid.tree().root())
7199  {
7200  }
7201 
7202  /// @brief Constructor from a tree
7203  __hostdev__ ReadAccessor(const TreeT& tree)
7204  : ReadAccessor(tree.root())
7205  {
7206  }
7207 
7208  __hostdev__ const RootT& root() const { return *mRoot; }
7209 
7210  /// @brief Defaulted copy constructor, destructor, and assignment operator
7211  ReadAccessor(const ReadAccessor&) = default;
7212  ~ReadAccessor() = default;
7213  ReadAccessor& operator=(const ReadAccessor&) = default;
7214 
7215  /// @brief Return a const pointer to the cached node of the specified type
7216  ///
7217  /// @warning The return value could be NULL.
7218  template<typename NodeT>
7219  __hostdev__ const NodeT* getNode() const
7220  {
7221  using T = typename NodeTrait<TreeT, NodeT::LEVEL>::type;
7222  static_assert(is_same<T, NodeT>::value, "ReadAccessor::getNode: Invalid node type");
7223  return reinterpret_cast<const T*>(mNode[NodeT::LEVEL]);
7224  }
7225 
7226  template<int LEVEL>
7227  __hostdev__ const typename NodeTrait<TreeT, LEVEL>::type* getNode() const
7228  {
7229  using T = typename NodeTrait<TreeT, LEVEL>::type;
7230  static_assert(LEVEL >= 0 && LEVEL <= 2, "ReadAccessor::getNode: Invalid node type");
7231  return reinterpret_cast<const T*>(mNode[LEVEL]);
7232  }
7233 
7234  /// @brief Reset this accessor to its initial state, i.e. with an empty cache
7235  __hostdev__ void clear()
7236  {
7237 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY
7238  mKey = CoordType::max();
7239 #else
7240  mKeys[0] = mKeys[1] = mKeys[2] = CoordType::max();
7241 #endif
7242  mNode[0] = mNode[1] = mNode[2] = nullptr;
7243  }
7244 
7245 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY
7246  template<typename NodeT>
7247  __hostdev__ bool isCached(CoordValueType dirty) const
7248  {
7249  if (!mNode[NodeT::LEVEL])
7250  return false;
7251  if (dirty & int32_t(~NodeT::MASK)) {
7252  mNode[NodeT::LEVEL] = nullptr;
7253  return false;
7254  }
7255  return true;
7256  }
7257 
7258  __hostdev__ CoordValueType computeDirty(const CoordType& ijk) const
7259  {
7260  return (ijk[0] ^ mKey[0]) | (ijk[1] ^ mKey[1]) | (ijk[2] ^ mKey[2]);
7261  }
7262 #else
7263  template<typename NodeT>
7264  __hostdev__ bool isCached(const CoordType& ijk) const
7265  {
7266  return (ijk[0] & int32_t(~NodeT::MASK)) == mKeys[NodeT::LEVEL][0] &&
7267  (ijk[1] & int32_t(~NodeT::MASK)) == mKeys[NodeT::LEVEL][1] &&
7268  (ijk[2] & int32_t(~NodeT::MASK)) == mKeys[NodeT::LEVEL][2];
7269  }
7270 #endif
7271 
7272 #ifdef NANOVDB_NEW_ACCESSOR_METHODS
7273  __hostdev__ ValueType getValue(const CoordType& ijk) const
7274  {
7275  return this->template get<GetValue<BuildT>>(ijk);
7276  }
7277  __hostdev__ ValueType getValue(int i, int j, int k) const { return this->template get<GetValue<BuildT>>(CoordType(i, j, k)); }
7278  __hostdev__ ValueType operator()(const CoordType& ijk) const { return this->template get<GetValue<BuildT>>(ijk); }
7279  __hostdev__ ValueType operator()(int i, int j, int k) const { return this->template get<GetValue<BuildT>>(CoordType(i, j, k)); }
7280  __hostdev__ auto getNodeInfo(const CoordType& ijk) const { return this->template get<GetNodeInfo<BuildT>>(ijk); }
7281  __hostdev__ bool isActive(const CoordType& ijk) const { return this->template get<GetState<BuildT>>(ijk); }
7282  __hostdev__ bool probeValue(const CoordType& ijk, ValueType& v) const { return this->template get<ProbeValue<BuildT>>(ijk, v); }
7283  __hostdev__ const LeafT* probeLeaf(const CoordType& ijk) const { return this->template get<GetLeaf<BuildT>>(ijk); }
7284 #else // NANOVDB_NEW_ACCESSOR_METHODS
7285 
7286  __hostdev__ ValueType getValue(const CoordType& ijk) const
7287  {
7288 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY
7289  const CoordValueType dirty = this->computeDirty(ijk);
7290 #else
7291  auto&& dirty = ijk;
7292 #endif
7293  if (this->isCached<LeafT>(dirty)) {
7294  return ((LeafT*)mNode[0])->getValue(ijk);
7295  } else if (this->isCached<NodeT1>(dirty)) {
7296  return ((NodeT1*)mNode[1])->getValueAndCache(ijk, *this);
7297  } else if (this->isCached<NodeT2>(dirty)) {
7298  return ((NodeT2*)mNode[2])->getValueAndCache(ijk, *this);
7299  }
7300  return mRoot->getValueAndCache(ijk, *this);
7301  }
7302  __hostdev__ ValueType operator()(const CoordType& ijk) const
7303  {
7304  return this->getValue(ijk);
7305  }
7306  __hostdev__ ValueType operator()(int i, int j, int k) const
7307  {
7308  return this->getValue(CoordType(i, j, k));
7309  }
7310  __hostdev__ ValueType getValue(int i, int j, int k) const
7311  {
7312  return this->getValue(CoordType(i, j, k));
7313  }
7314 
7315  __hostdev__ NodeInfo getNodeInfo(const CoordType& ijk) const
7316  {
7317 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY
7318  const CoordValueType dirty = this->computeDirty(ijk);
7319 #else
7320  auto&& dirty = ijk;
7321 #endif
7322  if (this->isCached<LeafT>(dirty)) {
7323  return ((LeafT*)mNode[0])->getNodeInfoAndCache(ijk, *this);
7324  } else if (this->isCached<NodeT1>(dirty)) {
7325  return ((NodeT1*)mNode[1])->getNodeInfoAndCache(ijk, *this);
7326  } else if (this->isCached<NodeT2>(dirty)) {
7327  return ((NodeT2*)mNode[2])->getNodeInfoAndCache(ijk, *this);
7328  }
7329  return mRoot->getNodeInfoAndCache(ijk, *this);
7330  }
7331 
7332  __hostdev__ bool isActive(const CoordType& ijk) const
7333  {
7334 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY
7335  const CoordValueType dirty = this->computeDirty(ijk);
7336 #else
7337  auto&& dirty = ijk;
7338 #endif
7339  if (this->isCached<LeafT>(dirty)) {
7340  return ((LeafT*)mNode[0])->isActive(ijk);
7341  } else if (this->isCached<NodeT1>(dirty)) {
7342  return ((NodeT1*)mNode[1])->isActiveAndCache(ijk, *this);
7343  } else if (this->isCached<NodeT2>(dirty)) {
7344  return ((NodeT2*)mNode[2])->isActiveAndCache(ijk, *this);
7345  }
7346  return mRoot->isActiveAndCache(ijk, *this);
7347  }
7348 
7349  __hostdev__ bool probeValue(const CoordType& ijk, ValueType& v) const
7350  {
7351 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY
7352  const CoordValueType dirty = this->computeDirty(ijk);
7353 #else
7354  auto&& dirty = ijk;
7355 #endif
7356  if (this->isCached<LeafT>(dirty)) {
7357  return ((LeafT*)mNode[0])->probeValue(ijk, v);
7358  } else if (this->isCached<NodeT1>(dirty)) {
7359  return ((NodeT1*)mNode[1])->probeValueAndCache(ijk, v, *this);
7360  } else if (this->isCached<NodeT2>(dirty)) {
7361  return ((NodeT2*)mNode[2])->probeValueAndCache(ijk, v, *this);
7362  }
7363  return mRoot->probeValueAndCache(ijk, v, *this);
7364  }
7365  __hostdev__ const LeafT* probeLeaf(const CoordType& ijk) const
7366  {
7367 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY
7368  const CoordValueType dirty = this->computeDirty(ijk);
7369 #else
7370  auto&& dirty = ijk;
7371 #endif
7372  if (this->isCached<LeafT>(dirty)) {
7373  return ((LeafT*)mNode[0]);
7374  } else if (this->isCached<NodeT1>(dirty)) {
7375  return ((NodeT1*)mNode[1])->probeLeafAndCache(ijk, *this);
7376  } else if (this->isCached<NodeT2>(dirty)) {
7377  return ((NodeT2*)mNode[2])->probeLeafAndCache(ijk, *this);
7378  }
7379  return mRoot->probeLeafAndCache(ijk, *this);
7380  }
7381 #endif // NANOVDB_NEW_ACCESSOR_METHODS
7382 
7383  template<typename OpT, typename... ArgsT>
7384  __hostdev__ auto get(const CoordType& ijk, ArgsT&&... args) const
7385  {
7386 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY
7387  const CoordValueType dirty = this->computeDirty(ijk);
7388 #else
7389  auto&& dirty = ijk;
7390 #endif
7391  if (this->isCached<LeafT>(dirty)) {
7392  return ((const LeafT*)mNode[0])->template getAndCache<OpT>(ijk, *this, args...);
7393  } else if (this->isCached<NodeT1>(dirty)) {
7394  return ((const NodeT1*)mNode[1])->template getAndCache<OpT>(ijk, *this, args...);
7395  } else if (this->isCached<NodeT2>(dirty)) {
7396  return ((const NodeT2*)mNode[2])->template getAndCache<OpT>(ijk, *this, args...);
7397  }
7398  return mRoot->template getAndCache<OpT>(ijk, *this, args...);
7399  }
7400 
7401  template<typename OpT, typename... ArgsT>
7402  __hostdev__ auto set(const CoordType& ijk, ArgsT&&... args) const
7403  {
7404 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY
7405  const CoordValueType dirty = this->computeDirty(ijk);
7406 #else
7407  auto&& dirty = ijk;
7408 #endif
7409  if (this->isCached<LeafT>(dirty)) {
7410  return ((LeafT*)mNode[0])->template setAndCache<OpT>(ijk, *this, args...);
7411  } else if (this->isCached<NodeT1>(dirty)) {
7412  return ((NodeT1*)mNode[1])->template setAndCache<OpT>(ijk, *this, args...);
7413  } else if (this->isCached<NodeT2>(dirty)) {
7414  return ((NodeT2*)mNode[2])->template setAndCache<OpT>(ijk, *this, args...);
7415  }
7416  return ((RootT*)mRoot)->template setAndCache<OpT>(ijk, *this, args...);
7417  }
7418 
7419  template<typename RayT>
7420  __hostdev__ uint32_t getDim(const CoordType& ijk, const RayT& ray) const
7421  {
7422 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY
7423  const CoordValueType dirty = this->computeDirty(ijk);
7424 #else
7425  auto&& dirty = ijk;
7426 #endif
7427  if (this->isCached<LeafT>(dirty)) {
7428  return ((LeafT*)mNode[0])->getDimAndCache(ijk, ray, *this);
7429  } else if (this->isCached<NodeT1>(dirty)) {
7430  return ((NodeT1*)mNode[1])->getDimAndCache(ijk, ray, *this);
7431  } else if (this->isCached<NodeT2>(dirty)) {
7432  return ((NodeT2*)mNode[2])->getDimAndCache(ijk, ray, *this);
7433  }
7434  return mRoot->getDimAndCache(ijk, ray, *this);
7435  }
7436 
7437 private:
7438  /// @brief Allow nodes to insert themselves into the cache.
7439  template<typename>
7440  friend class RootNode;
7441  template<typename, uint32_t>
7442  friend class InternalNode;
7443  template<typename, typename, template<uint32_t> class, uint32_t>
7444  friend class LeafNode;
7445 
7446  /// @brief Inserts a leaf node and key pair into this ReadAccessor
7447  template<typename NodeT>
7448  __hostdev__ void insert(const CoordType& ijk, const NodeT* node) const
7449  {
7450 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY
7451  mKey = ijk;
7452 #else
7453  mKeys[NodeT::LEVEL] = ijk & ~NodeT::MASK;
7454 #endif
7455  mNode[NodeT::LEVEL] = node;
7456  }
7457 }; // ReadAccessor<BuildT, 0, 1, 2>
7458 
7459 //////////////////////////////////////////////////
7460 
7461 /// @brief Free-standing function for convenient creation of a ReadAccessor with
7462 /// optional and customizable node caching.
7463 ///
7464 /// @details createAccessor<>(grid): No caching of nodes and hence it's thread-safe but slow
7465 /// createAccessor<0>(grid): Caching of leaf nodes only
7466 /// createAccessor<1>(grid): Caching of lower internal nodes only
7467 /// createAccessor<2>(grid): Caching of upper internal nodes only
7468 /// createAccessor<0,1>(grid): Caching of leaf and lower internal nodes
7469 /// createAccessor<0,2>(grid): Caching of leaf and upper internal nodes
7470 /// createAccessor<1,2>(grid): Caching of lower and upper internal nodes
7471 /// createAccessor<0,1,2>(grid): Caching of all nodes at all tree levels
7472 
7473 template<int LEVEL0 = -1, int LEVEL1 = -1, int LEVEL2 = -1, typename ValueT = float>
7474 __hostdev__ ReadAccessor<ValueT, LEVEL0, LEVEL1, LEVEL2> createAccessor(const NanoGrid<ValueT>& grid)
7475 {
7476  return ReadAccessor<ValueT, LEVEL0, LEVEL1, LEVEL2>(grid);
7477 }
7478 
7479 template<int LEVEL0 = -1, int LEVEL1 = -1, int LEVEL2 = -1, typename ValueT = float>
7480 __hostdev__ ReadAccessor<ValueT, LEVEL0, LEVEL1, LEVEL2> createAccessor(const NanoTree<ValueT>& tree)
7481 {
7482  return ReadAccessor<ValueT, LEVEL0, LEVEL1, LEVEL2>(tree);
7483 }
7484 
7485 template<int LEVEL0 = -1, int LEVEL1 = -1, int LEVEL2 = -1, typename ValueT = float>
7486 __hostdev__ ReadAccessor<ValueT, LEVEL0, LEVEL1, LEVEL2> createAccessor(const NanoRoot<ValueT>& root)
7487 {
7488  return ReadAccessor<ValueT, LEVEL0, LEVEL1, LEVEL2>(root);
7489 }
7490 
7491 //////////////////////////////////////////////////
7492 
7493 /// @brief This is a convenient class that allows for access to grid meta-data
7494 /// that is independent of the value type of a grid. That is, this class
7495 /// can be used to get information about a grid without actually knowing
7496 /// its ValueType.
7497 class GridMetaData
7498 { // 768 bytes (32 byte aligned)
7499  GridData mGridData; // 672B
7500  TreeData mTreeData; // 64B
7501  CoordBBox mIndexBBox; // 24B. AABB of active values in index space.
7502  uint32_t mRootTableSize, mPadding{0}; // 8B
7503 
7504 public:
7505  template<typename T>
7506  GridMetaData(const NanoGrid<T>& grid)
7507  {
7508  mGridData = *grid.data();
7509  mTreeData = *grid.tree().data();
7510  mIndexBBox = grid.indexBBox();
7511  mRootTableSize = grid.tree().root().getTableSize();
7512  }
7513  GridMetaData(const GridData* gridData)
7514  {
7515  static_assert(8 * 96 == sizeof(GridMetaData), "GridMetaData has unexpected size");
7516  if (GridMetaData::safeCast(gridData)) {
7517  memcpy64(this, gridData, 96);
7518  } else {// otherwise copy each member individually
7519  mGridData = *gridData;
7520  mTreeData = *reinterpret_cast<const TreeData*>(gridData->treePtr());
7521  mIndexBBox = gridData->indexBBox();
7522  mRootTableSize = gridData->rootTableSize();
7523  }
7524  }
7525  /// @brief return true if the RootData follows right after the TreeData.
7526  /// If so, this implies that it's safe to cast the grid from which
7527  /// this instance was constructed to a GridMetaData
7528  __hostdev__ bool safeCast() const { return mTreeData.isRootNext(); }
7529 
7530  /// @brief return true if it is safe to cast the grid to a pointer
7531  /// of type GridMetaData, i.e. construction can be avoided.
7532  __hostdev__ static bool safeCast(const GridData *gridData){
7533  NANOVDB_ASSERT(gridData && gridData->isValid());
7534  return gridData->isRootConnected();
7535  }
7536  /// @brief return true if it is safe to cast the grid to a pointer
7537  /// of type GridMetaData, i.e. construction can be avoided.
7538  template<typename T>
7539  __hostdev__ static bool safeCast(const NanoGrid<T>& grid){return grid.tree().isRootNext();}
7540  __hostdev__ bool isValid() const { return mGridData.isValid(); }
7541  __hostdev__ const GridType& gridType() const { return mGridData.mGridType; }
7542  __hostdev__ const GridClass& gridClass() const { return mGridData.mGridClass; }
7543  __hostdev__ bool isLevelSet() const { return mGridData.mGridClass == GridClass::LevelSet; }
7544  __hostdev__ bool isFogVolume() const { return mGridData.mGridClass == GridClass::FogVolume; }
7545  __hostdev__ bool isStaggered() const { return mGridData.mGridClass == GridClass::Staggered; }
7546  __hostdev__ bool isPointIndex() const { return mGridData.mGridClass == GridClass::PointIndex; }
7547  __hostdev__ bool isGridIndex() const { return mGridData.mGridClass == GridClass::IndexGrid; }
7548  __hostdev__ bool isPointData() const { return mGridData.mGridClass == GridClass::PointData; }
7549  __hostdev__ bool isMask() const { return mGridData.mGridClass == GridClass::Topology; }
7550  __hostdev__ bool isUnknown() const { return mGridData.mGridClass == GridClass::Unknown; }
7551  __hostdev__ bool hasMinMax() const { return mGridData.mFlags.isMaskOn(GridFlags::HasMinMax); }
7552  __hostdev__ bool hasBBox() const { return mGridData.mFlags.isMaskOn(GridFlags::HasBBox); }
7553  __hostdev__ bool hasLongGridName() const { return mGridData.mFlags.isMaskOn(GridFlags::HasLongGridName); }
7554  __hostdev__ bool hasAverage() const { return mGridData.mFlags.isMaskOn(GridFlags::HasAverage); }
7555  __hostdev__ bool hasStdDeviation() const { return mGridData.mFlags.isMaskOn(GridFlags::HasStdDeviation); }
7556  __hostdev__ bool isBreadthFirst() const { return mGridData.mFlags.isMaskOn(GridFlags::IsBreadthFirst); }
7557  __hostdev__ uint64_t gridSize() const { return mGridData.mGridSize; }
7558  __hostdev__ uint32_t gridIndex() const { return mGridData.mGridIndex; }
7559  __hostdev__ uint32_t gridCount() const { return mGridData.mGridCount; }
7560  __hostdev__ const char* shortGridName() const { return mGridData.mGridName; }
7561  __hostdev__ const Map& map() const { return mGridData.mMap; }
7562  __hostdev__ const BBox<Vec3d>& worldBBox() const { return mGridData.mWorldBBox; }
7563  __hostdev__ const BBox<Coord>& indexBBox() const { return mIndexBBox; }
7564  __hostdev__ Vec3d voxelSize() const { return mGridData.mVoxelSize; }
7565  __hostdev__ int blindDataCount() const { return mGridData.mBlindMetadataCount; }
7566  __hostdev__ uint64_t activeVoxelCount() const { return mTreeData.mVoxelCount; }
7567  __hostdev__ const uint32_t& activeTileCount(uint32_t level) const { return mTreeData.mTileCount[level - 1]; }
7568  __hostdev__ uint32_t nodeCount(uint32_t level) const { return mTreeData.mNodeCount[level]; }
7569  __hostdev__ uint64_t checksum() const { return mGridData.mChecksum; }
7570  __hostdev__ uint32_t rootTableSize() const { return mRootTableSize; }
7571  __hostdev__ bool isEmpty() const { return mRootTableSize == 0; }
7572  __hostdev__ Version version() const { return mGridData.mVersion; }
7573 }; // GridMetaData
7574 
7575 /// @brief Class to access points at a specific voxel location
7576 ///
7577 /// @note For GridClass::PointIndex, AttT should be uint32_t, and for GridClass::PointData it should be Vec3f
7578 template<typename AttT, typename BuildT = uint32_t>
7579 class PointAccessor : public DefaultReadAccessor<BuildT>
7580 {
7581  using AccT = DefaultReadAccessor<BuildT>;
7582  const NanoGrid<BuildT>& mGrid;
7583  const AttT* mData;
7584 
7585 public:
7586  __hostdev__ PointAccessor(const NanoGrid<BuildT>& grid)
7587  : AccT(grid.tree().root())
7588  , mGrid(grid)
7589  , mData(grid.template getBlindData<AttT>(0))
7590  {
7591  NANOVDB_ASSERT(grid.gridType() == mapToGridType<BuildT>());
7594  }
7595 
7596  /// @brief return true if this accessor was initialized correctly
7597  __hostdev__ operator bool() const { return mData != nullptr; }
7598 
7599  __hostdev__ const NanoGrid<BuildT>& grid() const { return mGrid; }
7600 
7601  /// @brief Return the total number of points in the grid and set the
7602  /// iterators to the complete range of points.
7603  __hostdev__ uint64_t gridPoints(const AttT*& begin, const AttT*& end) const
7604  {
7605  const uint64_t count = mGrid.blindMetaData(0u).mValueCount;
7606  begin = mData;
7607  end = begin + count;
7608  return count;
7609  }
7610  /// @brief Return the number of points in the leaf node containing the coordinate @a ijk.
7611  /// If this return value is larger than zero then the iterators @a begin and @a end
7612  /// will point to all the attributes contained within that leaf node.
7613  __hostdev__ uint64_t leafPoints(const Coord& ijk, const AttT*& begin, const AttT*& end) const
7614  {
7615  auto* leaf = this->probeLeaf(ijk);
7616  if (leaf == nullptr) {
7617  return 0;
7618  }
7619  begin = mData + leaf->minimum();
7620  end = begin + leaf->maximum();
7621  return leaf->maximum();
7622  }
7623 
7624  /// @brief get iterators over attributes to points at a specific voxel location
7625  __hostdev__ uint64_t voxelPoints(const Coord& ijk, const AttT*& begin, const AttT*& end) const
7626  {
7627  begin = end = nullptr;
7628  if (auto* leaf = this->probeLeaf(ijk)) {
7629  const uint32_t offset = NanoLeaf<BuildT>::CoordToOffset(ijk);
7630  if (leaf->isActive(offset)) {
7631  begin = mData + leaf->minimum();
7632  end = begin + leaf->getValue(offset);
7633  if (offset > 0u)
7634  begin += leaf->getValue(offset - 1);
7635  }
7636  }
7637  return end - begin;
7638  }
7639 }; // PointAccessor
7640 
7641 template<typename AttT>
7642 class PointAccessor<AttT, Point> : public DefaultReadAccessor<Point>
7643 {
7644  using AccT = DefaultReadAccessor<Point>;
7645  const NanoGrid<Point>& mGrid;
7646  const AttT* mData;
7647 
7648 public:
7649  __hostdev__ PointAccessor(const NanoGrid<Point>& grid)
7650  : AccT(grid.tree().root())
7651  , mGrid(grid)
7652  , mData(grid.template getBlindData<AttT>(0))
7653  {
7654  NANOVDB_ASSERT(mData);
7661  }
7662 
7664  /// @brief return true if this accessor was initialized correctly
7664  __hostdev__ operator bool() const { return mData != nullptr; }
7665 
7666  __hostdev__ const NanoGrid<Point>& grid() const { return mGrid; }
7667 
7668  /// @brief Return the total number of points in the grid and set the
7669  /// iterators to the complete range of points.
7670  __hostdev__ uint64_t gridPoints(const AttT*& begin, const AttT*& end) const
7671  {
7672  const uint64_t count = mGrid.blindMetaData(0u).mValueCount;
7673  begin = mData;
7674  end = begin + count;
7675  return count;
7676  }
7677  /// @brief Return the number of points in the leaf node containing the coordinate @a ijk.
7678  /// If this return value is larger than zero then the iterators @a begin and @a end
7679  /// will point to all the attributes contained within that leaf node.
7680  __hostdev__ uint64_t leafPoints(const Coord& ijk, const AttT*& begin, const AttT*& end) const
7681  {
7682  auto* leaf = this->probeLeaf(ijk);
7683  if (leaf == nullptr)
7684  return 0;
7685  begin = mData + leaf->offset();
7686  end = begin + leaf->pointCount();
7687  return leaf->pointCount();
7688  }
7689 
7690  /// @brief get iterators over attributes to points at a specific voxel location
7691  __hostdev__ uint64_t voxelPoints(const Coord& ijk, const AttT*& begin, const AttT*& end) const
7692  {
7693  if (auto* leaf = this->probeLeaf(ijk)) {
7694  const uint32_t n = NanoLeaf<Point>::CoordToOffset(ijk);
7695  if (leaf->isActive(n)) {
7696  begin = mData + leaf->first(n);
7697  end = mData + leaf->last(n);
7698  return end - begin;
7699  }
7700  }
7701  begin = end = nullptr;
7702  return 0u; // no leaf or inactive voxel
7703  }
7704 }; // PointAccessor<AttT, Point>
7705 
7706 /// @brief Class to access values in channels at a specific voxel location.
7707 ///
7708 /// @note The ChannelT template parameter can be either const or non-const.
7709 template<typename ChannelT, typename IndexT = ValueIndex>
7710 class ChannelAccessor : public DefaultReadAccessor<IndexT>
7711 {
7712  static_assert(BuildTraits<IndexT>::is_index, "Expected an index build type");
7713  using BaseT = DefaultReadAccessor<IndexT>;
7714 
7715  const NanoGrid<IndexT>& mGrid;
7716  ChannelT* mChannel;
7717 
7718 public:
7719  using ValueType = ChannelT;
7720  using TreeType = NanoTree<IndexT>;
7721  using AccessorType = ChannelAccessor<ChannelT, IndexT>;
7722 
7723  /// @brief Ctor from an IndexGrid and an integer ID of an internal channel
7724  /// that is assumed to exist as blind data in the IndexGrid.
7725  __hostdev__ ChannelAccessor(const NanoGrid<IndexT>& grid, uint32_t channelID = 0u)
7726  : BaseT(grid.tree().root())
7727  , mGrid(grid)
7728  , mChannel(nullptr)
7729  {
7730  NANOVDB_ASSERT(isIndex(grid.gridType()));
7732  this->setChannel(channelID);
7733  }
7734 
7735  /// @brief Ctor from an IndexGrid and an external channel
7736  __hostdev__ ChannelAccessor(const NanoGrid<IndexT>& grid, ChannelT* channelPtr)
7737  : BaseT(grid.tree().root())
7738  , mGrid(grid)
7739  , mChannel(channelPtr)
7740  {
7741  NANOVDB_ASSERT(isIndex(grid.gridType()));
7743  }
7744 
7746  /// @brief return true if this accessor was initialized correctly
7746  __hostdev__ operator bool() const { return mChannel != nullptr; }
7747 
7748  /// @brief Return a const reference to the IndexGrid
7749  __hostdev__ const NanoGrid<IndexT>& grid() const { return mGrid; }
7750 
7751  /// @brief Return a const reference to the tree of the IndexGrid
7752  __hostdev__ const TreeType& tree() const { return mGrid.tree(); }
7753 
7754  /// @brief Return a vector of the axial voxel sizes
7755  __hostdev__ const Vec3d& voxelSize() const { return mGrid.voxelSize(); }
7756 
7757  /// @brief Return total number of values indexed by the IndexGrid
7758  __hostdev__ const uint64_t& valueCount() const { return mGrid.valueCount(); }
7759 
7760  /// @brief Change to an external channel
7761  /// @return Pointer to channel data
7762  __hostdev__ ChannelT* setChannel(ChannelT* channelPtr) {return mChannel = channelPtr;}
7763 
7764  /// @brief Change to an internal channel, assuming it exists as blind data
7765  /// in the IndexGrid.
7766  /// @return Pointer to channel data, which could be NULL if channelID is out of range or
7767  /// if ChannelT does not match the value type of the blind data
7768  __hostdev__ ChannelT* setChannel(uint32_t channelID)
7769  {
7770  return mChannel = const_cast<ChannelT*>(mGrid.template getBlindData<ChannelT>(channelID));
7771  }
7772 
7773  /// @brief Return the linear offset into a channel that maps to the specified coordinate
7774  __hostdev__ uint64_t getIndex(const Coord& ijk) const { return BaseT::getValue(ijk); }
7775  __hostdev__ uint64_t idx(int i, int j, int k) const { return BaseT::getValue(Coord(i, j, k)); }
7776 
7777  /// @brief Return the value from a cached channel that maps to the specified coordinate
7778  __hostdev__ ChannelT& getValue(const Coord& ijk) const { return mChannel[BaseT::getValue(ijk)]; }
7779  __hostdev__ ChannelT& operator()(const Coord& ijk) const { return this->getValue(ijk); }
7780  __hostdev__ ChannelT& operator()(int i, int j, int k) const { return this->getValue(Coord(i, j, k)); }
7781 
7782  /// @brief Return the active state of the specified voxel and update @a v with its value
7783  __hostdev__ bool probeValue(const Coord& ijk, typename remove_const<ChannelT>::type& v) const
7784  {
7785  uint64_t idx;
7786  const bool isActive = BaseT::probeValue(ijk, idx);
7787  v = mChannel[idx];
7788  return isActive;
7789  }
7790  /// @brief Return the value from a specified channel that maps to the specified coordinate
7791  ///
7792  /// @note The template parameter can be either const or non-const
7793  template<typename T>
7794  __hostdev__ T& getValue(const Coord& ijk, T* channelPtr) const { return channelPtr[BaseT::getValue(ijk)]; }
7795 
7796 }; // ChannelAccessor
7797 
7798 #if 0
7799 // This MiniGridHandle class is only included as a stand-alone example. Note that aligned_alloc is a C++17 feature!
7800 // Normally we recommend using GridHandle defined in util/GridHandle.h but this minimal implementation could be an
7801 // alternative when using the IO medthods defined below.
7802 struct MiniGridHandle {
7803  struct BufferType {
7804  uint8_t *data;
7805  uint64_t size;
7806  BufferType(uint64_t n=0) : data(std::aligned_alloc(NANOVDB_DATA_ALIGNMENT, n)), size(n) {assert(isValid(data));}
7807  BufferType(BufferType &&other) : data(other.data), size(other.size) {other.data=nullptr; other.size=0;}
7808  ~BufferType() {std::free(data);}
7809  BufferType& operator=(const BufferType &other) = delete;
7810  BufferType& operator=(BufferType &&other){data=other.data; size=other.size; other.data=nullptr; other.size=0; return *this;}
7811  static BufferType create(size_t n, BufferType* dummy = nullptr) {return BufferType(n);}
7812  } buffer;
7813  MiniGridHandle(BufferType &&buf) : buffer(std::move(buf)) {}
7814  const uint8_t* data() const {return buffer.data;}
7815 };// MiniGridHandle
7816 #endif
7817 
7818 namespace io {
7819 
7820 /// @brief Define compression codecs
7821 ///
7822 /// @note NONE is the default, ZIP is slow but compact and BLOSC offers a great balance.
7823 ///
7824 /// @throw NanoVDB optionally supports ZIP and BLOSC compression and will throw an exception
7825 /// if its support is required but missing.
7826 enum class Codec : uint16_t { NONE = 0,
7827  ZIP = 1,
7828  BLOSC = 2,
7829  END = 3 };
7830 
7831 /// @brief Data encoded at the head of each segment of a file or stream.
7832 ///
7833 /// @note A file or stream is composed of one or more segments that each contain
7834 /// one or more grids.
7835 struct FileHeader {// 16 bytes
7836  uint64_t magic;// 8 bytes
7837  Version version;// 4 bytes version numbers
7838  uint16_t gridCount;// 2 bytes
7839  Codec codec;// 2 bytes
7840  bool isValid() const {return magic == NANOVDB_MAGIC_NUMBER || magic == NANOVDB_MAGIC_FILE;}
7841 }; // FileHeader ( 16 bytes = 2 words )
7842 
7843 // @brief Data encoded for each of the grids associated with a segment.
7844 // Grid size in memory (uint64_t) |
7845 // Grid size on disk (uint64_t) |
7846 // Grid name hash key (uint64_t) |
7847 // Number of active voxels (uint64_t) |
7848 // Grid type (uint32_t) |
7849 // Grid class (uint32_t) |
7850 // Characters in grid name (uint32_t) |
7851 // AABB in world space (2*3*double) | one per grid in file
7852 // AABB in index space (2*3*int) |
7853 // Size of a voxel in world units (3*double) |
7854 // Byte size of the grid name (uint32_t) |
7855 // Number of nodes per level (4*uint32_t) |
7856 // Number of active tiles per level (3*uint32_t) |
7857 // Codec for file compression (uint16_t) |
7858 // Padding due to 8B alignment (uint16_t) |
7859 // Version number (uint32_t) |
7860 struct FileMetaData
7861 {// 176 bytes
7862  uint64_t gridSize, fileSize, nameKey, voxelCount; // 4 * 8 = 32B.
7863  GridType gridType; // 4B.
7864  GridClass gridClass; // 4B.
7865  BBox<Vec3d> worldBBox; // 2 * 3 * 8 = 48B.
7866  CoordBBox indexBBox; // 2 * 3 * 4 = 24B.
7867  Vec3d voxelSize; // 24B.
7868  uint32_t nameSize; // 4B.
7869  uint32_t nodeCount[4]; //4 x 4 = 16B
7870  uint32_t tileCount[3];// 3 x 4 = 12B
7871  Codec codec; // 2B
7872  uint16_t padding;// 2B, due to 8B alignment from uint64_t
7873  Version version;// 4B
7874 }; // FileMetaData
7875 
7876 // the following code block uses std and therefore needs to be ignored by CUDA and HIP
7877 #if !defined(__CUDA_ARCH__) && !defined(__HIP__)
7878 
7879 inline const char* toStr(Codec codec)
7880 {
7881  static const char * LUT[] = { "NONE", "ZIP", "BLOSC" , "END" };
7882  static_assert(sizeof(LUT) / sizeof(char*) - 1 == int(Codec::END), "Unexpected size of LUT");
7883  return LUT[static_cast<int>(codec)];
7884 }
7885 
7886 // Note that starting with version 32.6.0 it is possible to write and read raw grid buffers to
7887 // files, e.g. os.write((const char*)&buffer.data(), buffer.size()) or more conveniently as
7888 // handle.write(fileName). In addition to this simple approach we offer the methods below to
7889 // write traditional uncompressed nanovdb files that unlike raw files include metadata that
7890 // is used for tools like nanovdb_print.
7891 
7892 ///
7893 /// @brief This is a standalone alternative to io::writeGrid(...,Codec::NONE) defined in util/IO.h
7894 /// Unlike the latter this function has no dependencies at all, not even NanoVDB.h, so it also
7895 /// works if client code only includes PNanoVDB.h!
7896 ///
7897 /// @details Writes a raw NanoVDB buffer, possibly with multiple grids, to a stream WITHOUT compression.
7898 /// It follows all the conventions in util/IO.h so the stream can be read by all existing client
7899 /// code of NanoVDB.
7900 ///
7901 /// @note This method will always write uncompressed grids to the stream, i.e. Blosc or ZIP compression
7902 /// is never applied! This is a fundamental limitation and feature of this standalone function.
7903 ///
7904 /// @throw std::invalid_argument if buffer does not point to a valid NanoVDB grid.
7905 ///
7906 /// @warning This is pretty ugly code that involves lots of pointer and bit manipulations - not for the faint of heart :)
7907 template<typename StreamT> // StreamT class must support: "void write(const char*, size_t)"
7908 void writeUncompressedGrid(StreamT& os, const GridData* gridData, bool raw = false)
7909 {
7910  NANOVDB_ASSERT(gridData->mMagic == NANOVDB_MAGIC_NUMBER || gridData->mMagic == NANOVDB_MAGIC_GRID);
7911  NANOVDB_ASSERT(gridData->mVersion.isCompatible());
7912  if (!raw) {// segment with a single grid: FileHeader, FileMetaData, gridName, Grid
7913 #ifdef NANOVDB_USE_NEW_MAGIC_NUMBERS
7914  FileHeader head{NANOVDB_MAGIC_FILE, gridData->mVersion, 1u, Codec::NONE};
7915 #else
7916  FileHeader head{NANOVDB_MAGIC_NUMBER, gridData->mVersion, 1u, Codec::NONE};
7917 #endif
7918  const char* gridName = gridData->gridName();
7919  uint32_t nameSize = 1; // '\0'
7920  for (const char* p = gridName; *p != '\0'; ++p) ++nameSize;
7921  const TreeData* treeData = (const TreeData*)gridData->treePtr();
7922  FileMetaData meta{gridData->mGridSize, gridData->mGridSize, 0u, treeData->mVoxelCount,
7923  gridData->mGridType, gridData->mGridClass, gridData->mWorldBBox,
7924  treeData->bbox(), gridData->mVoxelSize, nameSize,
7925  {treeData->mNodeCount[0], treeData->mNodeCount[1], treeData->mNodeCount[2], 1u},
7926  {treeData->mTileCount[0], treeData->mTileCount[1], treeData->mTileCount[2]},
7927  Codec::NONE, 0u, gridData->mVersion }; // FileMetaData
7928  os.write((const char*)&head, sizeof(FileHeader)); // write header
7929  os.write((const char*)&meta, sizeof(FileMetaData)); // write meta data
7930  os.write(gridName, nameSize); // write grid name
7931  }
7932  os.write((const char*)gridData, gridData->mGridSize);// write the grid
7933 }// writeUncompressedGrid
7934 
7935 /// @brief write multiple NanoVDB grids to a single file, without compression.
7936 /// @note To write all grids in a single GridHandle simply use handle.write("fileName")
7937 template<typename GridHandleT, template<typename...> class VecT>
7938 void writeUncompressedGrids(const char* fileName, const VecT<GridHandleT>& handles, bool raw = false)
7939 {
7940 #ifdef NANOVDB_USE_IOSTREAMS // use this to switch between std::ofstream or FILE implementations
7941  std::ofstream os(fileName, std::ios::out | std::ios::binary | std::ios::trunc);
7942 #else
7943  struct StreamT {
7944  FILE* fptr;
7945  StreamT(const char* name) { fptr = fopen(name, "wb"); }
7946  ~StreamT() { fclose(fptr); }
7947  void write(const char* data, size_t n) { fwrite(data, 1, n, fptr); }
7948  bool is_open() const { return fptr != NULL; }
7949  } os(fileName);
7950 #endif
7951  if (!os.is_open()) {
7952  fprintf(stderr, "nanovdb::writeUncompressedGrids: Unable to open file \"%s\" for output\n", fileName);
7953  exit(EXIT_FAILURE);
7954  }
7955  for (auto& h : handles) {
7956  for (uint32_t n=0; n<h.gridCount(); ++n) writeUncompressedGrid(os, h.gridData(n), raw);
7957  }
7958 } // writeUncompressedGrids
7959 
7960 /// @brief read all uncompressed grids from a stream and return their handles.
7961 ///
7962 /// @throw std::invalid_argument if stream does not contain a single uncompressed valid NanoVDB grid
7963 ///
7964 /// @details StreamT class must support: "bool read(char*, size_t)" and "void skip(uint32_t)"
7965 template<typename GridHandleT, typename StreamT, template<typename...> class VecT>
7966 VecT<GridHandleT> readUncompressedGrids(StreamT& is, const typename GridHandleT::BufferType& pool = typename GridHandleT::BufferType())
7967 {
7968  VecT<GridHandleT> handles;
7969  GridData data;
7970  is.read((char*)&data, sizeof(GridData));
7971  if (data.isValid()) {// stream contains a raw grid buffer
7972  uint64_t size = data.mGridSize, sum = 0u;
7973  while(data.mGridIndex + 1u < data.mGridCount) {
7974  is.skip(data.mGridSize - sizeof(GridData));// skip grid
7975  is.read((char*)&data, sizeof(GridData));// read sizeof(GridData) bytes
7976  sum += data.mGridSize;
7977  }
7978  is.skip(-int64_t(sum + sizeof(GridData)));// rewind to start
7979  auto buffer = GridHandleT::BufferType::create(size + sum, &pool);
7980  is.read((char*)(buffer.data()), buffer.size());
7981  handles.emplace_back(std::move(buffer));
7982  } else {// Header0, MetaData0, gridName0, Grid0...HeaderN, MetaDataN, gridNameN, GridN
7983  is.skip(-sizeof(GridData));// rewind
7984  FileHeader head;
7985  while(is.read((char*)&head, sizeof(FileHeader))) {
7986  if (!head.isValid()) {
7987  fprintf(stderr, "nanovdb::readUncompressedGrids: invalid magic number = \"%s\"\n", (const char*)&(head.magic));
7988  exit(EXIT_FAILURE);
7989  } else if (!head.version.isCompatible()) {
7990  fprintf(stderr, "nanovdb::readUncompressedGrids: invalid major version = \"%s\"\n", head.version.c_str());
7991  exit(EXIT_FAILURE);
7992  } else if (head.codec != Codec::NONE) {
7993  fprintf(stderr, "nanovdb::readUncompressedGrids: invalid codec = \"%s\"\n", toStr(head.codec));
7994  exit(EXIT_FAILURE);
7995  }
7996  FileMetaData meta;
7997  for (uint16_t i = 0; i < head.gridCount; ++i) { // read all grids in segment
7998  is.read((char*)&meta, sizeof(FileMetaData));// read meta data
7999  is.skip(meta.nameSize); // skip grid name
8000  auto buffer = GridHandleT::BufferType::create(meta.gridSize, &pool);
8001  is.read((char*)buffer.data(), meta.gridSize);// read grid
8002  handles.emplace_back(std::move(buffer));
8003  }// loop over grids in segment
8004  }// loop over segments
8005  }
8006  return handles;
8007 } // readUncompressedGrids
8008 
8009 /// @brief Read multiple uncompressed NanoVDB grids from a file and return them as a vector.
8010 template<typename GridHandleT, template<typename...> class VecT>
8011 VecT<GridHandleT> readUncompressedGrids(const char* fileName, const typename GridHandleT::BufferType& buffer = typename GridHandleT::BufferType())
8012 {
8013 #ifdef NANOVDB_USE_IOSTREAMS // use this to switch between std::ifstream or FILE implementations
8014  struct StreamT : public std::ifstream {
8015  StreamT(const char* name) : std::ifstream(name, std::ios::in | std::ios::binary){}
8016  void skip(int64_t off) { this->seekg(off, std::ios_base::cur); }
8017  };
8018 #else
8019  struct StreamT {
8020  FILE* fptr;
8021  StreamT(const char* name) { fptr = fopen(name, "rb"); }
8022  ~StreamT() { if (fptr) fclose(fptr); }
8023  bool read(char* data, size_t n) {
8024  size_t m = fread(data, 1, n, fptr);
8025  return n == m;
8026  }
8027  void skip(int64_t off) { fseek(fptr, (long int)off, SEEK_CUR); }
8028  bool is_open() const { return fptr != NULL; }
8029  };
8030 #endif
8031  StreamT is(fileName);
8032  if (!is.is_open()) {
8033  fprintf(stderr, "nanovdb::readUncompressedGrids: Unable to open file \"%s\" for input\n", fileName);
8034  exit(EXIT_FAILURE);
8035  }
8036  return readUncompressedGrids<GridHandleT, StreamT, VecT>(is, buffer);
8037 } // readUncompressedGrids
8038 
8039 #endif // if !defined(__CUDA_ARCH__) && !defined(__HIP__)
8040 
8041 } // namespace io
8042 
8043 // ----------------------------> Implementations of random access methods <--------------------------------------
8044 
8045 /// @brief Implements Tree::getValue(Coord), i.e. return the value associated with a specific coordinate @c ijk.
8046 /// @tparam BuildT Build type of the grid being called
8047 /// @details The value at a coordinate maps to the background, a tile value or a leaf value.
8048 template<typename BuildT>
8049 struct GetValue
8050 {
8051  __hostdev__ static auto get(const NanoRoot<BuildT>& root) { return root.mBackground; }
8052  __hostdev__ static auto get(const typename NanoRoot<BuildT>::Tile& tile) { return tile.value; }
8053  __hostdev__ static auto get(const NanoUpper<BuildT>& node, uint32_t n) { return node.mTable[n].value; }
8054  __hostdev__ static auto get(const NanoLower<BuildT>& node, uint32_t n) { return node.mTable[n].value; }
8055  __hostdev__ static auto get(const NanoLeaf<BuildT>& leaf, uint32_t n) { return leaf.getValue(n); } // works with all build types
8056 }; // GetValue<BuildT>
8057 
8058 template<typename BuildT>
8059 struct SetValue
8060 {
8061  static_assert(!BuildTraits<BuildT>::is_special, "SetValue does not support special value types");
8062  using ValueT = typename NanoLeaf<BuildT>::ValueType;
8063  __hostdev__ static auto set(NanoRoot<BuildT>&, const ValueT&) {} // no-op
8064  __hostdev__ static auto set(typename NanoRoot<BuildT>::Tile& tile, const ValueT& v) { tile.value = v; }
8065  __hostdev__ static auto set(NanoUpper<BuildT>& node, uint32_t n, const ValueT& v) { node.mTable[n].value = v; }
8066  __hostdev__ static auto set(NanoLower<BuildT>& node, uint32_t n, const ValueT& v) { node.mTable[n].value = v; }
8067  __hostdev__ static auto set(NanoLeaf<BuildT>& leaf, uint32_t n, const ValueT& v) { leaf.mValues[n] = v; }
8068 }; // SetValue<BuildT>
8069 
8070 template<typename BuildT>
8071 struct SetVoxel
8072 {
8073  static_assert(!BuildTraits<BuildT>::is_special, "SetVoxel does not support special value types");
8074  using ValueT = typename NanoLeaf<BuildT>::ValueType;
8075  __hostdev__ static auto set(NanoRoot<BuildT>&, const ValueT&) {} // no-op
8076  __hostdev__ static auto set(typename NanoRoot<BuildT>::Tile&, const ValueT&) {} // no-op
8077  __hostdev__ static auto set(NanoUpper<BuildT>&, uint32_t, const ValueT&) {} // no-op
8078  __hostdev__ static auto set(NanoLower<BuildT>&, uint32_t, const ValueT&) {} // no-op
8079  __hostdev__ static auto set(NanoLeaf<BuildT>& leaf, uint32_t n, const ValueT& v) { leaf.mValues[n] = v; }
8080 }; // SetVoxel<BuildT>
8081 
8082 /// @brief Implements Tree::isActive(Coord)
8083 /// @tparam BuildT Build type of the grid being called
8084 template<typename BuildT>
8085 struct GetState
8086 {
8087  __hostdev__ static auto get(const NanoRoot<BuildT>&) { return false; }
8088  __hostdev__ static auto get(const typename NanoRoot<BuildT>::Tile& tile) { return tile.state > 0; }
8089  __hostdev__ static auto get(const NanoUpper<BuildT>& node, uint32_t n) { return node.mValueMask.isOn(n); }
8090  __hostdev__ static auto get(const NanoLower<BuildT>& node, uint32_t n) { return node.mValueMask.isOn(n); }
8091  __hostdev__ static auto get(const NanoLeaf<BuildT>& leaf, uint32_t n) { return leaf.mValueMask.isOn(n); }
8092 }; // GetState<BuildT>
8093 
8094 /// @brief Implements Tree::getDim(Coord)
8095 /// @tparam BuildT Build type of the grid being called
8096 template<typename BuildT>
8097 struct GetDim
8098 {
8099  __hostdev__ static uint32_t get(const NanoRoot<BuildT>&) { return 0u; } // background
8100  __hostdev__ static uint32_t get(const typename NanoRoot<BuildT>::Tile&) { return 4096u; }
8101  __hostdev__ static uint32_t get(const NanoUpper<BuildT>&, uint32_t) { return 128u; }
8102  __hostdev__ static uint32_t get(const NanoLower<BuildT>&, uint32_t) { return 8u; }
8103  __hostdev__ static uint32_t get(const NanoLeaf<BuildT>&, uint32_t) { return 1u; }
8104 }; // GetDim<BuildT>
8105 
8106 /// @brief Return the pointer to the leaf node that contains Coord. Implements Tree::probeLeaf(Coord)
8107 /// @tparam BuildT Build type of the grid being called
8108 template<typename BuildT>
8109 struct GetLeaf
8110 {
8111  __hostdev__ static const NanoLeaf<BuildT>* get(const NanoRoot<BuildT>&) { return nullptr; }
8112  __hostdev__ static const NanoLeaf<BuildT>* get(const typename NanoRoot<BuildT>::Tile&) { return nullptr; }
8113  __hostdev__ static const NanoLeaf<BuildT>* get(const NanoUpper<BuildT>&, uint32_t) { return nullptr; }
8114  __hostdev__ static const NanoLeaf<BuildT>* get(const NanoLower<BuildT>&, uint32_t) { return nullptr; }
8115  __hostdev__ static const NanoLeaf<BuildT>* get(const NanoLeaf<BuildT>& leaf, uint32_t) { return &leaf; }
8116 }; // GetLeaf<BuildT>
8117 
8118 /// @brief Return a pointer to the lower internal node where Coord maps to one of its values, i.e. where traversal terminates
8119 /// @tparam BuildT Build type of the grid being called
8120 template<typename BuildT>
8121 struct GetLower
8122 {
8123  __hostdev__ static const NanoLower<BuildT>* get(const NanoRoot<BuildT>&) { return nullptr; }
8124  __hostdev__ static const NanoLower<BuildT>* get(const typename NanoRoot<BuildT>::Tile&) { return nullptr; }
8125  __hostdev__ static const NanoLower<BuildT>* get(const NanoUpper<BuildT>&, uint32_t) { return nullptr; }
8126  __hostdev__ static const NanoLower<BuildT>* get(const NanoLower<BuildT>& node, uint32_t) { return &node; }
8127  __hostdev__ static const NanoLower<BuildT>* get(const NanoLeaf<BuildT>&, uint32_t) { return nullptr; }
8128 }; // GetLower<BuildT>
8129 
8130 /// @brief Return a pointer to the upper internal node where Coord maps to one of its values, i.e. where traversal terminates
8131 /// @tparam BuildT Build type of the grid being called
8132 template<typename BuildT>
8133 struct GetUpper
8134 {
8135  __hostdev__ static const NanoUpper<BuildT>* get(const NanoRoot<BuildT>&) { return nullptr; }
8136  __hostdev__ static const NanoUpper<BuildT>* get(const typename NanoRoot<BuildT>::Tile&) { return nullptr; }
8137  __hostdev__ static const NanoUpper<BuildT>* get(const NanoUpper<BuildT>& node, uint32_t) { return &node; }
8138  __hostdev__ static const NanoUpper<BuildT>* get(const NanoLower<BuildT>&, uint32_t) { return nullptr; }
8139  __hostdev__ static const NanoUpper<BuildT>* get(const NanoLeaf<BuildT>&, uint32_t) { return nullptr; }
8140 }; // GetUpper<BuildT>
8141 
8142 /// @brief Implements Tree::probeValue(Coord)
8143 /// @tparam BuildT Build type of the grid being called
8144 template<typename BuildT>
8145 struct ProbeValue
8146 {
8147  using ValueT = typename BuildToValueMap<BuildT>::Type;
8148  __hostdev__ static bool get(const NanoRoot<BuildT>& root, ValueT& v)
8149  {
8150  v = root.mBackground;
8151  return false;
8152  }
8153  __hostdev__ static bool get(const typename NanoRoot<BuildT>::Tile& tile, ValueT& v)
8154  {
8155  v = tile.value;
8156  return tile.state > 0u;
8157  }
8158  __hostdev__ static bool get(const NanoUpper<BuildT>& node, uint32_t n, ValueT& v)
8159  {
8160  v = node.mTable[n].value;
8161  return node.mValueMask.isOn(n);
8162  }
8163  __hostdev__ static bool get(const NanoLower<BuildT>& node, uint32_t n, ValueT& v)
8164  {
8165  v = node.mTable[n].value;
8166  return node.mValueMask.isOn(n);
8167  }
8168  __hostdev__ static bool get(const NanoLeaf<BuildT>& leaf, uint32_t n, ValueT& v)
8169  {
8170  v = leaf.getValue(n);
8171  return leaf.mValueMask.isOn(n);
8172  }
8173 }; // ProbeValue<BuildT>
8174 
8175 /// @brief Implements Tree::getNodeInfo(Coord)
8176 /// @tparam BuildT Build type of the grid being called
8177 template<typename BuildT>
8178 struct GetNodeInfo
8179 {
8180  using ValueType = typename NanoLeaf<BuildT>::ValueType;
8181  using FloatType = typename NanoLeaf<BuildT>::FloatType;
8182  struct NodeInfo
8183  {
8184  uint32_t level, dim;
8185  ValueType minimum, maximum;
8186  FloatType average, stdDevi;
8187  CoordBBox bbox;
8188  };
8189  __hostdev__ static NodeInfo get(const NanoRoot<BuildT>& root)
8190  {
8191  return NodeInfo{3u, NanoUpper<BuildT>::DIM, root.minimum(), root.maximum(), root.average(), root.stdDeviation(), root.bbox()};
8192  }
8193  __hostdev__ static NodeInfo get(const typename NanoRoot<BuildT>::Tile& tile)
8194  {
8195  return NodeInfo{3u, NanoUpper<BuildT>::DIM, tile.value, tile.value, static_cast<FloatType>(tile.value), 0, CoordBBox::createCube(tile.origin(), NanoUpper<BuildT>::DIM)};
8196  }
8197  __hostdev__ static NodeInfo get(const NanoUpper<BuildT>& node, uint32_t n)
8198  {
8199  return NodeInfo{2u, node.dim(), node.minimum(), node.maximum(), node.average(), node.stdDeviation(), node.bbox()};
8200  }
8201  __hostdev__ static NodeInfo get(const NanoLower<BuildT>& node, uint32_t n)
8202  {
8203  return NodeInfo{1u, node.dim(), node.minimum(), node.maximum(), node.average(), node.stdDeviation(), node.bbox()};
8204  }
8205  __hostdev__ static NodeInfo get(const NanoLeaf<BuildT>& leaf, uint32_t n)
8206  {
8207  return NodeInfo{0u, leaf.dim(), leaf.minimum(), leaf.maximum(), leaf.average(), leaf.stdDeviation(), leaf.bbox()};
8208  }
8209 }; // GetNodeInfo<BuildT>
8210 
8211 } // namespace nanovdb
8212 
8213 #endif // end of NANOVDB_NANOVDB_H_HAS_BEEN_INCLUDED
#define NANOVDB_MAGIC_NUMBER
Definition: NanoVDB.h:126
__hostdev__ const MaskType< LOG2DIM > & valueMask() const
Return a const reference to the bit mask of active voxels in this internal node.
Definition: NanoVDB.h:5159
__hostdev__ BBox(BBox &other, const SplitT &)
Definition: NanoVDB.h:2413
__hostdev__ Iterator end() const
Definition: NanoVDB.h:2402
__hostdev__ Vec4 & operator*=(const T &s)
Definition: NanoVDB.h:1781
__hostdev__ bool isSequential() const
return true if the specified node type is layed out breadth-first in memory and has a fixed size...
Definition: NanoVDB.h:3834
typename UpperNodeType::ChildNodeType LowerNodeType
Definition: NanoVDB.h:3692
__hostdev__ FloatType variance() const
Return the variance of all the active values encoded in this internal node and any of its child nodes...
Definition: NanoVDB.h:5179
static __hostdev__ uint32_t voxelCount()
Return the total number of voxels (e.g. values) encoded in this leaf node.
Definition: NanoVDB.h:6224
LeafData< BuildT, Coord, Mask, 3 > DataType
Definition: NanoVDB.h:6023
__hostdev__ bool isCached(const CoordType &ijk) const
Definition: NanoVDB.h:6736
static __hostdev__ Coord max()
Definition: NanoVDB.h:1320
__hostdev__ CoordType getCoord() const
Definition: NanoVDB.h:5136
__hostdev__ ReadAccessor(const RootT &root)
Constructor from a root node.
Definition: NanoVDB.h:6563
typename RootT::ValueType ValueType
Definition: NanoVDB.h:6545
__hostdev__ uint32_t getDim(const CoordType &ijk, const RayT &ray) const
Definition: NanoVDB.h:7420
__hostdev__ const Vec3T & min() const
Definition: NanoVDB.h:2222
auto data() FMT_NOEXCEPT-> T *
Definition: core.h:808
__hostdev__ const TreeType & tree() const
Return a const reference to the tree of the IndexGrid.
Definition: NanoVDB.h:7752
__hostdev__ void setValue(const CoordT &ijk, const ValueType &v)
Sets the value at the specified location and activate its state.
Definition: NanoVDB.h:6251
__hostdev__ bool isOff() const
Return true if none of the bits are set in this Mask.
Definition: NanoVDB.h:2980
__hostdev__ const GridClass & gridClass() const
Definition: NanoVDB.h:7542
Trait use to remove reference, i.e. "&", qualifier from a type. Default implementation is just a pass...
Definition: NanoVDB.h:558
__hostdev__ auto set(const CoordType &ijk, ArgsT &&...args) const
Definition: NanoVDB.h:6820
__hostdev__ BBox< CoordT > bbox() const
Return the bounding box in index space of active values in this leaf node.
Definition: NanoVDB.h:6210
void writeUncompressedGrids(const char *fileName, const VecT< GridHandleT > &handles, bool raw=false)
write multiple NanoVDB grids to a single file, without compression.
Definition: NanoVDB.h:7938
A simple vector class with three components, similar to openvdb::math::Vec3.
Definition: NanoVDB.h:1279
static __hostdev__ uint64_t memUsage(uint32_t tableSize)
Return the expected memory footprint in bytes with the specified number of tiles. ...
Definition: NanoVDB.h:4609
typedef int(APIENTRYP RE_PFNGLXSWAPINTERVALSGIPROC)(int)
__hostdev__ int32_t z() const
Definition: NanoVDB.h:1314
__hostdev__ DenseIter(RootT *parent)
Definition: NanoVDB.h:4525
__hostdev__ ValueOnIterator beginValueOn() const
Definition: NanoVDB.h:5094
__hostdev__ Vec4 operator/(const Vec4 &v) const
Definition: NanoVDB.h:1760
__hostdev__ bool isMaskOn(std::initializer_list< MaskT > list) const
return true if any of the masks in the list are on
Definition: NanoVDB.h:2776
typename T::ValueType ElementType
Definition: NanoVDB.h:1967
GridBlindDataClass
Blind-data Classes that are currently supported by NanoVDB.
Definition: NanoVDB.h:393
__hostdev__ uint32_t pos() const
Definition: NanoVDB.h:2855
__hostdev__ bool hasLongGridName() const
Definition: NanoVDB.h:3826
__hostdev__ uint64_t pointCount() const
Definition: NanoVDB.h:5974
__hostdev__ const NodeTrait< RootT, LEVEL >::type * getFirstNode() const
return a const pointer to the first node of the specified level
Definition: NanoVDB.h:4118
Vec4()=default
__hostdev__ Vec3T applyJacobianF(const Vec3T &ijk) const
Apply the linear forward 3x3 transformation to an input 3d vector using 32bit floating point arithmet...
Definition: NanoVDB.h:3220
GLenum GLuint GLenum GLsizei const GLchar * buf
Definition: glcorearb.h:2540
__hostdev__ ReadAccessor(const TreeT &tree)
Constructor from a tree.
Definition: NanoVDB.h:6905
__hostdev__ bool safeCast() const
return true if the RootData follows right after the TreeData. If so, this implies that it's safe to c...
Definition: NanoVDB.h:7528
__hostdev__ ChildIter(RootT *parent)
Definition: NanoVDB.h:4382
__hostdev__ ValueType getValue(const CoordType &ijk) const
Return the value of the given voxel (regardless of state or location in the tree.) ...
Definition: NanoVDB.h:4030
__hostdev__ bool isActive(const CoordType &ijk) const
Definition: NanoVDB.h:4621
__hostdev__ const BBox< Vec3d > & worldBBox() const
Definition: NanoVDB.h:7562
__hostdev__ Vec3T indexToWorldDirF(const Vec3T &dir) const
transformation from index space direction to world space direction
Definition: NanoVDB.h:3788
__hostdev__ const DataType * data() const
Definition: NanoVDB.h:3711
__hostdev__ void setOn(uint32_t offset)
Definition: NanoVDB.h:5755
__hostdev__ int findBlindDataForSemantic(GridBlindDataSemantic semantic) const
Return the index of the first blind data with specified semantic if found, otherwise -1...
Definition: NanoVDB.h:3897
__hostdev__ bool isCached(const CoordType &ijk) const
Definition: NanoVDB.h:7264
cvex test(vector P=0;int unbound=3;export float s=0;export vector Cf=0;)
Definition: test.vfl:11
__hostdev__ void setMask(MaskT mask, bool on)
Definition: NanoVDB.h:2764
__hostdev__ Vec4(T x)
Definition: NanoVDB.h:1717
__hostdev__ void extrema(ValueType &min, ValueType &max) const
Sets the extrema values of all the active values in this tree, i.e. in all nodes of the tree...
Definition: NanoVDB.h:4149
typename DataType::Tile Tile
Definition: NanoVDB.h:4333
__hostdev__ GridClass mapToGridClass(GridClass defaultClass=GridClass::Unknown)
Maps from a templated build type to a GridClass enum.
Definition: NanoVDB.h:2091
__hostdev__ uint64_t leafPoints(const Coord &ijk, const AttT *&begin, const AttT *&end) const
Return the number of points in the leaf node containing the coordinate ijk. If this return value is l...
Definition: NanoVDB.h:7680
__hostdev__ Vec3< float > asVec3s() const
Return a single precision floating-point vector of this coordinate.
Definition: NanoVDB.h:1693
__hostdev__ Vec4(T x, T y, T z, T w)
Definition: NanoVDB.h:1721
__hostdev__ const ValueType & minimum() const
Return a const reference to the minimum active value encoded in this internal node and any of its chi...
Definition: NanoVDB.h:5170
__hostdev__ Mask & operator-=(const Mask &other)
Bitwise difference.
Definition: NanoVDB.h:3069
#define NANOVDB_MAJOR_VERSION_NUMBER
Definition: NanoVDB.h:133
__hostdev__ LeafNodeType * getFirstLeaf()
Template specializations of getFirstNode.
Definition: NanoVDB.h:4124
__hostdev__ const T * asPointer() const
return a const raw constant pointer to array of three vector components
Definition: NanoVDB.h:1670
__hostdev__ AccessorType getAccessor() const
Definition: NanoVDB.h:4027
__hostdev__ Rgba8()
Default ctor initializes all channels to zero.
Definition: NanoVDB.h:1867
__hostdev__ const TreeT & tree() const
Return a const reference to the tree.
Definition: NanoVDB.h:3740
__hostdev__ uint32_t pos() const
Definition: NanoVDB.h:4355
static __hostdev__ auto set(NanoRoot< BuildT > &, const ValueT &)
Definition: NanoVDB.h:8075
__hostdev__ BaseBBox & translate(const Vec3T &xyz)
Definition: NanoVDB.h:2224
__hostdev__ ValueType getValue(const CoordType &ijk) const
Return the value of the given voxel.
Definition: NanoVDB.h:5203
__hostdev__ ValueType min() const
Return the smallest vector component.
Definition: NanoVDB.h:1639
__hostdev__ bool isActive(const CoordType &ijk) const
Definition: NanoVDB.h:6599
__hostdev__ BaseIter(DataT *data=nullptr, uint32_t n=0)
Definition: NanoVDB.h:4346
__hostdev__ const FloatType & stdDeviation() const
Return a const reference to the standard deviation of all the active values encoded in this root node...
Definition: NanoVDB.h:4606
typename DataType::ValueType ValueType
Definition: NanoVDB.h:6024
__hostdev__ ValueType getValue(int i, int j, int k) const
Definition: NanoVDB.h:7277
const typename GridOrTreeOrRootT::LeafNodeType type
Definition: NanoVDB.h:3401
__hostdev__ CoordType getOrigin() const
Definition: NanoVDB.h:5047
__hostdev__ Vec3(const Coord &ijk)
Definition: NanoVDB.h:1538
__hostdev__ BaseBBox & intersect(const BaseBBox &bbox)
Intersect this bounding box with the given bounding box.
Definition: NanoVDB.h:2247
Bit-mask to encode active states and facilitate sequential iterators and a fast codec for I/O compres...
Definition: NanoVDB.h:2805
Signed (i, j, k) 32-bit integer coordinate class, similar to openvdb::math::Coord.
Definition: NanoVDB.h:1282
__hostdev__ uint32_t getMajor() const
Definition: NanoVDB.h:951
typename DataType::BuildType BuildType
Definition: NanoVDB.h:6026
__hostdev__ void setOff()
Set all bits off.
Definition: NanoVDB.h:3028
__hostdev__ Vec3T & max()
Definition: NanoVDB.h:2221
typename DataType::BuildT BuildType
Definition: NanoVDB.h:4961
Metafunction used to determine if the first template parameter is a specialization of the class templ...
Definition: NanoVDB.h:614
Trait used to transfer the const-ness of a reference type to another type.
Definition: NanoVDB.h:588
__hostdev__ bool probeValue(const CoordType &ijk, ValueType &v) const
return the state and updates the value of the specified voxel
Definition: NanoVDB.h:4623
__hostdev__ BlindDataT * getBlindData(uint32_t n)
Definition: NanoVDB.h:3884
__hostdev__ Mask()
Initialize all bits to zero.
Definition: NanoVDB.h:2911
__hostdev__ Vec4 & operator=(const Vec4T< T2 > &rhs)
Definition: NanoVDB.h:1739
gridName(grid.gridName())
Definition: IO.h:332
__hostdev__ bool operator==(const Vec3 &rhs) const
Definition: NanoVDB.h:1542
__hostdev__ auto set(const uint32_t n, ArgsT &&...args)
Definition: NanoVDB.h:6317
__hostdev__ Version version() const
Definition: NanoVDB.h:7572
Definition: ImathVec.h:32
OIIO_NAMESPACE_BEGIN typedef std::ifstream ifstream
Definition: filesystem.h:57
__hostdev__ const Vec3d & voxelSize() const
Return a vector of the axial voxel sizes.
Definition: NanoVDB.h:7755
const typename GridOrTreeOrRootT::RootNodeType type
Definition: NanoVDB.h:3445
#define SEEK_CUR
Definition: zconf.h:181
Dummy type for a 8bit quantization of float point values.
Definition: NanoVDB.h:273
__hostdev__ bool isInside(const CoordT &p) const
Definition: NanoVDB.h:2449
__hostdev__ ChannelT & operator()(const Coord &ijk) const
Definition: NanoVDB.h:7779
__hostdev__ bool isCached1(const CoordType &ijk) const
Definition: NanoVDB.h:6955
__hostdev__ ReadAccessor(const GridT &grid)
Constructor from a grid.
Definition: NanoVDB.h:7197
__hostdev__ const NanoGrid< IndexT > & grid() const
Return a const reference to the IndexGrid.
Definition: NanoVDB.h:7749
__hostdev__ CoordT Round(const Vec3T< RealT > &xyz)
__hostdev__ const char * shortGridName() const
Definition: NanoVDB.h:7560
typename DataType::StatsT FloatType
Definition: NanoVDB.h:4960
double mMatD[9]
Definition: NanoVDB.h:3145
__hostdev__ Coord round() const
Definition: NanoVDB.h:1503
void
Definition: png.h:1083
GridType
List of types that are currently supported by NanoVDB.
Definition: NanoVDB.h:294
__hostdev__ uint64_t checksum() const
Return checksum of the grid buffer.
Definition: NanoVDB.h:3851
Struct to derive node type from its level in a given grid, tree or root while preserving constness...
Definition: NanoVDB.h:3386
__hostdev__ Vec4 & maxComponent(const Vec4 &other)
Perform a component-wise maximum with the other Coord.
Definition: NanoVDB.h:1806
IMATH_HOSTDEVICE constexpr int floor(T x) IMATH_NOEXCEPT
Definition: ImathFun.h:112
typename GridT::TreeType Type
Definition: NanoVDB.h:3962
__hostdev__ uint32_t id() const
Definition: NanoVDB.h:950
typename DataType::FloatType FloatType
Definition: NanoVDB.h:6025
GLboolean * data
Definition: glcorearb.h:131
Trait use to remove pointer, i.e. "*", qualifier from a type. Default implementation is just a pass-t...
Definition: NanoVDB.h:572
void skip(T &in, int n)
Definition: ImfXdr.h:613
__hostdev__ ValueIterator & operator++()
Definition: NanoVDB.h:6135
__hostdev__ void setOff()
Definition: NanoVDB.h:2728
GridClass
Classes (superset of OpenVDB) that are currently supported by NanoVDB.
Definition: NanoVDB.h:339
const GLdouble * v
Definition: glcorearb.h:837
__hostdev__ Coord offsetBy(ValueType n) const
Definition: NanoVDB.h:1468
auto printf(const S &fmt, const T &...args) -> int
Definition: printf.h:626
__hostdev__ Vec3T matMultT(const float *mat, const Vec3T &xyz)
Multiply the transposed of a 3x3 matrix and a 3d vector using 32bit floating point arithmetics...
Definition: NanoVDB.h:2172
__hostdev__ NodeT & operator*() const
Definition: NanoVDB.h:4389
GridFlags
Grid flags which indicate what extra information is present in the grid buffer.
Definition: NanoVDB.h:364
UT_StringArray JOINTS head
__hostdev__ DenseIter operator++(int)
Definition: NanoVDB.h:4553
int32_t ValueType
Definition: NanoVDB.h:1286
RootT RootNodeType
Definition: NanoVDB.h:3986
void set(const Mat4T &mat, const Mat4T &invMat, double taper=1.0)
Initialize the member data from 4x4 matrices.
Definition: NanoVDB.h:3183
NANOVDB_HOSTDEV_DISABLE_WARNING __hostdev__ uint32_t findNext(uint32_t start) const
Definition: NanoVDB.h:3100
__hostdev__ void setBitOn(std::initializer_list< uint8_t > list)
Definition: NanoVDB.h:2733
#define NANOVDB_PATCH_VERSION_NUMBER
Definition: NanoVDB.h:135
__hostdev__ const LeafNodeType * probeLeaf(const CoordType &ijk) const
Definition: NanoVDB.h:5207
__hostdev__ DataType * data()
Definition: NanoVDB.h:3709
__hostdev__ Vec3T applyInverseMap(const Vec3T &xyz) const
Apply the inverse affine mapping to a vector using 64bit floating point arithmetics.
Definition: NanoVDB.h:3228
__hostdev__ const T & operator[](int i) const
Definition: NanoVDB.h:1553
__hostdev__ Coord(ValueType i, ValueType j, ValueType k)
Initializes coordinate to the given signed integers.
Definition: NanoVDB.h:1302
__hostdev__ Version version() const
Definition: NanoVDB.h:3707
Node caching at all (three) tree levels.
Definition: NanoVDB.h:7152
GLuint start
Definition: glcorearb.h:475
GLsizei const GLfloat * value
Definition: glcorearb.h:824
std::ofstream ofstream
Definition: filesystem.h:58
__hostdev__ ValueType getFirstValue() const
If the first entry in this node's table is a tile, return the tile's value. Otherwise, return the result of calling getFirstValue() on the child.
Definition: NanoVDB.h:5189
TreeData DataType
Definition: NanoVDB.h:3984
__hostdev__ ValueIterator(const LeafNode *parent)
Definition: NanoVDB.h:6112
__hostdev__ Vec3< T2 > operator/(T1 scalar, const Vec3< T2 > &vec)
Definition: NanoVDB.h:1679
__hostdev__ const uint32_t & activeTileCount(uint32_t level) const
Return the total number of active tiles at the specified level of the tree.
Definition: NanoVDB.h:4059
__hostdev__ ValueType getValue(const CoordType &ijk) const
Definition: NanoVDB.h:6744
Leaf nodes of the VDB tree. (defaults to 8x8x8 = 512 voxels)
Definition: NanoVDB.h:6013
Return the pointer to the leaf node that contains Coord. Implements Tree::probeLeaf(Coord) ...
Definition: NanoVDB.h:3461
__hostdev__ void localToGlobalCoord(Coord &ijk) const
Converts (in place) a local index coordinate to a global index coordinate.
Definition: NanoVDB.h:6199
__hostdev__ bool isActive() const
Definition: NanoVDB.h:6129
const GLuint GLenum const void * binary
Definition: glcorearb.h:1924
NANOVDB_HOSTDEV_DISABLE_WARNING __hostdev__ uint32_t findFirst() const
Definition: NanoVDB.h:3089
__hostdev__ bool is_divisible() const
Definition: NanoVDB.h:2432
__hostdev__ uint32_t operator*() const
Definition: NanoVDB.h:2882
__hostdev__ uint32_t gridCount() const
Return total number of grids in the buffer.
Definition: NanoVDB.h:3723
float mInvMatF[9]
Definition: NanoVDB.h:3142
__hostdev__ ValueOnIter(RootT *parent)
Definition: NanoVDB.h:4480
typename remove_const< T >::type type
Definition: NanoVDB.h:590
__hostdev__ void set(bool on)
Set all bits off.
Definition: NanoVDB.h:3035
typename RootType::LeafNodeType LeafNodeType
Definition: NanoVDB.h:3693
static constexpr bool is_special
Definition: NanoVDB.h:462
__hostdev__ bool operator!=(const Coord &rhs) const
Definition: NanoVDB.h:1376
typename RootT::CoordType CoordType
Definition: NanoVDB.h:6546
RootT Node3
Definition: NanoVDB.h:3995
__hostdev__ const NodeTrait< RootT, 1 >::type * getFirstLower() const
Definition: NanoVDB.h:4127
uint8_t c[4]
Definition: NanoVDB.h:1844
vfloat4 sqrt(const vfloat4 &a)
Definition: simd.h:7481
GLdouble GLdouble GLdouble z
Definition: glcorearb.h:848
__hostdev__ BaseBBox & expand(const BaseBBox &bbox)
Expand this bounding box to enclose the given bounding box.
Definition: NanoVDB.h:2239
__hostdev__ bool isInside(const Vec3T &p) const
Definition: NanoVDB.h:2321
__hostdev__ BaseBBox & expand(const Vec3T &xyz)
Expand this bounding box to enclose point xyz.
Definition: NanoVDB.h:2231
GLboolean GLboolean g
Definition: glcorearb.h:1222
__hostdev__ ValueType getValue(uint32_t offset) const
Return the voxel value at the given offset.
Definition: NanoVDB.h:6238
const typename GridT::TreeType type
Definition: NanoVDB.h:3969
__hostdev__ bool probeValue(const CoordType &ijk, ValueType &v) const
Return true if this tree is empty, i.e. contains no values or nodes.
Definition: NanoVDB.h:4040
const typename GridOrTreeOrRootT::RootNodeType Type
Definition: NanoVDB.h:3444
__hostdev__ bool operator==(const Coord &rhs) const
Definition: NanoVDB.h:1375
CoordT mBBoxMin
Definition: NanoVDB.h:5735
defines a tree type from a grid type while preserving constness
Definition: NanoVDB.h:3960
__hostdev__ bool operator<=(const Iterator &rhs) const
Definition: NanoVDB.h:2392
static constexpr bool is_onindex
Definition: NanoVDB.h:452
__hostdev__ uint32_t nodeCount() const
Definition: NanoVDB.h:4066
__hostdev__ int32_t y() const
Definition: NanoVDB.h:1313
GLint level
Definition: glcorearb.h:108
__hostdev__ bool hasOverlap(const BBox &b) const
Return true if the given bounding box overlaps with this bounding box.
Definition: NanoVDB.h:2457
__hostdev__ bool isActive(const CoordType &ijk) const
Definition: NanoVDB.h:5204
const typename GridT::TreeType Type
Definition: NanoVDB.h:3968
Dummy type for a voxel whose value equals an offset into an external value array of active values...
Definition: NanoVDB.h:255
Like ValueOnIndex but with a mutable mask.
Definition: NanoVDB.h:261
Highest level of the data structure. Contains a tree and a world->index transform (that currently onl...
Definition: NanoVDB.h:3685
__hostdev__ bool getAvg() const
Definition: NanoVDB.h:5748
__hostdev__ uint32_t getPatch() const
Definition: NanoVDB.h:953
__hostdev__ uint64_t first(uint32_t i) const
Definition: NanoVDB.h:5975
Data encoded at the head of each segment of a file or stream.
Definition: NanoVDB.h:7835
__hostdev__ const CoordT & operator*() const
Definition: NanoVDB.h:2399
__hostdev__ T length() const
Definition: NanoVDB.h:1568
__hostdev__ Vec4 operator-() const
Definition: NanoVDB.h:1758
__hostdev__ Vec3 & minComponent(const Vec3 &other)
Perform a component-wise minimum with the other Coord.
Definition: NanoVDB.h:1616
GLboolean GLboolean GLboolean GLboolean a
Definition: glcorearb.h:1222
__hostdev__ const BBox< Coord > & indexBBox() const
Definition: NanoVDB.h:7563
__hostdev__ uint32_t getDim(const CoordType &ijk, const RayT &ray) const
Definition: NanoVDB.h:6804
GLdouble s
Definition: glad.h:3009
__hostdev__ OffIterator beginOff() const
Definition: NanoVDB.h:2906
__hostdev__ bool isIndex(GridType gridType)
Return true if the GridType maps to a special index type (not a POD integer type).
Definition: NanoVDB.h:832
static __hostdev__ BBox createCube(const CoordT &min, typename CoordT::ValueType dim)
Definition: NanoVDB.h:2422
__hostdev__ ValueType getValue(const CoordType &ijk) const
Return the value of the given voxel.
Definition: NanoVDB.h:4619
__hostdev__ bool isPointIndex() const
Definition: NanoVDB.h:3819
static __hostdev__ double value()
Definition: NanoVDB.h:1025
__hostdev__ ValueType operator*() const
Definition: NanoVDB.h:6052
__hostdev__ Mask & operator|=(const Mask &other)
Bitwise union.
Definition: NanoVDB.h:3060
GLuint GLsizei GLsizei * length
Definition: glcorearb.h:795
__hostdev__ void setMax(const bool &)
Definition: NanoVDB.h:5757
__hostdev__ Coord & minComponent(const Coord &other)
Perform a component-wise minimum with the other Coord.
Definition: NanoVDB.h:1424
__hostdev__ void setMaskOff(std::initializer_list< MaskT > list)
Definition: NanoVDB.h:2756
__hostdev__ Coord operator>>(IndexType n) const
Definition: NanoVDB.h:1352
__hostdev__ ReadAccessor(const TreeT &tree)
Constructor from a tree.
Definition: NanoVDB.h:6717
const char * c_str() const
returns a c-string of the semantic version, i.e. major.minor.patch
Definition: NanoVDB.h:962
__hostdev__ const DataType * data() const
Definition: NanoVDB.h:5150
typename GridT::TreeType type
Definition: NanoVDB.h:3963
__hostdev__ bool isMaskOff(MaskT mask) const
Definition: NanoVDB.h:2773
Implements Tree::probeLeaf(Coord)
Definition: NanoVDB.h:3463
ImageBuf OIIO_API min(Image_or_Const A, Image_or_Const B, ROI roi={}, int nthreads=0)
Vec3T mCoord[2]
Definition: NanoVDB.h:2215
Maps one type (e.g. the build types above) to other (actual) types.
Definition: NanoVDB.h:628
__hostdev__ uint64_t getIndex(const Coord &ijk) const
Return the linear offset into a channel that maps to the specified coordinate.
Definition: NanoVDB.h:7774
__hostdev__ void setOff(uint32_t n)
Set the specified bit off.
Definition: NanoVDB.h:2991
**But if you need a or simply need to know when the task has note that the like this
Definition: thread.h:617
__hostdev__ const DataType * data() const
Definition: NanoVDB.h:4008
static __hostdev__ auto set(NanoLeaf< BuildT > &leaf, uint32_t n, const ValueT &v)
Definition: NanoVDB.h:8067
__hostdev__ BitFlags & operator=(Type n)
required for backwards compatibility
Definition: NanoVDB.h:2793
__hostdev__ ValueType getValue(const CoordType &ijk) const
Definition: NanoVDB.h:6970
__hostdev__ CoordType getCoord() const
Definition: NanoVDB.h:5015
static constexpr uint32_t WORD_COUNT
Definition: NanoVDB.h:2809
MaskT< LOG2DIM > mValues
Definition: NanoVDB.h:5739
GLint y
Definition: glcorearb.h:103
__hostdev__ ValueType operator*() const
Definition: NanoVDB.h:6119
MaskT< LOG2DIM > mMask
Definition: NanoVDB.h:5924
__hostdev__ bool hasLongGridName() const
Definition: NanoVDB.h:7553
__hostdev__ bool isFogVolume() const
Definition: NanoVDB.h:3817
__hostdev__ T Sign(const T &x)
Return the sign of the given value as an integer (either -1, 0 or 1).
Definition: NanoVDB.h:1226
__hostdev__ bool isValueOn() const
Definition: NanoVDB.h:5126
__hostdev__ void initBit(std::initializer_list< uint8_t > list)
Definition: NanoVDB.h:2710
__hostdev__ int age() const
Returns the difference between major version of this instance and NANOVDB_MAJOR_VERSION_NUMBER.
Definition: NanoVDB.h:958
uint8_t mBBoxDif[3]
Definition: NanoVDB.h:5736
typename BuildT::BuildType BuildType
Definition: NanoVDB.h:3696
decltype(mFlags) Type
Definition: NanoVDB.h:2695
__hostdev__ const uint64_t * words() const
Definition: NanoVDB.h:2932
Trait used to transfer const-ness from one type to another. Default implementation is just a pass-through.
Definition: NanoVDB.h:538
__hostdev__ CoordType getCoord() const
Definition: NanoVDB.h:5052
__hostdev__ const T & operator[](int i) const
Definition: NanoVDB.h:1749
static __hostdev__ uint32_t wordCount()
Return the number of machine words used by this Mask.
Definition: NanoVDB.h:2818
__hostdev__ Vec3T applyInverseMapF(const Vec3T &xyz) const
Apply the inverse affine mapping to a vector using 32bit floating point arithmetic.
Definition: NanoVDB.h:3239
__hostdev__ ValueOnIter & operator++()
Definition: NanoVDB.h:4492
__hostdev__ ValueType getValue(int i, int j, int k) const
Definition: NanoVDB.h:4620
__hostdev__ void setValue(uint32_t offset, bool v)
Definition: NanoVDB.h:5750
__hostdev__ const NanoGrid< BuildT > & grid() const
Definition: NanoVDB.h:7599
static __hostdev__ bool lessThan(const Coord &a, const Coord &b)
Definition: NanoVDB.h:1472
static const int SIZE
Definition: NanoVDB.h:1713
__hostdev__ void setOn(uint32_t n)
Set the specified bit on.
Definition: NanoVDB.h:2989
static constexpr bool is_indexmask
Definition: NanoVDB.h:454
Visits all tile values in this node, i.e. both inactive and active tiles.
Definition: NanoVDB.h:5025
Visits all inactive values in a leaf node.
Definition: NanoVDB.h:6068
Return a pointer to the lower internal node where the Coord maps to one of its values, i.e. where traversal terminates.
Definition: NanoVDB.h:8121
__hostdev__ const BBoxType & bbox() const
Return a const reference to the index bounding box of all the active values in this tree...
Definition: NanoVDB.h:4581
__hostdev__ DenseIterator(uint32_t pos=Mask::SIZE)
Definition: NanoVDB.h:2877
GLdouble GLdouble GLdouble q
Definition: glad.h:2445
__hostdev__ DenseIterator(const InternalNode *parent)
Definition: NanoVDB.h:5109
__hostdev__ bool isUnknown() const
Definition: NanoVDB.h:7550
__hostdev__ bool isOn() const
Definition: NanoVDB.h:2766
__hostdev__ const ValueType & operator[](IndexType i) const
Return a const reference to the given Coord component.
Definition: NanoVDB.h:1328
__hostdev__ ChildIterator beginChild()
Definition: NanoVDB.h:4418
__hostdev__ Vec3 operator/(const Vec3 &v) const
Definition: NanoVDB.h:1571
static ElementType scalar(const T &v)
Definition: NanoVDB.h:1968
__hostdev__ bool isActive(const CoordType &ijk) const
Definition: NanoVDB.h:6978
OIIO_FORCEINLINE vbool4 insert(const vbool4 &a, bool val)
Helper: substitute val for a[i].
Definition: simd.h:3436
__hostdev__ bool isValid(const GridBlindDataClass &blindClass, const GridBlindDataSemantic &blindSemantics, const GridType &blindType)
return true if the combination of GridBlindDataClass, GridBlindDataSemantic and GridType is valid...
Definition: NanoVDB.h:883
__hostdev__ Vec4 & normalize()
Definition: NanoVDB.h:1790
__hostdev__ bool hasBBox() const
Definition: NanoVDB.h:3825
double mInvMatD[9]
Definition: NanoVDB.h:3146
__hostdev__ Vec3T matMult(const float *mat, const Vec3T &xyz)
Multiply a 3x3 matrix and a 3d vector using 32bit floating point arithmetic.
Definition: NanoVDB.h:2114
__hostdev__ const RootT & root() const
Definition: NanoVDB.h:7208
__hostdev__ const ValueType & maximum() const
Return a const reference to the maximum active value encoded in this internal node and any of its child nodes.
Definition: NanoVDB.h:5173
Codec
Define compression codecs.
Definition: NanoVDB.h:7826
typename match_const< DataType, RootT >::type DataT
Definition: NanoVDB.h:4342
Dummy type for a variable bit quantization of floating point values.
Definition: NanoVDB.h:279
__hostdev__ Iterator(uint32_t pos, const Mask *parent)
Definition: NanoVDB.h:2847
__hostdev__ bool isStaggered() const
Definition: NanoVDB.h:7545
__hostdev__ Vec3T applyIJTF(const Vec3T &xyz) const
Definition: NanoVDB.h:3271
Trait used to identify template parameter that are pointers.
Definition: NanoVDB.h:511
__hostdev__ void setValueOnly(uint32_t offset, uint16_t value)
Definition: NanoVDB.h:5978
static __hostdev__ bool safeCast(const GridData *gridData)
return true if it is safe to cast the grid to a pointer of type GridMetaData, i.e. construction can be avoided.
Definition: NanoVDB.h:7532
const typename GridOrTreeOrRootT::LeafNodeType Type
Definition: NanoVDB.h:3400
__hostdev__ const Vec3T & operator[](int i) const
Definition: NanoVDB.h:2218
__hostdev__ const LeafT * probeLeaf(const CoordType &ijk) const
Definition: NanoVDB.h:6980
__hostdev__ const FloatType & stdDeviation() const
Return a const reference to the standard deviation of all the active values encoded in this internal node and any of its child nodes.
Definition: NanoVDB.h:5182
#define NANOVDB_HOSTDEV_DISABLE_WARNING
Definition: NanoVDB.h:234
GLuint buffer
Definition: glcorearb.h:660
static const int SIZE
Definition: NanoVDB.h:1515
__hostdev__ bool getDev() const
Definition: NanoVDB.h:5749
__hostdev__ void setDev(const bool &)
Definition: NanoVDB.h:5759
typename RootT::CoordType CoordType
Definition: NanoVDB.h:3992
__hostdev__ uint32_t nodeCount(int level) const
Definition: NanoVDB.h:4072
__hostdev__ Vec4(const Vec4< T2 > &v)
Definition: NanoVDB.h:1726
static __hostdev__ uint32_t dim()
Definition: NanoVDB.h:6020
__hostdev__ ValueType operator*() const
Definition: NanoVDB.h:4438
__hostdev__ DenseIterator & operator++()
Definition: NanoVDB.h:2885
NANOVDB_HOSTDEV_DISABLE_WARNING __hostdev__ uint32_t CountOn(uint64_t v)
Definition: NanoVDB.h:2643
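CountOn above returns the number of set bits in a 64-bit word. As a minimal self-contained sketch of what such a per-word population count computes (the header itself dispatches to hardware popcount intrinsics where available; this portable SWAR fallback is an illustration, not the authoritative implementation):

```cpp
#include <cstdint>

// Count the set bits in a 64-bit word using the classic SWAR technique:
// sum bits in parallel within the word, then gather the byte sums.
inline uint32_t countOn(uint64_t v)
{
    v = v - ((v >> 1) & 0x5555555555555555ULL);                     // 2-bit sums
    v = (v & 0x3333333333333333ULL) + ((v >> 2) & 0x3333333333333333ULL); // 4-bit sums
    v = (v + (v >> 4)) & 0x0F0F0F0F0F0F0F0FULL;                     // 8-bit sums
    return uint32_t((v * 0x0101010101010101ULL) >> 56);             // total in top byte
}
```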
static __hostdev__ BBox createCube(const Coord &min, typename Coord::ValueType dim)
Definition: NanoVDB.h:2305
#define NANOVDB_ASSERT(x)
Definition: NanoVDB.h:190
__hostdev__ const NodeT * getNode() const
Return a const pointer to the cached node of the specified type.
Definition: NanoVDB.h:7219
__hostdev__ Mask & operator&=(const Mask &other)
Bitwise intersection.
Definition: NanoVDB.h:3051
__hostdev__ auto getNodeInfo(const CoordType &ijk) const
Definition: NanoVDB.h:6751
FMT_NOINLINE FMT_CONSTEXPR auto fill(OutputIt it, size_t n, const fill_t< Char > &fill) -> OutputIt
Definition: format.h:1262
__hostdev__ float Clamp(float x, float a, float b)
Definition: NanoVDB.h:1112
__hostdev__ BBox(const BaseBBox< Coord > &bbox)
Definition: NanoVDB.h:2310
__hostdev__ const RootT & root() const
Definition: NanoVDB.h:6729
__hostdev__ ValueType getLastValue() const
Return the last value in this leaf node.
Definition: NanoVDB.h:6246
__hostdev__ OnIterator beginOn() const
Definition: NanoVDB.h:2904
__hostdev__ ValueType getValue(const CoordType &ijk) const
Definition: NanoVDB.h:7273
__hostdev__ const Map & map() const
Return a const reference to the Map for this grid.
Definition: NanoVDB.h:3752
__hostdev__ Vec3d getVoxelSize() const
Return the voxel size in each coordinate direction, measured at the origin.
Definition: NanoVDB.h:3274
__hostdev__ bool hasStdDeviation() const
Definition: NanoVDB.h:3828
__hostdev__ uint64_t memUsage() const
Return the actual memory footprint of this root node.
Definition: NanoVDB.h:4612
__hostdev__ uint32_t hash() const
Return a hash key derived from the existing coordinates.
Definition: NanoVDB.h:1488
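Coord::hash above derives a hash key from the three coordinate components. A self-contained sketch of this style of spatial hash, which XORs each component multiplied by a large prime and masks the result down to a 2^Log2N-entry table (the constants mirror the widely used spatial-hashing primes; treat the exact values as an assumption about the header's implementation):

```cpp
#include <cstdint>

// Hash an (i,j,k) coordinate into [0, 2^Log2N). Unsigned arithmetic is used
// deliberately so the multiplications wrap instead of overflowing.
template<int Log2N>
inline uint32_t coordHash(int32_t i, int32_t j, int32_t k)
{
    const uint32_t h = uint32_t(i) * 73856093u ^
                       uint32_t(j) * 19349663u ^
                       uint32_t(k) * 83492791u;
    return ((1u << Log2N) - 1u) & h; // keep the low Log2N bits
}
```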
typename RootNodeType::ChildNodeType UpperNodeType
Definition: NanoVDB.h:3987
__hostdev__ float getValue(uint32_t i) const
Definition: NanoVDB.h:5578
__hostdev__ ConstValueOnIterator cbeginValueOn() const
Definition: NanoVDB.h:4512
__hostdev__ ValueType maximum() const
Return a const reference to the maximum active value encoded in this leaf node.
Definition: NanoVDB.h:6172
__hostdev__ void setValueOnly(const CoordT &ijk, const ValueType &v)
Definition: NanoVDB.h:6257
Define static boolean tests for template build types.
Definition: NanoVDB.h:448
__hostdev__ CoordType getCoord() const
Definition: NanoVDB.h:4363
__hostdev__ void setBitOff(uint8_t bit)
Definition: NanoVDB.h:2731
__hostdev__ uint32_t totalNodeCount() const
Definition: NanoVDB.h:4078
__hostdev__ Vec3T worldToIndexF(const Vec3T &xyz) const
world to index space transformation
Definition: NanoVDB.h:3779
ReadAccessor< ValueT, LEVEL0, LEVEL1, LEVEL2 > createAccessor(const NanoGrid< ValueT > &grid)
Free-standing function for convenient creation of a ReadAccessor with optional and customizable node caching.
Definition: NanoVDB.h:7474
__hostdev__ Vec4 operator/(const T &s) const
Definition: NanoVDB.h:1764
__hostdev__ uint32_t getDim(const CoordType &ijk, const RayT &ray) const
Definition: NanoVDB.h:6641
__hostdev__ const MaskType< LOG2DIM > & getValueMask() const
Definition: NanoVDB.h:6166
__hostdev__ void clear()
Reset this accessor to its initial state, i.e. with an empty cache.
Definition: NanoVDB.h:7235
__hostdev__ Coord offsetToGlobalCoord(uint32_t n) const
Definition: NanoVDB.h:5271
typename GridOrTreeOrRootT::RootNodeType type
Definition: NanoVDB.h:3437
__hostdev__ const GridClass & gridClass() const
Definition: NanoVDB.h:3815
__hostdev__ bool operator==(const Mask &other) const
Definition: NanoVDB.h:2953
Iterator & operator=(const Iterator &)=default
Visits child nodes of this node only.
Definition: NanoVDB.h:4981
__hostdev__ uint64_t voxelPoints(const Coord &ijk, const AttT *&begin, const AttT *&end) const
get iterators over attributes to points at a specific voxel location
Definition: NanoVDB.h:7691
#define NANOVDB_DATA_ALIGNMENT
Definition: NanoVDB.h:154
__hostdev__ CoordT getCoord() const
Definition: NanoVDB.h:6057
__hostdev__ bool operator<(const Iterator &rhs) const
Definition: NanoVDB.h:2387
__hostdev__ Vec3 operator+(const Vec3 &v) const
Definition: NanoVDB.h:1572
__hostdev__ ConstValueIterator cbeginValueAll() const
Definition: NanoVDB.h:4468
static T scalar(const T &s)
Definition: NanoVDB.h:1957
__hostdev__ ValueType minimum() const
Return a const reference to the minimum active value encoded in this leaf node.
Definition: NanoVDB.h:6169
__hostdev__ bool isActive(const CoordType &ijk) const
Definition: NanoVDB.h:7281
__hostdev__ const uint32_t & getTableSize() const
Definition: NanoVDB.h:4591
__hostdev__ NodeTrait< RootT, LEVEL >::type * getFirstNode()
return a pointer to the first node at the specified level
Definition: NanoVDB.h:4108
__hostdev__ int32_t x() const
Definition: NanoVDB.h:1312
Vec3()=default
__hostdev__ Iterator()
Definition: NanoVDB.h:2842
bool operator==(const BaseDimensions< T > &a, const BaseDimensions< Y > &b)
Definition: Dimensions.h:137
__hostdev__ bool isLevelSet() const
Definition: NanoVDB.h:3816
typename NanoLeaf< BuildT >::ValueType ValueType
Definition: NanoVDB.h:8180
void set(const MatT &mat, const MatT &invMat, const Vec3T &translate, double taper=1.0)
Initialize the member data from 3x3 or 4x4 matrices.
Definition: NanoVDB.h:3278
Internal nodes of a VDB tree.
Definition: NanoVDB.h:4955
__hostdev__ const LeafT * probeLeaf(const CoordType &ijk) const
Definition: NanoVDB.h:7283
GA_API const UT_StringHolder scale
typename UpperNodeType::ChildNodeType LowerNodeType
Definition: NanoVDB.h:3988
LeafData()=delete
This class cannot be constructed or deleted.
__hostdev__ ValueIterator beginValue()
Definition: NanoVDB.h:4467
const typename GridOrTreeOrRootT::RootNodeType::ChildNodeType::ChildNodeType type
Definition: NanoVDB.h:3416
__hostdev__ BBox(const Coord &min, const Coord &max)
Definition: NanoVDB.h:2300
__hostdev__ void set(uint32_t n, bool on)
Set the specified bit on or off.
Definition: NanoVDB.h:3008
GLdouble n
Definition: glcorearb.h:2008
float mMatF[9]
Definition: NanoVDB.h:3141
__hostdev__ int32_t & z()
Definition: NanoVDB.h:1318
Class to access values in channels at a specific voxel location.
Definition: NanoVDB.h:7710
PointAccessor(const NanoGrid< BuildT > &grid)
Definition: NanoVDB.h:7586
static __hostdev__ uint32_t dim()
Return the dimension, in index space, of this leaf node (typically 8 as for openvdb leaf nodes!) ...
Definition: NanoVDB.h:6207
__hostdev__ int findBlindData(const char *name) const
Return the index of the first blind data with specified name if found, otherwise -1.
Definition: NanoVDB.h:3907
__hostdev__ uint64_t offset() const
Definition: NanoVDB.h:5973
GLfloat f
Definition: glcorearb.h:1926
__hostdev__ const RootT & root() const
Definition: NanoVDB.h:6922
__hostdev__ void toggle()
Toggle the state of all bits in the mask.
Definition: NanoVDB.h:3042
__hostdev__ void toggle(uint32_t n)
Definition: NanoVDB.h:3048
uint32_t packed
Definition: NanoVDB.h:1845
__hostdev__ DenseIterator beginDense()
Definition: NanoVDB.h:4564
typename RootNodeType::ChildNodeType UpperNodeType
Definition: NanoVDB.h:3691
__hostdev__ Vec3T indexToWorldDir(const Vec3T &dir) const
transformation from index space direction to world space direction
Definition: NanoVDB.h:3765
__hostdev__ ValueType getValue(int i, int j, int k) const
Definition: NanoVDB.h:6974
__hostdev__ CoordT offsetToGlobalCoord(uint32_t n) const
Definition: NanoVDB.h:6201
__hostdev__ uint32_t gridCount() const
Definition: NanoVDB.h:7559
__hostdev__ Vec3 cross(const Vec3T &v) const
Definition: NanoVDB.h:1558
__hostdev__ Type Min(Type a, Type b)
Definition: NanoVDB.h:1070
__hostdev__ Vec4 & operator/=(const T &s)
Definition: NanoVDB.h:1789
__hostdev__ auto set(const CoordType &ijk, ArgsT &&...args) const
Definition: NanoVDB.h:6652
__hostdev__ FloatType variance() const
Return the variance of all the active values encoded in this leaf node.
Definition: NanoVDB.h:6178
GridBlindDataSemantic
Blind-data Semantics that are currently understood by NanoVDB.
Definition: NanoVDB.h:401
GLintptr offset
Definition: glcorearb.h:665
BBox< Vec3d > worldBBox
Definition: NanoVDB.h:7865
BitFlags(std::initializer_list< uint8_t > list)
Definition: NanoVDB.h:2697
__hostdev__ uint32_t nodeCount(uint32_t level) const
Definition: NanoVDB.h:7568
__hostdev__ Coord(ValueType *ptr)
Definition: NanoVDB.h:1307
IMATH_HOSTDEVICE constexpr int trunc(T x) IMATH_NOEXCEPT
Definition: ImathFun.h:126
Definition: core.h:760
__hostdev__ ValueType operator()(const CoordType &ijk) const
Definition: NanoVDB.h:6596
__hostdev__ T * asPointer()
Return a non-const raw pointer to the array of three vector components.
Definition: NanoVDB.h:1668
typename GridOrTreeOrRootT::RootNodeType Type
Definition: NanoVDB.h:3436
typename GridOrTreeOrRootT::RootNodeType::ChildNodeType Type
Definition: NanoVDB.h:3422
GridMetaData(const NanoGrid< T > &grid)
Definition: NanoVDB.h:7506
__hostdev__ ValueType operator()(const CoordType &ijk) const
Definition: NanoVDB.h:6749
__hostdev__ CoordType getCoord() const
Definition: NanoVDB.h:5091
__hostdev__ ValueIterator cbeginValueAll() const
Definition: NanoVDB.h:6149
IMATH_NAMESPACE::V2f float
__hostdev__ Vec3 & operator+=(const Coord &ijk)
Definition: NanoVDB.h:1585
__hostdev__ bool isCached2(const CoordType &ijk) const
Definition: NanoVDB.h:6961
__hostdev__ void setMask(uint32_t offset, bool v)
Definition: NanoVDB.h:5927
__hostdev__ Vec3< double > asVec3d() const
Return a double precision floating-point vector of this coordinate.
Definition: NanoVDB.h:1699
__hostdev__ TileT * tile() const
Definition: NanoVDB.h:4357
static __hostdev__ auto set(NanoLower< BuildT > &node, uint32_t n, const ValueT &v)
Definition: NanoVDB.h:8066
__hostdev__ Vec3T indexToWorldGrad(const Vec3T &grad) const
transform the gradient from index space to world space.
Definition: NanoVDB.h:3775
__hostdev__ bool isOff(uint32_t n) const
Return true if the given bit is NOT set.
Definition: NanoVDB.h:2968
__hostdev__ bool operator!=(const Vec3 &rhs) const
Definition: NanoVDB.h:1543
__hostdev__ const MaskType< LOG2DIM > & getValueMask() const
Definition: NanoVDB.h:5160
__hostdev__ Vec3 & operator-=(const Coord &ijk)
Definition: NanoVDB.h:1599
__hostdev__ enable_if<!is_same< MaskT, Mask >::value, Mask & >::type operator=(const MaskT &other)
Assignment operator that works with openvdb::util::NodeMask.
Definition: NanoVDB.h:2936
RootData< ChildT > DataType
Definition: NanoVDB.h:4319
__hostdev__ Vec3 operator*(const T &s) const
Definition: NanoVDB.h:1576
__hostdev__ bool isMaskOn(uint32_t offset) const
Definition: NanoVDB.h:5926
typename RootType::LeafNodeType LeafNodeType
Definition: NanoVDB.h:3989
__hostdev__ ReadAccessor(const TreeT &tree)
Constructor from a tree.
Definition: NanoVDB.h:6575
typename ChildT::LeafNodeType LeafNodeType
Definition: NanoVDB.h:4325
__hostdev__ ValueType getValue(const CoordT &ijk) const
Return the voxel value at the given coordinate.
Definition: NanoVDB.h:6241
__hostdev__ ReadAccessor(const GridT &grid)
Constructor from a grid.
Definition: NanoVDB.h:6569
OIIO_UTIL_API FILE * fopen(string_view path, string_view mode)
Version of fopen that can handle UTF-8 paths even on Windows.
GA_API const UT_StringHolder trans
Visits active tile values of this node only.
Definition: NanoVDB.h:5064
__hostdev__ ValueType operator()(int i, int j, int k) const
Definition: NanoVDB.h:6976
__hostdev__ void setMin(const bool &)
Definition: NanoVDB.h:5756
BBox< Coord > CoordBBox
Definition: NanoVDB.h:2516
__hostdev__ const LeafNodeType * getFirstLeaf() const
Definition: NanoVDB.h:4125
__hostdev__ CoordType getOrigin() const
Definition: NanoVDB.h:5010
Return a pointer to the upper internal node where the Coord maps to one of its values, i.e. where traversal terminates.
Definition: NanoVDB.h:8133
__hostdev__ bool isActive() const
Definition: NanoVDB.h:4443
__hostdev__ Mask(const Mask &other)
Copy constructor.
Definition: NanoVDB.h:2924
__hostdev__ CoordT RoundDown(const Vec3T< RealT > &xyz)
Definition: NanoVDB.h:1207
__hostdev__ Vec4 operator-(const Vec4 &v) const
Definition: NanoVDB.h:1762
__hostdev__ void clear()
Reset this accessor to its initial state, i.e. with an empty cache.
Definition: NanoVDB.h:6723
__hostdev__ bool hasMinMax() const
Definition: NanoVDB.h:3824
__hostdev__ bool isActive() const
Return true if any of the voxel values are active in this leaf node.
Definition: NanoVDB.h:6264
#define NANOVDB_MAGIC_GRID
Definition: NanoVDB.h:127
__hostdev__ void next()
Definition: NanoVDB.h:4356
BitFlags(std::initializer_list< MaskT > list)
Definition: NanoVDB.h:2703
__hostdev__ ValueIter operator++(int)
Definition: NanoVDB.h:4456
InternalData< ChildT, Log2Dim > DataType
Definition: NanoVDB.h:4958
__hostdev__ Coord offsetBy(ValueType dx, ValueType dy, ValueType dz) const
Definition: NanoVDB.h:1463
__hostdev__ int32_t & y()
Definition: NanoVDB.h:1317
__hostdev__ uint64_t memUsage() const
return memory usage in bytes for the leaf node
Definition: NanoVDB.h:6229
__hostdev__ ConstDenseIterator cbeginChildAll() const
Definition: NanoVDB.h:4566
__hostdev__ Vec3T applyJacobian(const Vec3T &ijk) const
Apply the linear forward 3x3 transformation to an input 3d vector using 64bit floating point arithmetic.
Definition: NanoVDB.h:3211
__hostdev__ DenseIterator beginAll() const
Definition: NanoVDB.h:2908
__hostdev__ uint64_t leafPoints(const Coord &ijk, const AttT *&begin, const AttT *&end) const
Return the number of points in the leaf node containing the coordinate ijk. If this return value is larger than zero, the begin and end attribute pointers are valid.
Definition: NanoVDB.h:7613
__hostdev__ Iterator & operator++()
Definition: NanoVDB.h:2357
__hostdev__ CoordType getOrigin() const
Definition: NanoVDB.h:5086
__hostdev__ const char * shortGridName() const
Return a c-string with the name of this grid, truncated to 255 characters.
Definition: NanoVDB.h:3848
__hostdev__ void setBit(uint8_t bit, bool on)
Definition: NanoVDB.h:2762
__hostdev__ ValueType max() const
Return the largest vector component.
Definition: NanoVDB.h:1644
__hostdev__ Vec4 operator+(const Vec4 &v) const
Definition: NanoVDB.h:1761
static constexpr bool is_float
Definition: NanoVDB.h:460
__hostdev__ uint32_t countOn() const
Return the total number of set bits in this Mask.
Definition: NanoVDB.h:2821
__hostdev__ bool getMax() const
Definition: NanoVDB.h:5747
__hostdev__ ChannelT & operator()(int i, int j, int k) const
Definition: NanoVDB.h:7780
__hostdev__ DenseIterator operator++(int)
Definition: NanoVDB.h:2890
typename DataType::ValueT ValueType
Definition: NanoVDB.h:4959
typename BuildT::RootType RootType
Definition: NanoVDB.h:3689
__hostdev__ void setOrigin(const T &ijk)
Definition: NanoVDB.h:5762
uint64_t mPointCount
Definition: NanoVDB.h:5960
__hostdev__ void localToGlobalCoord(Coord &ijk) const
Converts local coordinates to the global coordinates of a tile or child node.
Definition: NanoVDB.h:5265
static __hostdev__ BBox createCube(typename CoordT::ValueType min, typename CoordT::ValueType max)
Definition: NanoVDB.h:2427
__hostdev__ void setOn()
Set all bits on.
Definition: NanoVDB.h:3021
__hostdev__ ValueType getValue(const CoordType &ijk) const
Definition: NanoVDB.h:6591
GLuint GLuint end
Definition: glcorearb.h:475
static T value()
Definition: NanoVDB.h:1058
__hostdev__ ValueOnIterator beginValueOn() const
Definition: NanoVDB.h:6064
double mVecD[3]
Definition: NanoVDB.h:3147
__hostdev__ uint64_t AlignUp(uint64_t byteCount)
Round up byteCount to the nearest multiple of wordSize, e.g. to align to a machine word: AlignUp&lt;sizeof(size_t)&gt;(n).
Definition: NanoVDB.h:1269
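AlignUp rounds a byte count up to a multiple of a power-of-two word size. A minimal sketch of the standard bit trick behind such alignment helpers (the template parameter name here is illustrative):

```cpp
#include <cstdint>

// Round byteCount up to the nearest multiple of wordSize, which must be a
// power of two so the mask trick (x + w - 1) & ~(w - 1) is valid.
template<uint64_t wordSize>
inline uint64_t alignUp(uint64_t byteCount)
{
    static_assert((wordSize & (wordSize - 1)) == 0, "wordSize must be a power of two");
    return (byteCount + wordSize - 1) & ~(wordSize - 1);
}
```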
__hostdev__ uint32_t rootTableSize() const
Definition: NanoVDB.h:7570
__hostdev__ Vec3 operator+(const Coord &ijk) const
Definition: NanoVDB.h:1574
Dummy type for indexing points into voxels.
Definition: NanoVDB.h:282
__hostdev__ Vec3T worldToIndexDir(const Vec3T &dir) const
transformation from world space direction to index space direction
Definition: NanoVDB.h:3770
__hostdev__ bool isOn() const
Return true if all the bits are set in this Mask.
Definition: NanoVDB.h:2971
__hostdev__ T Pow3(T x)
Definition: NanoVDB.h:1155
__hostdev__ Vec3T applyMap(const Vec3T &ijk) const
Apply the forward affine transformation to a vector using 64bit floating point arithmetic.
Definition: NanoVDB.h:3194
Defines an affine transform and its inverse represented as a 3x3 matrix and a vec3 translation...
Definition: NanoVDB.h:3139
__hostdev__ ReadAccessor(const RootT &root)
Constructor from a root node.
Definition: NanoVDB.h:6886
#define NANOVDB_MAGIC_FILE
Definition: NanoVDB.h:128
__hostdev__ const FloatType & average() const
Return a const reference to the average of all the active values encoded in this internal node and any of its child nodes.
Definition: NanoVDB.h:5176
__hostdev__ T Pow2(T x)
Definition: NanoVDB.h:1149
__hostdev__ Vec3T applyMapF(const Vec3T &ijk) const
Apply the forward affine transformation to a vector using 32bit floating point arithmetic.
Definition: NanoVDB.h:3202
static __hostdev__ auto set(NanoUpper< BuildT > &node, uint32_t n, const ValueT &v)
Definition: NanoVDB.h:8065
__hostdev__ Vec3 & operator*=(const T &s)
Definition: NanoVDB.h:1606
GLint GLuint mask
Definition: glcorearb.h:124
__hostdev__ Version(uint32_t data)
Constructor from a raw uint32_t data representation.
Definition: NanoVDB.h:936
__hostdev__ ValueIterator(const InternalNode *parent)
Definition: NanoVDB.h:5036
__hostdev__ FloatType variance() const
Return the variance of all the active values encoded in this root node and any of its child nodes...
Definition: NanoVDB.h:4603
__hostdev__ void setMaskOn(std::initializer_list< MaskT > list)
Definition: NanoVDB.h:2750
__hostdev__ uint64_t activeVoxelCount() const
Return the total number of active voxels in this tree.
Definition: NanoVDB.h:4052
__hostdev__ Mask(bool on)
Definition: NanoVDB.h:2916
__hostdev__ Coord operator-(const Coord &rhs) const
Definition: NanoVDB.h:1406
VDB Tree, which is a thin wrapper around a RootNode.
Definition: NanoVDB.h:3976
__hostdev__ Vec3 operator-(const Coord &ijk) const
Definition: NanoVDB.h:1575
__hostdev__ bool operator==(const Iterator &rhs) const
Definition: NanoVDB.h:2377
GridMetaData(const GridData *gridData)
Definition: NanoVDB.h:7513
constexpr enabler dummy
An instance to use in EnableIf.
Definition: CLI11.h:985
__hostdev__ bool isBitOn(uint8_t bit) const
Definition: NanoVDB.h:2768
__hostdev__ bool operator!=(const Vec4 &rhs) const
Definition: NanoVDB.h:1737
const typename GridOrTreeOrRootT::RootNodeType::ChildNodeType Type
Definition: NanoVDB.h:3429
static __hostdev__ size_t memUsage()
Return memory usage in bytes for the class.
Definition: NanoVDB.h:5156
__hostdev__ Vec3 & operator/=(const T &s)
Definition: NanoVDB.h:1613
__hostdev__ uint32_t getDim(const CoordType &ijk, const RayT &ray) const
Definition: NanoVDB.h:7071
__hostdev__ bool isGridIndex() const
Definition: NanoVDB.h:7547
bool operator<(const GU_TetrahedronFacet &a, const GU_TetrahedronFacet &b)
__hostdev__ uint64_t gridSize() const
Return memory usage in bytes for this class only.
Definition: NanoVDB.h:3717
__hostdev__ uint64_t activeVoxelCount() const
Definition: NanoVDB.h:7566
__hostdev__ void setMaskOff(MaskT mask)
Definition: NanoVDB.h:2747
__hostdev__ Iterator(const BBox &b)
Definition: NanoVDB.h:2347
__hostdev__ ChildIter(ParentT *parent)
Definition: NanoVDB.h:4994
__hostdev__ ChannelAccessor(const NanoGrid< IndexT > &grid, uint32_t channelID=0u)
Ctor from an IndexGrid and an integer ID of an internal channel that is assumed to exist as blind data.
Definition: NanoVDB.h:7725
__hostdev__ enable_if< is_same< T, Point >::value, const uint64_t & >::type pointCount() const
Return the total number of points indexed by this PointGrid.
Definition: NanoVDB.h:3737
__hostdev__ bool probeValue(const CoordType &ijk, ValueType &v) const
Definition: NanoVDB.h:7282
__hostdev__ ValueIterator cbeginValueAll() const
Definition: NanoVDB.h:5061
static __hostdev__ auto set(NanoRoot< BuildT > &, const ValueT &)
Definition: NanoVDB.h:8063
VecT< GridHandleT > readUncompressedGrids(const char *fileName, const typename GridHandleT::BufferType &buffer=typename GridHandleT::BufferType())
Read multiple uncompressed NanoVDB grids from a file and return them as a vector.
Definition: NanoVDB.h:8011
LeafFnBase< CoordT, MaskT, LOG2DIM > BaseT
Definition: NanoVDB.h:5564
bool FloatType
Definition: NanoVDB.h:5731
__hostdev__ uint64_t gridSize() const
Definition: NanoVDB.h:7557
__hostdev__ const ValueType & minimum() const
Return a const reference to the minimum active value encoded in this root node and any of its child nodes.
Definition: NanoVDB.h:4594
auto get(const UT_ARTIterator< T > &it) -> decltype(it.key())
Definition: UT_ARTMap.h:1073
__hostdev__ bool isBreadthFirst() const
Definition: NanoVDB.h:7556
__hostdev__ uint64_t checksum() const
Definition: NanoVDB.h:7569
__hostdev__ const void * blindData(uint32_t n) const
Returns a const pointer to the blindData at the specified linear offset.
Definition: NanoVDB.h:3869
__hostdev__ Vec3d voxelSize() const
Definition: NanoVDB.h:7564
__hostdev__ Coord ceil() const
Round each component of this Vec&lt;T&gt; up to its integer value.
Definition: NanoVDB.h:1653
__hostdev__ const ValueType & maximum() const
Return a const reference to the maximum active value encoded in this root node and any of its child nodes.
Definition: NanoVDB.h:4597
#define __device__
Definition: NanoVDB.h:219
__hostdev__ Coord & operator+=(int n)
Definition: NanoVDB.h:1398
__hostdev__ uint32_t blindDataCount() const
Return the number of blind data blocks encoded in this grid.
Definition: NanoVDB.h:3857
__hostdev__ ValueType operator*() const
Definition: NanoVDB.h:6085
typename Vec3T::ValueType ValueType
Definition: NanoVDB.h:2286
uint8_t mFlags
Definition: NanoVDB.h:5737
__hostdev__ CoordType getOrigin() const
Definition: NanoVDB.h:5131
__hostdev__ CoordT origin() const
Return the origin in index space of this leaf node.
Definition: NanoVDB.h:6186
__hostdev__ CoordT getCoord() const
Definition: NanoVDB.h:6124
__hostdev__ ValueType operator*() const
Definition: NanoVDB.h:5081
static __hostdev__ uint32_t CoordToOffset(const CoordType &ijk)
Return the linear offset corresponding to the given coordinate.
Definition: NanoVDB.h:5249
__hostdev__ DenseIterator cbeginChildAll() const
Definition: NanoVDB.h:5140
__hostdev__ uint8_t flags() const
Definition: NanoVDB.h:6183
__hostdev__ Type data() const
Definition: NanoVDB.h:2708
static constexpr bool is_offindex
Definition: NanoVDB.h:453
__hostdev__ auto set(const CoordType &ijk, ArgsT &&...args) const
Definition: NanoVDB.h:7402
__hostdev__ Vec4 operator*(const Vec4 &v) const
Definition: NanoVDB.h:1759
__hostdev__ DataType * data()
Definition: NanoVDB.h:5148
typename GridOrTreeOrRootT::LeafNodeType type
Definition: NanoVDB.h:3394
Implements Tree::getValue(Coord), i.e. return the value associated with a specific coordinate ijk...
Definition: NanoVDB.h:3451
__hostdev__ Iterator(const BBox &b, const Coord &p)
Definition: NanoVDB.h:2352
#define NANOVDB_MINOR_VERSION_NUMBER
Definition: NanoVDB.h:134
typename GridOrTreeOrRootT::LeafNodeType Type
Definition: NanoVDB.h:3393
__hostdev__ bool isApproxZero(const Type &x)
Definition: NanoVDB.h:1064
GLuint const GLchar * name
Definition: glcorearb.h:786
__hostdev__ const FloatType & average() const
Return a const reference to the average of all the active values encoded in this root node and any of its child nodes.
Definition: NanoVDB.h:4600
__hostdev__ uint32_t getMinor() const
Definition: NanoVDB.h:952
__hostdev__ bool empty() const
Return true if this bounding box is empty, e.g. uninitialized.
Definition: NanoVDB.h:2436
__hostdev__ bool isOn(uint32_t n) const
Return true if the given bit is set.
Definition: NanoVDB.h:2965
Maximum floating-point values.
Definition: NanoVDB.h:1032
__hostdev__ ValueIterator beginValue() const
Definition: NanoVDB.h:6148
__hostdev__ uint64_t voxelPoints(const Coord &ijk, const AttT *&begin, const AttT *&end) const
get iterators over attributes to points at a specific voxel location
Definition: NanoVDB.h:7625
__hostdev__ Map(double s, const Vec3d &t=Vec3d(0.0, 0.0, 0.0))
Definition: NanoVDB.h:3162
__hostdev__ ChannelT * setChannel(ChannelT *channelPtr)
Change to an external channel.
Definition: NanoVDB.h:7762
__hostdev__ int MinIndex(const Vec3T &v)
Definition: NanoVDB.h:1232
Like ValueIndex but with a mutable mask.
Definition: NanoVDB.h:258
__hostdev__ bool isActive(const CoordT &ijk) const
Return true if the voxel value at the given coordinate is active.
Definition: NanoVDB.h:6260
__hostdev__ NodeT * probeChild(ValueType &value) const
Definition: NanoVDB.h:4530
__hostdev__ ReadAccessor(const TreeT &tree)
Constructor from a tree.
Definition: NanoVDB.h:7203
__hostdev__ Vec3 operator-() const
Definition: NanoVDB.h:1569
uint64_t mPadding[2]
Definition: NanoVDB.h:5740
__hostdev__ Vec3 & operator-=(const Vec3 &v)
Definition: NanoVDB.h:1592
static __hostdev__ uint32_t CoordToOffset(const CoordT &ijk)
Return the linear offset corresponding to the given coordinate.
Definition: NanoVDB.h:6284
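For the default leaf configuration (LOG2DIM = 3, i.e. an 8x8x8 node), CoordToOffset packs the three low bits of each coordinate component into a linear offset in [0, 512). A self-contained sketch under that assumption (the free function name here is illustrative):

```cpp
#include <cstdint>

// Map an (i,j,k) coordinate to its linear voxel offset inside an 8x8x8 leaf:
// the low 3 bits of each component select the position along that axis.
inline uint32_t leafCoordToOffset(int32_t i, int32_t j, int32_t k)
{
    return (uint32_t(i & 7) << 6) | // i selects the 64-voxel slab
           (uint32_t(j & 7) << 3) | // j selects the 8-voxel row
            uint32_t(k & 7);        // k selects the voxel in the row
}
```

Note that masking with 7 makes the mapping periodic, so global index-space coordinates fold into leaf-local ones automatically.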
__hostdev__ bool isMask() const
Definition: NanoVDB.h:7549
__hostdev__ Type & data()
Definition: NanoVDB.h:2709
__hostdev__ Vec3T indexToWorldF(const Vec3T &xyz) const
index to world space transformation
Definition: NanoVDB.h:3783
__hostdev__ const NodeT * getFirstNode() const
return a const pointer to the first node of the specified type
Definition: NanoVDB.h:4097
void writeUncompressedGrid(StreamT &os, const GridData *gridData, bool raw=false)
This is a standalone alternative to io::writeGrid(...,Codec::NONE) defined in util/IO.h Unlike the latter this function has no dependencies at all, not even NanoVDB.h, so it also works if client code only includes PNanoVDB.h!
Definition: NanoVDB.h:7908
__hostdev__ Vec3T applyInverseJacobianF(const Vec3T &xyz) const
Apply the linear inverse 3x3 transformation to an input 3d vector using 32bit floating point arithmetic.
Definition: NanoVDB.h:3260
static constexpr uint32_t SIZE
Definition: NanoVDB.h:2808
GLboolean GLboolean GLboolean b
Definition: glcorearb.h:1222
GLint GLenum GLint x
Definition: glcorearb.h:409
static constexpr bool is_Fp
Definition: NanoVDB.h:458
PointAccessor(const NanoGrid< Point > &grid)
Definition: NanoVDB.h:7649
static __hostdev__ double value()
Definition: NanoVDB.h:1009
NANOVDB_HOSTDEV_DISABLE_WARNING __hostdev__ uint32_t findPrev(uint32_t start) const
Definition: NanoVDB.h:3117
__hostdev__ BBox()
Default construction sets BBox to an empty bbox.
Definition: NanoVDB.h:2291
__hostdev__ NodeTrait< RootT, 1 >::type * getFirstLower()
Definition: NanoVDB.h:4126
__hostdev__ bool hasBBox() const
Definition: NanoVDB.h:6271
__hostdev__ bool hasAverage() const
Definition: NanoVDB.h:3827
__hostdev__ auto set(const CoordType &ijk, ArgsT &&...args)
Definition: NanoVDB.h:4697
static __hostdev__ uint32_t dim()
Return the dimension, in voxel units, of this internal node (typically 8*16 or 8*16*32) ...
Definition: NanoVDB.h:5153
__hostdev__ FloatType stdDeviation() const
Return a const reference to the standard deviation of all the active values encoded in this leaf node...
Definition: NanoVDB.h:6181
float mTaperF
Definition: NanoVDB.h:3144
__hostdev__ bool probeValue(const CoordType &ijk, ValueType &v) const
Definition: NanoVDB.h:6979
__hostdev__ BBox< Vec3d > transform(const Map &map) const
transform this coordinate bounding box by the specified map
Definition: NanoVDB.h:2480
__hostdev__ bool isFloatingPoint(GridType gridType)
return true if the GridType maps to a floating point type
Definition: NanoVDB.h:794
__hostdev__ void setMaskOn(MaskT mask)
Definition: NanoVDB.h:2745
__hostdev__ ValueIter(RootT *parent)
Definition: NanoVDB.h:4431
__hostdev__ const MaskType< LOG2DIM > & getChildMask() const
Definition: NanoVDB.h:5164
__hostdev__ bool isPointData() const
Definition: NanoVDB.h:3821
MaskT< LOG2DIM > mValueMask
Definition: NanoVDB.h:5738
__hostdev__ ValueIter & operator++()
Definition: NanoVDB.h:4448
auto fprintf(std::FILE *f, const S &fmt, const T &...args) -> int
Definition: printf.h:602
__hostdev__ auto getNodeInfo(const CoordType &ijk) const
Definition: NanoVDB.h:7280
GLdouble t
Definition: glad.h:2397
__hostdev__ bool probeValue(const Coord &ijk, typename remove_const< ChannelT >::type &v) const
Return the state and update the value of the specified voxel.
Definition: NanoVDB.h:7783
__hostdev__ FloatType average() const
Return a const reference to the average of all the active values encoded in this leaf node...
Definition: NanoVDB.h:6175
__hostdev__ CoordType getOrigin() const
Definition: NanoVDB.h:4358
__hostdev__ ChildNodeType * probeChild(const CoordType &ijk)
Definition: NanoVDB.h:5237
__hostdev__ uint64_t lastOffset() const
Definition: NanoVDB.h:5867
Implements Tree::isActive(Coord)
Definition: NanoVDB.h:3457
__hostdev__ ReadAccessor(const RootT &root)
Constructor from a root node.
Definition: NanoVDB.h:7185
__hostdev__ auto getNodeInfo(const CoordType &ijk) const
Definition: NanoVDB.h:6977
typename NanoLeaf< BuildT >::ValueType ValueT
Definition: NanoVDB.h:8062
uint8_t ArrayType
Definition: NanoVDB.h:5566
__hostdev__ const MaskType< LOG2DIM > & childMask() const
Return a const reference to the bit mask of child nodes in this internal node.
Definition: NanoVDB.h:5163
__hostdev__ RootT & root()
Definition: NanoVDB.h:4013
Iterator< false > OffIterator
Definition: NanoVDB.h:2902
__hostdev__ const LeafNode * probeLeaf(const CoordT &) const
Definition: NanoVDB.h:6281
__hostdev__ ChannelT * setChannel(uint32_t channelID)
Change to an internal channel, assuming it exists as blind data in the IndexGrid.
Definition: NanoVDB.h:7768
__hostdev__ uint64_t activeVoxelCount() const
Return the total number of active voxels in this grid.
Definition: NanoVDB.h:3810
__hostdev__ Vec3T dim() const
Definition: NanoVDB.h:2320
__hostdev__ const RootT & root() const
Definition: NanoVDB.h:4020
static __hostdev__ float value()
Definition: NanoVDB.h:1004
__hostdev__ const LeafT * probeLeaf(const CoordType &ijk) const
Definition: NanoVDB.h:6601
__hostdev__ const ChildNodeType * probeChild(const CoordType &ijk) const
Definition: NanoVDB.h:5242
__hostdev__ int MaxIndex(const Vec3T &v)
Definition: NanoVDB.h:1249
__hostdev__ ReadAccessor(const RootT &root)
Constructor from a root node.
Definition: NanoVDB.h:6703
static __hostdev__ CoordT OffsetToLocalCoord(uint32_t n)
Compute the local coordinates from a linear offset.
Definition: NanoVDB.h:6191
__hostdev__ void setBitOff(std::initializer_list< uint8_t > list)
Definition: NanoVDB.h:2738
__hostdev__ uint64_t gridPoints(const AttT *&begin, const AttT *&end) const
Return the total number of points in the grid and set the iterators to the complete range of points...
Definition: NanoVDB.h:7603
__hostdev__ Type Max(Type a, Type b)
Definition: NanoVDB.h:1091
__hostdev__ Coord & operator-=(const Coord &rhs)
Definition: NanoVDB.h:1415
__hostdev__ Vec3T worldToIndex(const Vec3T &xyz) const
world to index space transformation
Definition: NanoVDB.h:3756
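The worldToIndex/indexToWorld entries apply the grid's Map. A sketch assuming the simplest case of a uniform scale s plus translation t (matching the Map(double s, Vec3d t) constructor listed above); SimpleMap and its member names are illustrative, not the library API:

```cpp
#include <array>

using Vec3d = std::array<double, 3>;

// Uniform scale + translation map: world = index * s + t.
struct SimpleMap
{
    double s;  // voxel size in world units
    Vec3d  t;  // world-space translation

    Vec3d indexToWorld(const Vec3d& ijk) const
    {
        return {ijk[0] * s + t[0], ijk[1] * s + t[1], ijk[2] * s + t[2]};
    }
    Vec3d worldToIndex(const Vec3d& xyz) const
    {
        return {(xyz[0] - t[0]) / s, (xyz[1] - t[1]) / s, (xyz[2] - t[2]) / s};
    }
};
```

The two transforms are inverses, so round-tripping a point recovers the original coordinates (up to floating-point error).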
__hostdev__ bool isGridIndex() const
Definition: NanoVDB.h:3820
__hostdev__ bool hasStdDeviation() const
Definition: NanoVDB.h:7555
__hostdev__ Coord & operator+=(const Coord &rhs)
Definition: NanoVDB.h:1408
__hostdev__ const ChildNodeType * probeChild(const CoordType &ijk) const
Definition: NanoVDB.h:4671
__hostdev__ bool isActive(const CoordType &ijk) const
Definition: NanoVDB.h:6752
__hostdev__ NodeT * getFirstNode()
return a pointer to the first node of the specified type
Definition: NanoVDB.h:4087
Dummy type for 16-bit floating point values (placeholder for IEEE 754 Half)
Definition: NanoVDB.h:267
bool isValid(const NanoGrid< ValueT > &grid, bool detailed=true, bool verbose=false)
Return true if the specified grid passes several validation tests.
Dummy type for a voxel whose value equals an offset into an external value array. ...
Definition: NanoVDB.h:252
__hostdev__ uint64_t last(uint32_t i) const
Definition: NanoVDB.h:5976
GLint j
Definition: glad.h:2733
__hostdev__ const GridType & gridType() const
Definition: NanoVDB.h:3814
Visits all tile values and child nodes of this node.
Definition: NanoVDB.h:5098
__hostdev__ Vec4 & operator-=(const Vec4 &v)
Definition: NanoVDB.h:1773
__hostdev__ ValueType getValue(int i, int j, int k) const
Definition: NanoVDB.h:4031
__hostdev__ ChannelAccessor(const NanoGrid< IndexT > &grid, ChannelT *channelPtr)
Ctor from an IndexGrid and an external channel.
Definition: NanoVDB.h:7736
__hostdev__ bool probeValue(const CoordType &ijk, ValueType &v) const
Definition: NanoVDB.h:6600
Iterator< true > OnIterator
Definition: NanoVDB.h:2901
GLsizeiptr size
Definition: glcorearb.h:664
GLfloat GLfloat GLfloat GLfloat h
Definition: glcorearb.h:2002
__hostdev__ auto set(const CoordType &ijk, ArgsT &&...args)
Definition: NanoVDB.h:6311
GLenum GLenum dst
Definition: glcorearb.h:1793
IMATH_HOSTDEVICE constexpr int ceil(T x) IMATH_NOEXCEPT
Definition: ImathFun.h:119
OIIO_UTIL_API int fseek(FILE *file, int64_t offset, int whence)
Version of fseek that works with 64 bit offsets on all systems.
const char * toStr(GridType gridType)
Maps a GridType to a c-string.
Definition: NanoVDB.h:326
__hostdev__ bool isBitOff(uint8_t bit) const
Definition: NanoVDB.h:2769
__hostdev__ T dot(const Vec3T &v) const
Definition: NanoVDB.h:1556
Visits all values in a leaf node, i.e. both active and inactive values.
Definition: NanoVDB.h:6101
__hostdev__ bool isFloatingPointVector(GridType gridType)
return true if the GridType maps to a floating point vec3.
Definition: NanoVDB.h:808
typename ChildT::CoordType CoordType
Definition: NanoVDB.h:4330
__hostdev__ ValueType operator*() const
Definition: NanoVDB.h:5042
typename UpperNodeType::ChildNodeType LowerNodeType
Definition: NanoVDB.h:4324
static __hostdev__ auto set(NanoLeaf< BuildT > &leaf, uint32_t n, const ValueT &v)
Definition: NanoVDB.h:8079
__hostdev__ const uint32_t & activeTileCount(uint32_t level) const
Definition: NanoVDB.h:7567
__hostdev__ DataType * data()
Definition: NanoVDB.h:4576
__hostdev__ bool isInteger(GridType gridType)
Return true if the GridType maps to a POD integer type.
Definition: NanoVDB.h:820
__hostdev__ void setOn()
Definition: NanoVDB.h:2727
__hostdev__ Vec3(const Vec3< T2 > &v)
Definition: NanoVDB.h:1534
const typename remove_const< T >::type type
Definition: NanoVDB.h:601
__hostdev__ bool hasBBox() const
Definition: NanoVDB.h:7552
__hostdev__ Vec3< T2 > operator*(T1 scalar, const Vec3< T2 > &vec)
Definition: NanoVDB.h:1674
__hostdev__ Mask & operator^=(const Mask &other)
Bitwise XOR.
Definition: NanoVDB.h:3078
__hostdev__ ValueType operator()(int i, int j, int k) const
Definition: NanoVDB.h:6750
__hostdev__ Type getFlags() const
Definition: NanoVDB.h:2725
__hostdev__ ValueType operator*() const
Definition: NanoVDB.h:4487
__hostdev__ T lengthSqr() const
Definition: NanoVDB.h:1753
__hostdev__ BBox expandBy(typename CoordT::ValueType padding) const
Return a new instance that is expanded by the specified padding.
Definition: NanoVDB.h:2471
__hostdev__ bool isFogVolume() const
Definition: NanoVDB.h:7544
__hostdev__ uint64_t * words()
Return a pointer to the list of words of the bit mask.
Definition: NanoVDB.h:2931
__hostdev__ Vec3 operator*(const Vec3 &v) const
Definition: NanoVDB.h:1570
__hostdev__ DenseIter & operator++()
Definition: NanoVDB.h:4547
GLenum GLsizei GLsizei GLint * values
Definition: glcorearb.h:1602
__hostdev__ Vec3T applyInverseJacobian(const Vec3T &xyz) const
Apply the linear inverse 3x3 transformation to an input 3d vector using 64bit floating point arithmet...
Definition: NanoVDB.h:3251
__hostdev__ bool operator<=(const Version &rhs) const
Definition: NanoVDB.h:947
typename RootT::BuildType BuildType
Definition: NanoVDB.h:3991
__hostdev__ BBox(const CoordT &min, const CoordT &max)
Definition: NanoVDB.h:2407
__hostdev__ NodeTrait< RootT, 2 >::type * getFirstUpper()
Definition: NanoVDB.h:4128
__hostdev__ DataType * data()
Definition: NanoVDB.h:4006
__hostdev__ Rgba8(uint8_t r, uint8_t g, uint8_t b, uint8_t a=255u)
integer r,g,b,a ctor where alpha channel defaults to opaque
Definition: NanoVDB.h:1875
__hostdev__ Vec3T & min()
Definition: NanoVDB.h:2220
__hostdev__ bool isSequential() const
return true if nodes at all levels can safely be accessed with simple linear offsets ...
Definition: NanoVDB.h:3842
Dummy type for a 4-bit quantization of floating point values.
Definition: NanoVDB.h:270
__hostdev__ constexpr T pi()
Pi constant taken from Boost to match old behaviour.
Definition: NanoVDB.h:976
static __hostdev__ auto set(NanoUpper< BuildT > &, uint32_t, const ValueT &)
Definition: NanoVDB.h:8077
__hostdev__ ValueIterator operator++(int)
Definition: NanoVDB.h:6140
__hostdev__ uint64_t idx(int i, int j, int k) const
Definition: NanoVDB.h:7775
__hostdev__ ValueType operator()(int i, int j, int k) const
Definition: NanoVDB.h:6597
__hostdev__ bool operator!=(const Iterator &rhs) const
Definition: NanoVDB.h:2382
typename DataType::BuildT BuildType
Definition: NanoVDB.h:4328
__hostdev__ CoordT getCoord() const
Definition: NanoVDB.h:6090
__hostdev__ bool isUnknown() const
Definition: NanoVDB.h:3823
__hostdev__ const GridBlindMetaData & blindMetaData(uint32_t n) const
Definition: NanoVDB.h:3890
__hostdev__ ValueType getValue(int i, int j, int k) const
Definition: NanoVDB.h:6595
typename GridOrTreeOrRootT::RootNodeType::ChildNodeType::ChildNodeType type
Definition: NanoVDB.h:3409
Class to access points at a specific voxel location.
Definition: NanoVDB.h:7579
__hostdev__ ChildIter operator++(int)
Definition: NanoVDB.h:4407
__hostdev__ ValueOnIterator(const InternalNode *parent)
Definition: NanoVDB.h:5075
__hostdev__ bool operator!=(const BaseBBox &rhs) const
Definition: NanoVDB.h:2217
typename RootT::ChildNodeType Node2
Definition: NanoVDB.h:3996
__hostdev__ Iterator operator++(int)
Definition: NanoVDB.h:2862
__hostdev__ BaseBBox(const Vec3T &min, const Vec3T &max)
Definition: NanoVDB.h:2269
__hostdev__ const ChildT * probeChild(ValueType &value) const
Definition: NanoVDB.h:5115
typename ChildT::CoordType CoordType
Definition: NanoVDB.h:4964
__hostdev__ const uint32_t & tileCount() const
Return the number of tiles encoded in this root node.
Definition: NanoVDB.h:4590
__hostdev__ ConstChildIterator cbeginChild() const
Definition: NanoVDB.h:5022
typename BuildToValueMap< BuildT >::Type ValueT
Definition: NanoVDB.h:8147
__hostdev__ bool operator>=(const Version &rhs) const
Definition: NanoVDB.h:949
uint32_t IndexType
Definition: NanoVDB.h:1287
__hostdev__ bool operator<(const Coord &rhs) const
Return true if this Coord is lexicographically less than the given Coord.
Definition: NanoVDB.h:1355
__hostdev__ bool getMin() const
Definition: NanoVDB.h:5746
__hostdev__ ReadAccessor(const GridT &grid)
Constructor from a grid.
Definition: NanoVDB.h:6899
__hostdev__ const char * gridName() const
Return a c-string with the name of this grid.
Definition: NanoVDB.h:3845
LeafData & operator=(const LeafData &)=delete
typename GridOrTreeOrRootT::RootNodeType::ChildNodeType type
Definition: NanoVDB.h:3423
__hostdev__ bool isMaskOn(MaskT mask) const
Definition: NanoVDB.h:2771
__hostdev__ Iterator begin() const
Definition: NanoVDB.h:2401
__hostdev__ void initMask(std::initializer_list< MaskT > list)
Definition: NanoVDB.h:2717
__hostdev__ bool isPointData() const
Definition: NanoVDB.h:7548
ChildT UpperNodeType
Definition: NanoVDB.h:4323
DenseIterator & operator=(const DenseIterator &)=default
GLuint index
Definition: glcorearb.h:786
__hostdev__ uint32_t countOn(uint32_t i) const
Return the number of lower set bits in mask up to but excluding the i'th bit.
Definition: NanoVDB.h:2830
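Mask::countOn(i), described above as counting the set bits below the i'th bit, is what lets a mask index into a compact value array. A sketch over 64-bit words using a portable popcount; it assumes bits are stored in little-endian word order, and the names are illustrative:

```cpp
#include <cstdint>
#include <vector>

// Portable popcount for a 64-bit word.
inline uint32_t popCount(uint64_t v)
{
    uint32_t n = 0;
    while (v) { v &= v - 1; ++n; }  // clear the lowest set bit each pass
    return n;
}

// Number of set bits in `words` strictly below bit index i.
inline uint32_t countOn(const std::vector<uint64_t>& words, uint32_t i)
{
    uint32_t n = 0;
    const uint32_t w = i >> 6;  // whole words below bit i
    for (uint32_t k = 0; k < w; ++k) n += popCount(words[k]);
    if (uint32_t r = i & 63u)   // partial word, if any
        n += popCount(words[w] & ((uint64_t(1) << r) - 1));
    return n;
}
```

In the real class this rank query gives the position of an active voxel's value in the node's packed value array.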
__hostdev__ uint64_t gridPoints(const AttT *&begin, const AttT *&end) const
Return the total number of points in the grid and set the iterators to the complete range of points...
Definition: NanoVDB.h:7670
Tolerance for floating-point comparison.
Definition: NanoVDB.h:1000
typename DataType::StatsT FloatType
Definition: NanoVDB.h:4327
This is a convenient class that allows for access to grid meta-data that are independent of the value...
Definition: NanoVDB.h:7497
~LeafData()=delete
__hostdev__ Coord operator+(const Coord &rhs) const
Definition: NanoVDB.h:1405
Dummy type for a voxel whose value equals its binary active state.
Definition: NanoVDB.h:264
__hostdev__ Vec3T & operator[](int i)
Definition: NanoVDB.h:2219
__hostdev__ Vec3(const Vec3T< T2 > &v)
Definition: NanoVDB.h:1528
__hostdev__ ValueIterator beginValue() const
Definition: NanoVDB.h:5060
__hostdev__ void setBitOn(uint8_t bit)
Definition: NanoVDB.h:2730
static __hostdev__ size_t memUsage()
Return the memory footprint in bytes of this Mask.
Definition: NanoVDB.h:2812
typename Mask< Log2Dim >::template Iterator< On > MaskIterT
Definition: NanoVDB.h:4969
__hostdev__ int32_t Ceil(float x)
Definition: NanoVDB.h:1139
Delta for small floating-point offsets.
Definition: NanoVDB.h:1016
__hostdev__ BBox(const Vec3T &min, const Vec3T &max)
Definition: NanoVDB.h:2296
Top-most node of the VDB tree structure.
Definition: NanoVDB.h:4316
__hostdev__ int32_t Floor(float x)
Definition: NanoVDB.h:1130
RootT RootType
Definition: NanoVDB.h:3985
__hostdev__ BaseBBox()
Definition: NanoVDB.h:2268
auto ptr(T p) -> const void *
Definition: format.h:2448
ImageBuf OIIO_API max(Image_or_Const A, Image_or_Const B, ROI roi={}, int nthreads=0)
uint64_t mOffset
Definition: NanoVDB.h:5959
__hostdev__ ValueOnIterator(const LeafNode *parent)
Definition: NanoVDB.h:6046
__hostdev__ const ValueType & background() const
Return a const reference to the background value.
Definition: NanoVDB.h:4587
__hostdev__ bool isMaskOff(std::initializer_list< MaskT > list) const
return true if any of the masks in the list are off
Definition: NanoVDB.h:2785
__hostdev__ const Vec3d & voxelSize() const
Return a const reference to the size of a voxel in world units.
Definition: NanoVDB.h:3749
__hostdev__ Vec3(T x)
Definition: NanoVDB.h:1519
__hostdev__ auto set(const CoordType &ijk, ArgsT &&...args)
Definition: NanoVDB.h:5293
typename ChildT::LeafNodeType LeafNodeType
Definition: NanoVDB.h:4962
typename GridOrTreeOrRootT::RootNodeType::ChildNodeType::ChildNodeType Type
Definition: NanoVDB.h:3408
__hostdev__ ValueType operator()(const CoordType &ijk) const
Definition: NanoVDB.h:7278
__hostdev__ bool hasAverage() const
Definition: NanoVDB.h:7554
static __hostdev__ bool safeCast(const NanoGrid< T > &grid)
return true if it is safe to cast the grid to a pointer of type GridMetaData, i.e. construction can be avoided.
Definition: NanoVDB.h:7539
__hostdev__ TreeT & tree()
Return a non-const reference to the tree.
Definition: NanoVDB.h:3743
static __hostdev__ auto set(typename NanoRoot< BuildT >::Tile &tile, const ValueT &v)
Definition: NanoVDB.h:8064
typename match_const< Tile, RootT >::type TileT
Definition: NanoVDB.h:4343
__hostdev__ ChildIterator beginChild()
Definition: NanoVDB.h:5021
__hostdev__ CoordType origin() const
Return the origin in index space of this leaf node.
Definition: NanoVDB.h:5167
__hostdev__ T dot(const Vec4T &v) const
Definition: NanoVDB.h:1752
__hostdev__ Vec3T applyIJT(const Vec3T &xyz) const
Apply the transposed inverse 3x3 transformation to an input 3d vector using 64bit floating point arit...
Definition: NanoVDB.h:3269
__hostdev__ ValueOnIterator cbeginValueOn() const
Definition: NanoVDB.h:6065
A simple vector class with four components, similar to openvdb::math::Vec4.
Definition: NanoVDB.h:1708
__hostdev__ ChildIter & operator++()
Definition: NanoVDB.h:4399
__hostdev__ Coord & maxComponent(const Coord &other)
Perform a component-wise maximum with the other Coord.
Definition: NanoVDB.h:1436
__hostdev__ const RootT & root() const
Definition: NanoVDB.h:6584
__hostdev__ bool isValid() const
Definition: NanoVDB.h:7540
__hostdev__ ValueOnIter operator++(int)
Definition: NanoVDB.h:4500
__hostdev__ Coord operator-() const
Definition: NanoVDB.h:1407
int64_t Version
Definition: basic_types.h:31
__hostdev__ bool updateBBox()
Updates the local bounding box of active voxels in this node. Return true if bbox was updated...
Definition: NanoVDB.h:6387
__hostdev__ T Abs(T x)
Definition: NanoVDB.h:1166
__hostdev__ uint32_t operator*() const
Definition: NanoVDB.h:2854
__hostdev__ bool isPointIndex() const
Definition: NanoVDB.h:7546
__hostdev__ bool probeValue(const CoordType &ijk, ValueType &v) const
Return the state and update the value of the specified voxel.
Definition: NanoVDB.h:5206
__hostdev__ bool isBreadthFirst() const
Definition: NanoVDB.h:3829
__hostdev__ Coord operator<<(IndexType n) const
Definition: NanoVDB.h:1349
__hostdev__ const NanoGrid< Point > & grid() const
Definition: NanoVDB.h:7666
__hostdev__ Vec3 operator-(const Vec3 &v) const
Definition: NanoVDB.h:1573
__hostdev__ Coord operator&(IndexType n) const
Return a new instance with coordinates masked by the given unsigned integer.
Definition: NanoVDB.h:1346
#define SIZE
Definition: simple.C:41
static __hostdev__ uint64_t memUsage()
return memory usage in bytes for the class
Definition: NanoVDB.h:4011
__hostdev__ ValueType operator()(const CoordType &ijk) const
Definition: NanoVDB.h:6975
__hostdev__ ValueType operator()(int i, int j, int k) const
Definition: NanoVDB.h:7279
const typename GridOrTreeOrRootT::RootNodeType::ChildNodeType type
Definition: NanoVDB.h:3430
__hostdev__ CoordT dim() const
Definition: NanoVDB.h:2443
__hostdev__ bool isValueOn() const
Definition: NanoVDB.h:4542
typename Mask< 3 >::template Iterator< ON > MaskIterT
Definition: NanoVDB.h:6032
__hostdev__ auto set(const CoordType &ijk, ArgsT &&...args) const
Definition: NanoVDB.h:7103
__hostdev__ Coord()
Initialize all coordinates to zero.
Definition: NanoVDB.h:1290
GLubyte GLubyte GLubyte GLubyte w
Definition: glcorearb.h:857
__hostdev__ int32_t & x()
Definition: NanoVDB.h:1316
__hostdev__ bool isInside(const BBox &b) const
Return true if the given bounding box is inside this bounding box.
Definition: NanoVDB.h:2451
__hostdev__ const DataType * data() const
Definition: NanoVDB.h:6162
__hostdev__ bool operator==(const Vec4 &rhs) const
Definition: NanoVDB.h:1736
static constexpr bool value
Definition: NanoVDB.h:420
__hostdev__ ChannelT & getValue(const Coord &ijk) const
Return the value from a cached channel that maps to the specified coordinate.
Definition: NanoVDB.h:7778
__hostdev__ Vec3T indexToWorld(const Vec3T &xyz) const
index to world space transformation
Definition: NanoVDB.h:3760
__hostdev__ bool isInside(const Vec3T &xyz)
Definition: NanoVDB.h:2258
Definition: core.h:1131
IMATH_INTERNAL_NAMESPACE_HEADER_ENTER IMATH_HOSTDEVICE constexpr T abs(T a) IMATH_NOEXCEPT
Definition: ImathFun.h:26
__hostdev__ Version(uint32_t major, uint32_t minor, uint32_t patch)
Constructor from major.minor.patch version numbers.
Definition: NanoVDB.h:938
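The Version entries above bit-compact major.minor.patch into one 32-bit word, which is why the comparison operators reduce to plain integer comparisons. A sketch assuming an 11/11/10-bit split (check the header before relying on the exact layout); PackedVersion is an illustrative name:

```cpp
#include <cstdint>

// Bit-compacted version number: 11 bits major, 11 bits minor, 10 bits patch.
// A newer version always compares greater as a plain unsigned integer.
struct PackedVersion
{
    uint32_t data;
    PackedVersion(uint32_t major, uint32_t minor, uint32_t patch)
        : data((major << 21) | (minor << 10) | patch) {}
    uint32_t getMajor() const { return (data >> 21) & 0x7FFu; }
    uint32_t getMinor() const { return (data >> 10) & 0x3FFu | ((data >> 10) & 0x400u); }
    uint32_t getPatch() const { return data & 0x3FFu; }
    bool operator<(const PackedVersion& rhs) const { return data < rhs.data; }
};
```

Packing the fields most-significant-first is what makes lexicographic version ordering coincide with integer ordering.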
__hostdev__ ConstChildIterator cbeginChild() const
Definition: NanoVDB.h:4419
__hostdev__ bool isOff() const
Definition: NanoVDB.h:2767
Bit-compacted representation of all three version numbers.
Definition: NanoVDB.h:924
static __hostdev__ size_t memUsage()
Definition: NanoVDB.h:1324
__hostdev__ Vec3T indexToWorldGradF(const Vec3T &grad) const
Transforms the gradient from index space to world space.
Definition: NanoVDB.h:3798
Fp4 BuildType
Definition: NanoVDB.h:5565
__hostdev__ enable_if< BuildTraits< T >::is_index, const uint64_t & >::type valueCount() const
Return the total number of values indexed by this IndexGrid.
Definition: NanoVDB.h:3730
__hostdev__ bool isActive() const
Return true if this node or any of its child nodes contain active values.
Definition: NanoVDB.h:5279
Visits all active values in a leaf node.
Definition: NanoVDB.h:6035
__hostdev__ Vec3 & normalize()
Definition: NanoVDB.h:1614
static __hostdev__ Coord Floor(const Vec3T &xyz)
Return the largest integer coordinates that are not greater than xyz (node centered conversion)...
Definition: NanoVDB.h:1480
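Coord::Floor, described above, is the node-centered conversion from floating-point positions to integer coordinates. A minimal sketch (coordFloor is an illustrative name); the point is that std::floor handles negatives correctly where a plain int cast would not:

```cpp
#include <array>
#include <cmath>

// Largest integer coordinates not greater than (x, y, z). Note that a
// plain cast to int rounds toward zero, which is wrong for negatives:
// int(-0.5) == 0, but std::floor(-0.5) == -1.
inline std::array<int, 3> coordFloor(double x, double y, double z)
{
    return {int(std::floor(x)), int(std::floor(y)), int(std::floor(z))};
}
```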
__hostdev__ ValueType getFirstValue() const
Return the first value in this leaf node.
Definition: NanoVDB.h:6244
__hostdev__ ReadAccessor(const GridT &grid)
Constructor from a grid.
Definition: NanoVDB.h:6711
__hostdev__ bool isLevelSet() const
Definition: NanoVDB.h:7543
__hostdev__ bool empty() const
Definition: NanoVDB.h:2314
GLboolean r
Definition: glcorearb.h:1222
__hostdev__ Iterator operator++(int)
Definition: NanoVDB.h:2371
__hostdev__ bool isStaggered() const
Definition: NanoVDB.h:3818
PUGI__FN char_t * translate(char_t *buffer, const char_t *from, const char_t *to, size_t to_length)
Definition: pugixml.cpp:8352
__hostdev__ Vec3T worldToIndexDirF(const Vec3T &dir) const
transformation from world space direction to index space direction
Definition: NanoVDB.h:3793
__hostdev__ uint32_t gridIndex() const
Definition: NanoVDB.h:7558
typename ChildT::template MaskType< LOG2 > MaskType
Definition: NanoVDB.h:4967
__hostdev__ bool isValid() const
Methods related to the classification of this grid.
Definition: NanoVDB.h:3813
__hostdev__ T lengthSqr() const
Definition: NanoVDB.h:1564
__hostdev__ Coord & operator&=(int n)
Definition: NanoVDB.h:1377
__hostdev__ AccessorType getAccessor() const
Definition: NanoVDB.h:4574
__hostdev__ bool operator<(const Version &rhs) const
Definition: NanoVDB.h:946
__hostdev__ const BBox< CoordType > & bbox() const
Return a const reference to the bounding box in index space of active values in this internal node an...
Definition: NanoVDB.h:5185
__hostdev__ Coord floor() const
Round each component of this Vec<T> down to its integer value.
Definition: NanoVDB.h:1650
__hostdev__ bool isMask() const
Definition: NanoVDB.h:3822
__hostdev__ AccessorType getAccessor() const
Return a new instance of a ReadAccessor used to access values in this grid.
Definition: NanoVDB.h:3746
__hostdev__ uint8_t octant() const
Return the octant of this Coord.
Definition: NanoVDB.h:1492
__hostdev__ BBox< Vec3< RealT > > asReal() const
Definition: NanoVDB.h:2464
typename DataType::ValueT ValueType
Definition: NanoVDB.h:4326
static const int SIZE
Definition: NanoVDB.h:1849
__hostdev__ bool operator==(const BaseBBox &rhs) const
Definition: NanoVDB.h:2216
__hostdev__ T Pow4(T x)
Definition: NanoVDB.h:1161
__hostdev__ T & getValue(const Coord &ijk, T *channelPtr) const
Return the value from a specified channel that maps to the specified coordinate.
Definition: NanoVDB.h:7794
__hostdev__ ValueOffIterator(const LeafNode *parent)
Definition: NanoVDB.h:6079
__hostdev__ bool probeValue(const CoordT &ijk, ValueType &v) const
Return true if the voxel value at the given coordinate is active and updates v with the value...
Definition: NanoVDB.h:6274
__hostdev__ bool operator>(const Version &rhs) const
Definition: NanoVDB.h:948
__hostdev__ int blindDataCount() const
Definition: NanoVDB.h:7565
__hostdev__ const Vec3T & max() const
Definition: NanoVDB.h:2223
__hostdev__ const Map & map() const
Definition: NanoVDB.h:7561
__hostdev__ const LeafT * probeLeaf(const CoordType &ijk) const
Definition: NanoVDB.h:6754
__hostdev__ const DataType * data() const
Definition: NanoVDB.h:4578
static __hostdev__ float value()
Definition: NanoVDB.h:1020
__hostdev__ Vec3 operator/(const T &s) const
Definition: NanoVDB.h:1577
__hostdev__ Mask & operator=(const Mask &other)
Definition: NanoVDB.h:2947
__hostdev__ bool operator<=(const Coord &rhs) const
Return true if this Coord is lexicographically less or equal to the given Coord.
Definition: NanoVDB.h:1365
__hostdev__ GridType mapToGridType()
Maps from a templated build type to a GridType enum.
Definition: NanoVDB.h:2031
auto size() const FMT_NOEXCEPT-> size_t
Definition: core.h:802
__hostdev__ T & operator[](int i)
Definition: NanoVDB.h:1750
bool isValid() const
Definition: NanoVDB.h:7840
__hostdev__ const ValueType & background() const
Return a const reference to the background value.
Definition: NanoVDB.h:4043
__hostdev__ bool isActive() const
Definition: NanoVDB.h:5053
__hostdev__ bool operator==(const Version &rhs) const
Definition: NanoVDB.h:945
__hostdev__ bool isActive(const CoordType &ijk) const
Return the active state of the given voxel (regardless of state or location in the tree...
Definition: NanoVDB.h:4034
__hostdev__ float Sqrt(float x)
Return the square root of a floating-point value.
Definition: NanoVDB.h:1214
__hostdev__ ValueType getValue(int i, int j, int k) const
Definition: NanoVDB.h:6748
__hostdev__ const BlindDataT * getBlindData(uint32_t n) const
Definition: NanoVDB.h:3877
__hostdev__ void setValue(uint32_t offset, uint16_t value)
Definition: NanoVDB.h:5979
__hostdev__ uint64_t volume() const
Definition: NanoVDB.h:2444
__hostdev__ Vec4 operator*(const T &s) const
Definition: NanoVDB.h:1763
type
Definition: core.h:1059
__hostdev__ Vec3 & operator+=(const Vec3 &v)
Definition: NanoVDB.h:1578
uint8_t ValueType
Definition: NanoVDB.h:1850
typename BuildT::ValueType ValueType
Definition: NanoVDB.h:3695
__hostdev__ bool hasMinMax() const
Definition: NanoVDB.h:7551
typename RootT::ValueType ValueType
Definition: NanoVDB.h:3990
__hostdev__ Vec3 & maxComponent(const Vec3 &other)
Perform a component-wise maximum with the other Vec3.
Definition: NanoVDB.h:1628
#define __hostdev__
Definition: NanoVDB.h:213
__hostdev__ auto set(const CoordType &ijk, ArgsT &&...args)
Definition: NanoVDB.h:4138
__hostdev__ bool isEmpty() const
Definition: NanoVDB.h:7571
__hostdev__ Map()
Default constructor for the identity map.
Definition: NanoVDB.h:3151
__hostdev__ ValueType & operator[](IndexType i)
Return a non-const reference to the given Coord component.
Definition: NanoVDB.h:1332
__hostdev__ ValueOnIterator cbeginValueOn() const
Definition: NanoVDB.h:5095
Dummy type for a 16-bit quantization of floating point values.
Definition: NanoVDB.h:276
__hostdev__ const NodeTrait< TreeT, LEVEL >::type * getNode() const
Definition: NanoVDB.h:7227
Implements Tree::getNodeInfo(Coord)
Definition: NanoVDB.h:3465
static __hostdev__ auto set(typename NanoRoot< BuildT >::Tile &, const ValueT &)
Definition: NanoVDB.h:8076
__hostdev__ bool operator!=(const Mask &other) const
Definition: NanoVDB.h:2962
static constexpr bool is_index
Definition: NanoVDB.h:451
static __hostdev__ Coord OffsetToLocalCoord(uint32_t n)
Definition: NanoVDB.h:5257
bool ValueType
Definition: NanoVDB.h:5729
C++11 implementation of std::is_floating_point.
Definition: NanoVDB.h:439
__hostdev__ T & operator[](int i)
Definition: NanoVDB.h:1554
__hostdev__ Vec3 & operator=(const Vec3T< T2 > &rhs)
Definition: NanoVDB.h:1545
__hostdev__ Vec3(T x, T y, T z)
Definition: NanoVDB.h:1523
LeafNodeType Node0
Definition: NanoVDB.h:3998
__hostdev__ Coord(ValueType n)
Initializes all coordinates to the given signed integer.
Definition: NanoVDB.h:1296
double mTaperD
Definition: NanoVDB.h:3148
const typename GridOrTreeOrRootT::RootNodeType::ChildNodeType::ChildNodeType Type
Definition: NanoVDB.h:3415
__hostdev__ NodeT * operator->() const
Definition: NanoVDB.h:5005
static __hostdev__ Coord min()
Definition: NanoVDB.h:1322
__hostdev__ const LeafNodeType * probeLeaf(const CoordType &ijk) const
Definition: NanoVDB.h:4624
__hostdev__ Coord & operator<<=(uint32_t n)
Definition: NanoVDB.h:1384
typename Node2::ChildNodeType Node1
Definition: NanoVDB.h:3997
C++11 implementation of std::enable_if.
Definition: NanoVDB.h:469
math::Extrema extrema(const IterT &iter, bool threaded=true)
Iterate over a scalar grid and compute extrema (min/max) of the values of the voxels that are visited...
Definition: Statistics.h:354
uint8_t mCode[1u<< (3 *LOG2DIM-1)]
Definition: NanoVDB.h:5568
typename BuildT::CoordType CoordType
Definition: NanoVDB.h:3697
__hostdev__ Iterator & operator++()
Definition: NanoVDB.h:2857
__hostdev__ bool isActive(uint32_t n) const
Definition: NanoVDB.h:6261
__hostdev__ uint32_t pos() const
Definition: NanoVDB.h:2883
GLint GLsizei count
Definition: glcorearb.h:405
8-bit red, green, blue, alpha packed into 32 bit unsigned int
Definition: NanoVDB.h:1840
__hostdev__ ChildNodeType * probeChild(const CoordType &ijk)
Definition: NanoVDB.h:4677
__hostdev__ ValueOffIterator beginValueOff() const
Definition: NanoVDB.h:6097
__hostdev__ const GridType & gridType() const
Definition: NanoVDB.h:7541
__hostdev__ Vec4(const Vec4T< T2 > &v)
Definition: NanoVDB.h:1731
__hostdev__ bool isCompatible() const
Definition: NanoVDB.h:954
Definition: format.h:895
__hostdev__ DenseIterator beginDense() const
Definition: NanoVDB.h:5139
__hostdev__ ValueOffIterator cbeginValueOff() const
Definition: NanoVDB.h:6098
C++11 implementation of std::is_same.
Definition: NanoVDB.h:418
static __hostdev__ uint32_t padding()
Definition: NanoVDB.h:6226
__hostdev__ ValueOnIterator beginValueOn()
Definition: NanoVDB.h:4511
__hostdev__ void clear()
Reset this accessor to its initial state, i.e. with an empty cache.
Definition: NanoVDB.h:6911
__hostdev__ NodeT & operator*() const
Definition: NanoVDB.h:5000
__hostdev__ auto getNodeInfo(const CoordType &ijk) const
Definition: NanoVDB.h:6598
__hostdev__ bool probeValue(const CoordType &ijk, ValueType &v) const
Definition: NanoVDB.h:6753
__hostdev__ ValueType getLastValue() const
If the last entry in this node's table is a tile, return the tile's value. Otherwise, return the result of calling getLastValue() on the child.
Definition: NanoVDB.h:5196
__hostdev__ T length() const
Definition: NanoVDB.h:1757
Trait to map from LEVEL to node type.
Definition: NanoVDB.h:6440
__hostdev__ DataType * data()
Definition: NanoVDB.h:6160
__hostdev__ Vec4 & minComponent(const Vec4 &other)
Perform a component-wise minimum with the other Vec4.
Definition: NanoVDB.h:1792
__hostdev__ Vec4 & operator+=(const Vec4 &v)
Definition: NanoVDB.h:1765
__hostdev__ ConstDenseIterator cbeginDense() const
Definition: NanoVDB.h:4565
float mVecF[3]
Definition: NanoVDB.h:3143
__hostdev__ bool isEmpty() const
Return true if this RootNode is empty, i.e. contains no values or nodes.
Definition: NanoVDB.h:4615
typename NanoLeaf< BuildT >::ValueType ValueT
Definition: NanoVDB.h:8074
typename NanoLeaf< BuildT >::FloatType FloatType
Definition: NanoVDB.h:8181
__hostdev__ Coord & operator=(const CoordT &other)
Assignment operator that works with openvdb::Coord.
Definition: NanoVDB.h:1336
__hostdev__ Coord & operator>>=(uint32_t n)
Definition: NanoVDB.h:1391
__hostdev__ void setAvg(const bool &)
Definition: NanoVDB.h:5758
__hostdev__ Coord round() const
Round each component of this Vec<T> to its closest integer value.
Definition: NanoVDB.h:1656
static __hostdev__ auto set(NanoLower< BuildT > &, uint32_t, const ValueT &)
Definition: NanoVDB.h:8078
static constexpr bool is_FpX
Definition: NanoVDB.h:456
static __hostdev__ uint32_t bitCount()
Return the number of bits available in this Mask.
Definition: NanoVDB.h:2815
Rgba8 & operator=(Rgba8 &&)=default
Default move assignment operator.
__hostdev__ void clear()
Reset this accessor to its initial state, i.e. with an empty cache. Noop since this template specializa...
Definition: NanoVDB.h:6582
__hostdev__ uint32_t gridIndex() const
Return index of this grid in the buffer.
Definition: NanoVDB.h:3720
__hostdev__ NodeT * operator->() const
Definition: NanoVDB.h:4394
Implements Tree::getDim(Coord)
Definition: NanoVDB.h:3459
ChildT ChildNodeType
Definition: NanoVDB.h:4320
__hostdev__ const NodeTrait< RootT, 2 >::type * getFirstUpper() const
Definition: NanoVDB.h:4129
GLenum src
Definition: glcorearb.h:1793
__hostdev__ void setValueOnly(uint32_t offset, const ValueType &v)
Sets the value at the specified location but leaves its state unchanged.
Definition: NanoVDB.h:6256
__hostdev__ const MaskType< LOG2DIM > & valueMask() const
Return a const reference to the bit mask of active voxels in this leaf node.
Definition: NanoVDB.h:6165
__hostdev__ const uint64_t & valueCount() const
Return total number of values indexed by the IndexGrid.
Definition: NanoVDB.h:7758
PcpNodeRef_ChildrenIterator begin(const PcpNodeRef::child_const_range &r)
Support for range-based for loops for PcpNodeRef children ranges.
Definition: node.h:558
__hostdev__ float Fract(float x)
Definition: NanoVDB.h:1121
__hostdev__ Version()
Default constructor.
Definition: NanoVDB.h:929
unsigned char byte
Definition: UT_Span.h:163