Houdini 12 .bgeo/.geo (new) format

Member
5 posts
Joined: April 2012
Hi!

I'm about to write a .bgeo reader, and possibly a writer, based on the new .bgeo format.

I have browsed the HDK and documentation and found file format information, as well as the link to the open-source Houdini GPD library.

But everything I read seems to refer to the old format (both .bgeo and .geo). I downloaded the Apprentice version 12.0.543.9 and exported a .geo file, and it does not look like the file format explained here: http://www.sidefx.com/docs/houdini12.0/io/formats/geo [sidefx.com]
It looks more like some JSON-inspired format.

Does anyone know where I can find detailed information about the new file format? Any help is appreciated.
Member
4271 posts
Joined: July 2005
Have a look at $HFS/houdini/public/hgeo and $HFS/houdini/public/binary_json
if(coffees<2,round(float),float)
Member
12669 posts
Joined: July 2005
There is a bit of info here too:
http://www.sidefx.com/docs/hdk12.0/hdk_ga_using.html#HDK_GA_FileFormat [sidefx.com]
Jason Iversen, Technology Supervisor & FX Pipeline/R+D Lead @ Weta FX
also, http://www.odforce.net [www.odforce.net]
Member
5 posts
Joined: April 2012
thx
Member
5 posts
Joined: April 2012
Does anyone know where I can find
UT_JSONParser.C
which is referenced in the HDK docs,

or are implementations not included in the HDK?
Member
1390 posts
Joined: July 2005
ChristopherSWE
Does anyone know where I can find
UT_JSONParser.C
which is referenced in the HDK docs,

or are implementations not included in the HDK?

I think it's a typo. They meant the *.h file, since mentioning a “license cost” wouldn't make sense for an open-source implementation.
Member
5 posts
Joined: April 2012
Ok, too bad. Well, I've looked at the UT_JSON* files and I'm starting to see some light at the end of the tunnel, but an example using this API would have been quite nice.

As I've understood it so far, to parse a file you first need to create a UT_IStream (basically a modified istream) and also derive from UT_JSONHandle, which has lots of callbacks that get triggered by the UT_JSONParser class, e.g. at the beginning of an array (which in this case I interpret as a JSON array, i.e. ‘[’). When these callbacks are triggered I am responsible for actually reading the data, which can be done using the traverser class and the UT_JSONParser methods, e.g. parseValue, parseUniformArray, etc.

One fundamental thing I'm not certain about is whether the parseObject methods in UT_JSONParser parse a whole file, i.e. whether “object” in this case refers to the whole file rather than a JSON object. I also don't quite understand how the actual parsing of the file is done: am I responsible for traversing the stream, or will this magically be taken care of in the JSONParser class while I do all the work in the callbacks? :?

Well, any thoughts are welcome.
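For what it's worth, here is a rough Python sketch of the callback model described above. It is illustrative only: the class and method names are made up and are not the real UT_JSONParser/UT_JSONHandle API (which is C++). The point it demonstrates is that the parser owns the stream traversal and fires events, while the derived handle only reacts to them:

```python
import json

class Handle:
    """Collects parse events; loosely analogous to deriving from
    UT_JSONHandle. Method names here are invented for illustration."""
    def __init__(self):
        self.events = []
    def begin_array(self):  self.events.append("[")
    def end_array(self):    self.events.append("]")
    def begin_map(self):    self.events.append("{")
    def end_map(self):      self.events.append("}")
    def key(self, k):       self.events.append(("key", k))
    def value(self, v):     self.events.append(("value", v))

def drive(node, handle):
    # The "parser" walks the document and triggers handler callbacks;
    # the handler never advances the stream itself.
    if isinstance(node, list):
        handle.begin_array()
        for item in node:
            drive(item, handle)
        handle.end_array()
    elif isinstance(node, dict):
        handle.begin_map()
        for k, v in node.items():
            handle.key(k)
            drive(v, handle)
        handle.end_map()
    else:
        handle.value(node)

h = Handle()
drive(json.loads('["fileversion", "12.0.543.9", "pointcount", 8]'), h)
print(h.events)
# → ['[', ('value', 'fileversion'), ('value', '12.0.543.9'),
#    ('value', 'pointcount'), ('value', 8), ']']
```

So in this reading, yes: the parser takes care of traversal, and your code lives entirely in the callbacks.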
Member
454 posts
Joined: July 2005
Has anyone seen performance stats on the new bgeo format?

What have people argued for/against when discussing changing over?

Thanks!
Member
454 posts
Joined: July 2005
The word is:

“The geometry file format was updated primarily to support the new features introduced in the new geometry library for H12, not in an effort to correct performance problems with the old format. That said, the new format, while significantly more verbose, is much more closely tied to the structure of the new geometry library, and so should be more efficient in almost all cases.

The new file format is much more robust and extensible, allowing HDK created primitive and attribute types to be saved and loaded. It supports all the new attribute storage types natively and so can be more compact. It's paged and has support for constant pages.

Unless you need to load the geometry in an application that only understands the old format, you should be using the new.”
Member
96 posts
Joined: May 2008
The full info should be in a file called GPD.txt; in my case it is here: /Library/Frameworks/Houdini.framework/Versions/12.0.556/Resources/houdini/public/GPD/GPD.txt (that's on a Mac).
Member
454 posts
Joined: July 2005
That file refers to the old bgeo format, and the SourceForge code it refers to is very outdated, and doesn't work in every case even then. I know because I'm a maintainer…

It would be great to see SESI release similar tools for the new bgeo format. I guess Alembic kind of reduces the demand for that though…

Cheers
Member
96 posts
Joined: May 2008
Ah, sorry I wasn't aware of that. And yes, Alembic looks very promising, and I'm eagerly awaiting their Python bindings (which I hear are coming out soon…)
Member
96 posts
Joined: May 2008
Is there any new info on the new file format yet?
Member
15 posts
Joined: March 2010
Hi,

I'm also planning to implement a reader/writer of the new format in C++. I had a look at the Python reference implementation which comes with the install (located in $HDIR/houdini/public/binary_json/). When I save a geo file I get a proper JSON file, and running the Python scripts (cat/validate) everything looks fine. Saving my geometry as a bgeo file gives me some binary data with an NSJb magic number, so that looks good. However, running the validate/cat Python scripts produces the following error message:


houdini_bjson> python json_binary_cat.py test.bgeo
test.bgeo
["fileversion","12.0.543.9","pointcount"
============================================================
Parsing failed!
============================================================
JSON Parsing: Read -1 bytes past end of stream



To summarize: running the scripts on a geo (JSON) file works fine, but it fails on a binary JSON file.

Does anybody have an idea what's going on here? Looks like a bug in the reference implementation.

Thanks for any kind of advice in advance,
David

P.S.: if someone wants to join forces on this, feel free to pm me.
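For anyone debugging the same thing, a trivial first sanity check is to look at the first four bytes of the file. As noted above, the binary files observed here start with an NSJb magic number, while ASCII .geo files are plain JSON text (the magic value is taken from this thread's observation, not from a spec):

```python
def is_binary_geo(header: bytes) -> bool:
    # Binary .bgeo (binary JSON) streams reportedly begin with the
    # magic bytes b"NSJb"; ASCII .geo files start with plain JSON text.
    # Treat the magic value as an assumption from this thread.
    return header[:4] == b"NSJb"

# e.g. pass the first four or more bytes read from the file:
print(is_binary_geo(b"NSJb\x01"))        # binary JSON stream
print(is_binary_geo(b'["fileversion"'))  # ASCII JSON stream
```

This only tells you which of the two parsers to hand the stream to; it says nothing about whether the rest of the file is well formed.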
Member
15 posts
Joined: March 2010
Hi,

I have a question regarding the binary JSON. Does anybody know what the packing means for the attribute data section? It seems the packing is specified as a uniform array, where for 4-component vectors a packing of [3,1] means the layout is xyz xyz … w w. I wonder what the packing would look like for xyzwxyzw. Does anybody know exactly?

I would guess that xyzwxyzw packing would be [4]
and xxxyyyzzzwww would be [1,1,1,1], but I am not sure…

Thanks,
David
Staff
1081 posts
Joined: July 2005
skydave
Hi,
Does anybody know what the packing means for the attribute data section? It seems the packing is specified as a uniform array, where for 4-component vectors a packing of [3,1] means the layout is xyz xyz … w w. I wonder what the packing would look like for xyzwxyzw. Does anybody know exactly?

I would guess that xyzwxyzw packing would be [4]
and xxxyyyzzzwww would be [1,1,1,1], but I am not sure…

The “packing” and “pagesize” fields are only used for interpreting how the data is packed in the “rawpagedata” array. The pages are sequential in the “rawpagedata” array, and each page is packed as per the “packing” field.

Suppose you have 4 elements with values (X0,Y0,Z0,W0) through (X3,Y3,Z3,W3), and your page size is 2, for a simple example.

A packing of [3,1] means that

“rawpagedata”,[
X0, Y0, Z0, X1, Y1, Z1, W0, W1 # page 0 (subvector0, subvector1)
X2, Y2, Z2, X3, Y3, Z3, W2, W3 # page 1 (subvector0, subvector1)
]


A packing of [4] means that

“rawpagedata”,[
X0, Y0, Z0, W0, X1, Y1, Z1, W1 # page 0 (subvector0)
X2, Y2, Z2, W2, X3, Y3, Z3, W3 # page 1 (subvector0)
]


A packing of [1,1,1,1] means that

“rawpagedata”,[
X0, X1, Y0, Y1, Z0, Z1, W0, W1 # page 0 (subvector0, subvector1, subvector2, subvector3)
X2, X3, Y2, Y3, Z2, Z3, W2, W3 # page 1 (subvector0, subvector1, subvector2, subvector3)
]


Worth noting is that the packing is optional; if missing, it means the same as a packing of the full tuple size (here, [4]) would.

Finally, we internally also support a “constantpageflags” field, which is just an array of boolean arrays, with one boolean array for each subvector, each value indicating whether that page is constant.

For example, with a packing of [3,1], and X0=X1, Y0=Y1 and Z0=Z1, we could have

“constantpageflags”,[[true,false],[false,false]]
“rawpagedata”,[
X0, Y0, Z0, W0, W1 # page 0 (subvector0 constant, subvector1)
X2, Y2, Z2, X3, Y3, Z3, W2, W3 # page 1 (subvector0, subvector1)
]


You probably don't need to worry too much about this last detail. Houdini won't use “constantpageflags” by default, though one could save geometry files with it using the HDK.
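To double-check that reading of the layout, here is a small Python sketch (my own reconstruction from the description above, not SESI code) that rebuilds per-element tuples from a “rawpagedata” array given a packing and a page size. Constant pages are deliberately not handled:

```python
def unpack_rawpagedata(raw, packing, pagesize, count):
    # Rebuild per-element tuples from paged, subvector-packed data.
    # Within each page, each subvector's components are stored
    # contiguously for all elements of that page before the next
    # subvector's run starts. "constantpageflags" is ignored here.
    tuplesize = sum(packing)
    elems = [[None] * tuplesize for _ in range(count)]
    pos = 0
    npages = (count + pagesize - 1) // pagesize
    for page in range(npages):
        first = page * pagesize
        n = min(pagesize, count - first)   # last page may be short
        comp = 0
        for sub in packing:                # one run per subvector
            for e in range(n):
                for c in range(sub):
                    elems[first + e][comp + c] = raw[pos]
                    pos += 1
            comp += sub
    return [tuple(e) for e in elems]

# The [3,1] example above, with pagesize 2 and 4 elements
# (X0..W3 replaced by the numbers 1..16 in tuple order):
raw = [1, 2, 3, 5, 6, 7, 4, 8,         # page 0: xyz xyz, then w w
       9, 10, 11, 13, 14, 15, 12, 16]  # page 1
print(unpack_rawpagedata(raw, [3, 1], 2, 4))
# → [(1, 2, 3, 4), (5, 6, 7, 8), (9, 10, 11, 12), (13, 14, 15, 16)]
```

The same function reproduces the [4] and [1,1,1,1] layouts from the post when given those packings, which is a handy way to sanity-check a loader against hand-written page data.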
Member
15 posts
Joined: March 2010
Hi Ondrej,

thanks for your explanation, very useful. Say, is there some sort of document which explains the JSON schema for Houdini (b)geo files? All we have is the hgeo.py from the Houdini install, but it's not really complete.

David
Member
15 posts
Joined: March 2010
Hi,

now I'm looking at loading volume data from (b)geo files, and I wonder what the different compression types do:

“compressiontypes”,[“raw”,“rawfull”,“constant”,“fpreal16”,“FP32Range”],

What's the difference between raw and rawfull? What does FP32Range look like?

Also I wonder how the tiledarray works. It looks like it's just pages of data from splitting the array into parts. What I don't get is what drives the size of the tiles, which seems to be random. Does a tile have some spatial relationship, or is it just a chunk of the memory stream?

Last question: I see the (b)geo format sometimes uses arrays in a way where every first item is a string and every second item depends on the value of the string that came before it, e.g. pointcount->int, topology->array. Why don't you use maps for these kinds of key/value pairs?


Thanks for your time, much appreciated.

David
Staff
1081 posts
Joined: July 2005
skydave
Say, is there some sort of document which explains the JSON schema for Houdini (b)geo files? All we have is the hgeo.py from the Houdini install, but it's not really complete.

Sadly, not yet.
Staff
1081 posts
Joined: July 2005
skydave
What's the difference between raw and rawfull? What does FP32Range look like?

Also I wonder how the tiledarray works. It looks like it's just pages of data from splitting the array into parts. What I don't get is what drives the size of the tiles, which seems to be random. Does a tile have some spatial relationship, or is it just a chunk of the memory stream?

I don't know offhand. Maybe somebody else will chime in.

skydave
Last question: I see the (b)geo format sometimes uses arrays in a way where every first item is a string and every second item depends on the value of the string that came before it, e.g. pointcount->int, topology->array. Why don't you use maps for these kinds of key/value pairs?

JSON maps are unordered, so to remain true to the standard we use arrays to impose an order.
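That means a loader should keep the top-level array as an ordered list of (key, value) pairs rather than converting it to a map. A minimal Python sketch of that pairing step (my own illustration, not part of hgeo.py):

```python
import json

def kv_pairs(flat):
    # A (b)geo top-level value is a flat array of alternating keys
    # and values; pair up consecutive items, preserving file order.
    it = iter(flat)
    return list(zip(it, it))  # zip pulls two items per pair from one iterator

doc = json.loads('["fileversion", "12.0.543.9", "pointcount", 8]')
print(kv_pairs(doc))
# → [('fileversion', '12.0.543.9'), ('pointcount', 8)]
```

Keeping the pairs in a list (or an ordered mapping) also makes round-tripping a file byte-faithful in key order, which a plain unordered dict would not guarantee.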