Hey,
So, for the past few weeks, I've been wanting to learn a little bit more about USD and Solaris. To combine the two, I've decided to model the Moana Island data set [www.technology.disneyanimation.com] through Solaris.
I successfully unwrapped most of the meshes, shapes, and instance point clouds. Now I need to focus on the shading part.
For now, it's great to see how responsive Karma is at handling that much data. Time to first pixel is extremely fast, in my opinion; I'd say it takes 3 to 4 seconds. In terms of RAM, I didn't measure it precisely, but it was around 8 to 10 GB. I'm looking forward to testing all of this with some displacement and more complex shader & texture calls now!
[ibb.co]
[ibb.co]
[ibb.co]
[ibb.co]
This is how I laid it out in USD for now. I don't know exactly how good it is, and I'd welcome any advice on it. The kinds are not perfectly tweaked, but they will be in the near future.
[ibb.co]
This is how it looks through Houdini GL. There is a parameter somewhere in the display preferences that gives you control over how many polygons the viewport will show before objects get displayed as bounding boxes. Very convenient.
[ibb.co]
This is the overall set dressing graph. Pretty neat! The disconnected branch on the right corresponds to the hero assets. For now, I don't understand why, but Karma crashes instantly when I try to render them. I'll likely send a repro scene to the Houdini support to find out if I'm the problem or if there is a bug.
[ibb.co]
A little close-up on a branch of the set, where we can see that it's mostly one asset storing several variants that are fed into an instancer, then payloaded into the main branch, and so on.
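A minimal usda sketch of that layering, with made-up prim and file names (the real ones come from the data set):

```usda
#usda 1.0

def Xform "isIronwoodA1" (
    prepend variantSets = "modelVariant"
    variants = {
        string modelVariant = "variantA"
    }
)
{
    variantSet "modelVariant" = {
        "variantA" {
            def Mesh "geo" {}
        }
        "variantB" {
            def Mesh "geo" {}
        }
    }
}

def PointInstancer "isIronwoodA1_instancer"
{
    rel prototypes = [ </isIronwoodA1> ]
    int[] protoIndices = [0, 1, 0]
    point3f[] positions = [(0, 0, 0), (5, 0, 2), (-3, 0, 7)]
}
```

A file like this is then payloaded into the main set branch, so the heavy geometry only loads when the payload is.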
[ibb.co]
How did I approach it?
I've made a Python script that imports, tweaks, and describes all the assets and sets from /obj/ to /stage/. It's a pretty long script (over 1000 lines), partly because I'm not the finest coder and partly because there are a lot of little exceptions here and there in the data set. But yeah, mostly because of the first reason.
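As a rough illustration of what the script has to chew through, here is a small Python sketch that walks one element description and yields a transform per instanced copy. It assumes the layout of the data set's element .json files (a top-level "transformMatrix" of 16 floats plus an "instancedCopies" dict of named copies); double-check against your local files before relying on it:

```python
def iter_copies(element_desc):
    """Yield (name, matrix_rows) for the base element and each
    instanced copy in a Moana-style element description dict.

    Assumed layout: element_desc["transformMatrix"] is a flat list of
    16 floats, and element_desc["instancedCopies"] maps copy names to
    dicts carrying their own "transformMatrix".
    """
    def rows(flat):
        # split the flat 16-float list into four 4-tuples
        return [tuple(flat[i:i + 4]) for i in range(0, 16, 4)]

    yield element_desc["name"], rows(element_desc["transformMatrix"])
    for name, copy in element_desc.get("instancedCopies", {}).items():
        yield name, rows(copy["transformMatrix"])
```

In the actual script each yielded matrix would become an instance point or a prim transform on the /stage/ side.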
Why am I doing it?
- Like I said, it's part of a learning process. I am not extremely familiar with Houdini, even though I've been using it from time to time for some years now. I'd like to change this in the fairly near future.
- I didn't find anyone who has tried to transform the Moana Island into USD yet. But I may have searched the wrong way; if you know someone who did this, please send me some links.
- I think it is extremely useful to rely on an open source technology like USD to benchmark several render engines and test USD implementations with the same “big ass” scene. Some examples are already there to help with that job (provided by SideFX, like the bar scene, which is awesome), but I think the Moana Island is pretty iconic. It's also a bigger stress test, overall.
When this is over, I'll release this work to the community so that it may help people get into USD. I'll also release the Python code that generated it all.
My goal is to finish all of this by the end of May, hopefully. I'm not full-time on it.
Anyway, this journey is not over yet, and this thread will be the witness of this work in progress. I also made it in order to ask for help with specific Solaris questions while the big picture is known.
These are the next steps (not listed in the order I will deal with them):
- Make the hero assets “Karma Friendly”.
I honestly don't know why it doesn't work out of the box, since it follows pretty much the same logic as the rest of the set dress.
- 2 asset trees are obj files that weigh a little less than 3 GB.
I can't open them in Houdini; the RAM usage skyrockets until the load fails. I successfully opened one file in Maya, which took a full night and 50 GB of RAM. The big problem is that those trees have hundreds of thousands of branches that are not combined. I'm pretty sure that combining them would make things much easier, but I don't have a solution to do that automatically/safely. Any help would be appreciated on this :)
- I'm trying, and failing, to transform all the .ptx files into bitmap files.
I'll post a thread somewhere here to ask for help on that matter.
- Creating shaders and pulling most of the data out of the material .json files from the data set
- Assigning shaders in the /stage/ context
- Replicating the lights/cameras that were provided
- Adding the ocean (only the still one)
- Adding the last simple little props
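For the two big tree objs, one idea I might try is to stream the file once and bucket face lines by their active material, instead of keeping the hundreds of thousands of tiny groups. A rough, untested Python sketch of that pass (real objs also have subtleties like negative indices that this ignores):

```python
def split_obj_by_material(lines):
    """Stream over OBJ text lines and bucket face lines by the active
    'usemtl' material, so huge numbers of tiny 'g' groups collapse
    into one object per material. Vertex data ('v', 'vt', 'vn') stays
    in a shared header, in original order, so absolute face indices
    remain valid.
    """
    header, buckets = [], {}
    current = None
    for line in lines:
        tag = line.split(" ", 1)[0]
        if tag == "usemtl":
            current = line.split(" ", 1)[1].strip()
            buckets.setdefault(current, [])
        elif tag == "f" and current is not None:
            buckets[current].append(line)
        elif tag in ("v", "vt", "vn"):
            header.append(line)
        # 'g'/'o' lines are deliberately dropped: that IS the combine step
    return header, buckets
```

Because it streams line by line, it never holds the whole 3 GB file as parsed geometry, only the text buckets.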
Cheers
Moana Island into USD with Solaris
6659 12 6- pgu
- Member
- 15 posts
- Joined: May 2014
- Offline
- pgu
- Member
- 15 posts
- Joined: May 2014
- Offline
Hey,
So I finally got to tackle the core of the “ptex bake into bitmap files” problem. I must admit it's been a pain. Not sure if this is considered difficult or very straightforward, but some things about it were very obscure in my opinion ^^ (like having to specify a “uvSet” with a different name than “uv” in the baketexture node, for example). But I may not have followed the most direct route, I don't know…
The basic idea is:
- create an HDA out of the solution I found
- use that HDA in the TOP context and bake those hundreds of .ptx files! It's going to be my first time using those nodes, I'm excited ;)
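The batch side can be as simple as generating one work item per ptex file. Here's a rough Python sketch of that enumeration (folder names are placeholders, and “rat” is just one possible output format):

```python
from pathlib import Path

def bake_work_items(ptx_root, out_root, ext="rat"):
    """Build one work item per .ptx file under ptx_root, mirroring the
    source folder structure under out_root. Each item pairs the input
    ptex with its target bitmap path, which is the kind of per-item
    data a TOP wedge setup would feed to the bake HDA."""
    ptx_root, out_root = Path(ptx_root), Path(out_root)
    items = []
    for ptx in sorted(ptx_root.rglob("*.ptx")):
        rel = ptx.relative_to(ptx_root).with_suffix("." + ext)
        items.append({"ptex": str(ptx), "output": str(out_root / rel)})
    return items
```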
I'm embedding the HDA I made today for this task. It's practically my first one, I must say, so it may not work properly in specific scenarios. This is the UI:
[ibb.co]
Hope it can help all the lost souls out there on that quest (which doesn't seem to come up a lot in our CG artist lives).
PS: Ultimately, is this HDA something I could upload to the content library? I don't know if it has to be approved first by the SideFX staff in order to preserve quality, or if we can do that on our own.
- tamte
pgu:
“like specifying an “uvSet” with a different name than “uv” in the baketexture node, for example”
Doesn't sound like you'd have to do that. Maybe try feeding parametric st coordinates into the uv input of your ptex Texture VOP; otherwise it will probably use uv. Not sure, as I don't use ptex, but I assume that may be what's happening.
pgu:
“Ultimately, is that HDA something I could upload to the content library ?”
I see only SideFX-produced content in that library, like example files, HDAs for SideFX tutorials, and various examples presented during the launch, etc…
For HDAs there is, for example, Orbolt.
Tomas Slancik
FX Supervisor
Method Studios, NY
- Siavash Tehrani
- wolfwood
Very cool indeed. I'm especially interested in the memory requirements so far; really promising! When attempting this in Mantra, I ended up having to use a Google Cloud instance with 130+ GB of RAM to render it all at once. (This was using a heavily nested packed primitive workflow.)
if(coffees<2,round(float),float)
- Sixjames1000
- pgu
Hey,
Thanks everyone for the support !
@tamte : Thanks for that. I must admit I haven't had time to test what you suggested, regarding the use of st instead of uv. But yeah, it might work and be less convoluted than what I found to make it work ^^
@DaJuice : Cheers mate!
@wolfwood : Yes, Karma's support for multi instancing is nailing it for this kind of scene!
@Sixjames1000 : Thanks! I'll share the setup when I feel it's a v1, meaning the most complete “unfold” I can do (objects, textures, shading, instances) according to the data set. Hopefully a release by the end of this month!
—
For the past few days I've been working on the texture extraction side of things. It's a more tedious task than I expected, but I feel like I'm nearly at the end of the process! I'll do another post later to explain in a bit more detail how I approached it, and you'll tell me whether there was a better route. It involved lots of Python lines, some TOP & COP nodes (contexts I had never worked with), and a fair amount of baking time.
Also, regarding the bug I had on the “Make the hero assets ‘Karma friendly'” step: the awesome SideFX support told me a related bug was fixed and released in H18.0.460, which came out today. So I'll try that version as soon as it gets installed at the office.
Cheers
- Sixjames1000
- mestela
- dlee
- pgu
Hey,
A little update for this project, and a question ^^ :
In my first “unwrapping” of the Moana Island, some things were obviously missing, because the RAM necessary to “draw” it in Houdini GL is now about 30-35 GB. Rendering it with Karma then adds another 30-35 GB of RAM (and time to first pixel is now something like 50 sec). I've set most of the heavy stuff (instancers) to a “render” purpose so that the viewport doesn't load it into the Houdini GL world.
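For the record, the purpose setup boils down to something like this usda fragment (prim names invented for the example):

```usda
def Xform "isCoastline"
{
    def PointInstancer "render_geo"
    {
        # heavy instanced geometry: skipped by the GL viewport,
        # picked up by Karma at render time
        uniform token purpose = "render"
    }

    def Mesh "proxy_geo"
    {
        # lightweight stand-in that the viewport draws instead
        uniform token purpose = "proxy"
    }
}
```

The viewport draws the default and proxy purposes while the render delegate resolves the render purpose, so the instancers never hit Houdini GL.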
All the shaders are now created/assigned. The lookdev doesn't look very good at the moment. I think it comes from the rough solution I came up with for the json materials translation: some attributes can't be found, or probably don't behave the same in the Karma principled shader; I don't know precisely. There are certainly also some failures in the ptex -> bitmap translation and the UV layout juggling. So, at some point, I'll manually improve some of that.
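That json-to-shader translation is essentially a lookup table. A stripped-down Python sketch of the idea (the mapping below is illustrative, not the exact table I use, and the shader parameter names should be checked against the principled shader's actual interface):

```python
# Illustrative mapping from the data set's Disney-BSDF-style material
# parameters to principled-shader-style parameter names.
PARM_MAP = {
    "baseColor": "basecolor",
    "roughness": "rough",
    "metallic": "metallic",
    "ior": "ior",
    "specTrans": "transparency",
}

def translate_material(mat):
    """Return a {shader_parm: value} dict for one material description,
    skipping parameters with no obvious counterpart rather than
    guessing a behavior that may not match."""
    return {PARM_MAP[k]: v for k, v in mat.items() if k in PARM_MAP}
```

The skipped parameters are exactly where my lookdev currently diverges, which is why some of this will need manual fixing.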
–
I have a little question about the basisCurves prim. How can I tweak the width/radius/pscale? I'm looking for a solution in /stage/ land, if possible.
Right now, I've added an attribWrangle after the sopImport with “Run on Elements of Array Attributes” checked and this VEXpression:
f@widths = 1.0;
It did make a change in the Karma render: the curves look like they have the same width as when they're drawn in Houdini GL (very thin). But changing the value doesn't do anything.
If you have an idea of what I'm missing here…?
Thanks !
Edited by pgu - June 7, 2020 18:52:58
- pgu
Hey,
I gave it a try with another VEXpression. Since the widths attribute is a float array, I tried to set a value for each curve, like so:
string primPath = "/isBeach_xgGrass_set_crv/isBeach_xgGrass_python";
float val = 1.0;

// get all the curves
int crvNumber[] = usd_attrib(0, primPath, "curveVertexCounts");

// create a tmp array containing the val value for each curve
float valHolder[];
for (int i = 0; i < len(crvNumber); i++) {
    valHolder[i] = val;
}

// set the tmp array into the widths primvar
usd_setattrib(0, primPath, "widths", valHolder);
Which gives me this :
It technically works (widths now contains an array of 5400 values, all set to 1.0), but Karma doesn't reflect any change when I modify the “val” variable in the VEXpression.
That would mean that:
- “widths” is not the attribute I expect to hold those pscale/width/thickness curve values?
- Karma in H18.0.416 doesn't take widths values into account?
- something else that I'm missing ^^?
Cheers
Edited by pgu - June 8, 2020 07:42:23
- wuzelwazel
pgu:
“- 2 asset trees are obj files that weigh a little less than 3 GB.
I can't open them in Houdini; the RAM usage skyrockets until the load fails. I successfully opened one file in Maya, which took a full night and 50 GB of RAM. The big problem is that those trees have hundreds of thousands of branches that are not combined. I'm pretty sure that combining them would make things much easier, but I don't have a solution to do that automatically/safely. Any help would be appreciated on this”
I'm not a Houdini expert, but I've spent a lot of time with the Moana data. I recently finished packaging the scene into Redshift proxy files.
You're definitely on the right track with the Ironwood trees. I was only able to work with them because the Cinema 4D obj importer allowed me to split the obj based on material assignments: all of the leaves were processed as one object and the branches as another. It opened relatively quickly and wasn't a memory hog. If you can find a method of doing the same, it should help you out. I could even share the already-combined objs with you if that helps.