Trying to digest what I just saw in the keynote… some questions come to mind:
- How is this (APEX and the Animation Context) different from KineFX? Does KineFX use a different type of skeleton? Will they merge over time? If not, when do you use which?
- With the new tools and ML stuff, any hope of being able to process video or audio to rough out speech or character animation?
- I didn’t see some ‘standard’ animation tools, for example pose library or dopesheet. Are they in there, just not shown?
- What about layering and NLA? What if I wanted to create a layer of an animation or mix it with a mocap? Do I do that in KineFX or this new Animation context?
- Compared to Maya’s way of animating, are there any glaring areas/gaps that still need to be addressed in future releases?
- Is the Animation context completely isolated (the APEX Animation Node), or can it actually integrate and interact with other things? For example, if I have RBD and I want to interact with the ragdoll?
- Any changes in blend shape animation or is that not supported in that context? For example is there a brush where one can rough out the blend shapes quickly?
- How does one skin the model to the rig? Are painting and XSI-like attribute value editing supported?
- Does the animation context work with the new muscles?
- Does this integrate with crowds?
- Are quad rigs supported?
I’m sure we’ll have more questions :-)
Cheers
H20 questions about the new animation system
LukeP
edward
- KineFX is the general umbrella term for the new procedural character tools in Houdini. It defines the basic skeleton geometry attributes and how to interpret a geometry skeleton.
- APEX is the rigging engine for evaluating a character's rigs. Originally, the idea was merely to use it to take controller input and output joint transforms. It then turned out that to really build the animation tools we wanted, more data was necessary than could fit in a single skeleton geometry, and splitting it across many SOP streams wouldn't have been very manageable. So the packed folders idea was born. It then followed that it was simpler to also have it run the Joint/BoneDeform inside the rig itself.
- It's still early days with a lot of work ahead of us, so there are still some gaps as we bring the existing "scene level" character toolset into KineFX, such as control animation layering, the Pose Library, Character Picker, and Pose Space Deformation. The Dopesheet is part of the Animation Editor, which has always been there. An NLE motion mixer was in the works but unfortunately didn't make it in time for H20. ML is of course being taken seriously for the future but, again, it's early days.
- The new animation "context" is the "viewport state" for the APEX Scene Animate SOP, which works on "APEX Scenes" (packed folder geometries with all the data needed for animation).
- Procedural skeleton animation layering is of course still an option, as always with KineFX. Esther showed a little of how this works in the keynote. You can evaluate the APEX graphs to output just per-frame skeletons that go into the traditional KineFX pipeline, or you can go in the other direction as well. This allows you to interoperate with all the other existing systems in Houdini. Note that we've also improved interoperability of KineFX transform attributes with Vellum.
- A variant of the ragdoll solver has been brought into APEX itself so that ragdoll animation on APEX rigs can be done as fast as possible.
- Blendshapes (and Bone Deform, for that matter) are currently done inside APEX rigs via SOPs, since every compilable SOP is available for APEX to invoke. In Houdini, "blendshapes" are just regular geometry and you sculpt them like any other geometry. At some point, they need to be added to the "APEX Scene". APEX rig controllers can be given custom channels, which come up when you select the controller.
- Skinning in Houdini hasn't changed: biharmonic or proximity initial skinning followed by weight painting, table editing, etc.
- The animation "context" is a node that evaluates APEX graphs to apply animation and (commonly) deform the skin. Once you have the deformed skin, it fits into the existing pipeline as usual. It's a long-term goal to bring more of CFX directly upstream into animation, but it's not clear what the best way is right now. Using an ML deformer is one way we can see it working for muscles in particular, as highlighted in the keynote.
- APEX rigs are great for crowds. We have an APEX node that can set transforms directly inside an agent primitive. This means you can set up a pipeline where you animate with your hero rig directly on an agent.
- Quad rigs are like any other rig. The autorig components right now are geared towards bipeds, but one can create their own component scripts for quadrupeds. And ditto for quadruped crowd agents, as they're just like any other rigged character, be they bipeds, birds, trees, etc.
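The "controller input in, joint transforms out" evaluation described above is, at its core, forward kinematics over a joint hierarchy. A minimal sketch in Python/numpy, assuming a simple parent-index skeleton layout (this only illustrates the math, not the actual APEX API; the joint chain is invented):

```python
import numpy as np

def world_transforms(local: list, parent: list) -> list:
    """Compute world-space joint transforms from local 4x4 matrices.

    parent[i] is the parent joint index, or -1 for the root.
    Joints are assumed to be ordered so parents come before children.
    """
    world = [None] * len(local)
    for i, m in enumerate(local):
        world[i] = m if parent[i] == -1 else world[parent[i]] @ m
    return world

def translate(x, y, z):
    m = np.eye(4)
    m[:3, 3] = [x, y, z]
    return m

# Tiny 3-joint chain: root -> spine -> head, each offset 1 unit in Y.
local = [translate(0, 0, 0), translate(0, 1, 0), translate(0, 1, 0)]
world = world_transforms(local, parent=[-1, 0, 1])
print(world[2][:3, 3])  # the head joint ends up at (0, 2, 0) in world space
```

A real rig graph layers IK solvers, constraints, and deformers on top of this pass, but the pull-the-hierarchy evaluation is the same idea.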
LukeP
Edward, maybe one more question if you don’t mind: what other things could the APEX architecture/graph be usable for?
Regards.
tamte
LukeP
what other things could the APEX architecture/graph be usable for?

husk procedurals also use a graph stored as geometry, and execution is delayed until the procedural needs to be resolved, which sounds very similar to the new rigging approach. So I can imagine APEX being useful for authoring those in the future, which could alleviate the headaches of the current approach of trying to make the SOP graph invokable, and the even bigger headaches when you need to edit it without having the source SOP network.
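The pattern being described here, a graph stored as plain data whose execution is deferred until someone pulls on an output, can be sketched in a few lines of Python. This only illustrates the delayed-evaluation idea, not how husk or APEX actually serialize their graphs; the ops and node names are invented:

```python
# A node graph stored as plain data: each node names an op and its inputs.
# Nothing runs at authoring time; evaluation is deferred until resolve().
GRAPH = {
    "base":   {"op": "value",    "args": [2.0]},
    "scaled": {"op": "multiply", "args": ["base", 5.0]},
    "result": {"op": "add",      "args": ["scaled", 1.0]},
}

OPS = {
    "value":    lambda x: x,
    "multiply": lambda a, b: a * b,
    "add":      lambda a, b: a + b,
}

def resolve(graph, name, _cache=None):
    """Evaluate one output node, recursively pulling on its inputs."""
    cache = {} if _cache is None else _cache
    if name in cache:
        return cache[name]
    node = graph[name]
    # String args reference upstream nodes; anything else is a literal.
    args = [resolve(graph, a, cache) if isinstance(a, str) else a
            for a in node["args"]]
    cache[name] = OPS[node["op"]](*args)
    return cache[name]

print(resolve(GRAPH, "result"))  # 11.0
```

Because the graph is just data, it can be saved on geometry, shipped to another process, and resolved there, which is what makes the approach attractive for both procedurals and rigs.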
Edited by tamte - Oct 28, 2023 12:01:28
Tomas Slancik
FX Supervisor
Method Studios, NY
edward
If you mean in H20, I'm not sure but someone will find other uses for it. For Houdini itself, APEX is a candidate for any future new features that require graph evaluation.
LukeP
Would things like dynamics, comp or image synthesis benefit from this type of graph?
Regarding the NLE and Pose Based deformation: hope you guys can squeeze those in soon so we don’t have to wait 14 months for 20.5.
Either way, amazing work in H20. Mind-blowing how much you guys managed to put into one release.
tamte
edward
APEX shares the same limitation in that only compilable SOPs (aka "SOP verbs") can be used.
the issue with building compilable graphs in SOPs today, beyond this limitation, is that SOPs allow you to do things in ways that simply don't compile, which is a step even further from compilable blocks: it requires advanced techniques like parameter overrides from spare inputs instead of expressions, so it's difficult to flip existing setups to work as procedurals without deeper knowledge
so any framework that guarantees any graph a user builds can be invoked would be a step forward; fingers crossed that APEX will be able to fill this gap
tamte
LukeP
Pose Based deformation - hope you guys can squeeze those in soon and we don’t have to wait 14 months for 20.5

While waiting, you can probably already use ML to do pose space deformation in H20, since it shouldn't matter whether your pose examples are made using sim (as in the presentation), sculpting, or procedurally.
LukeP
tamte
while waiting you can probably already use ML to do pose space deformation in H20
Agreed, but training models is not something artists and animators should have to worry about.
Also hoping that at some point the whole animation cache can just be sculpted on, to fix minor issues here and there.
Was a bit disappointed not to see any basic sculpting enhancements in H20 (along with no rumoured comp revamp and image synthesis), but I guess we can’t have it all at once lol. It’s an amazing release; watched the keynote twice already.
But I spoke to a few industry professionals who use Houdini for VFX, and they’re all worried that Houdini has stopped innovating in the VFX space and will lose focus due to rendering and animation priorities. Hope that’s not the case; could have used OpenCL with a sparse solver in this release though.
kodra
The Houdini keynote has something like "150k hours of vellum", so I think the ML skinning isn't something affordable for small studios...

tamte
As I said, you don't have to use vellum or simulated shapes to train your ML. You can potentially create all the examples by sculpting, or procedurally, the same way traditional pose-space examples are defined for RBF interpolation.
It's obviously not meant to replace traditional techniques, but it could be handy in the meantime. And I can't imagine why it wouldn't be useful overall, as a trained ML model can potentially be more robust in edge cases than RBF-based pose space interpolation.
As long as training is made into a simple UX, it should be no more difficult for an artist.
kodra
It's very hard to imagine you can do it with manually sculpted models. I think you massively underestimate how large training data sets for ML are.
The keynote doesn't say how many examples are needed to train a good model, but if the number is less than 1000 I will be really, really surprised. My guess is at least 20k.
edward
tamte
As long as training is made into a simple UX it should be no more difficult for an artist
We're actually not that far off in terms of evaluation, as APEX already has the nodes for doing RBF interpolation. What's missing is the UX that goes along with it. If someone wanted to manually sculpt shapes for RBF and do their own setup for this in H20, it's probably doable, with a minimal amount of work needed from SideFX to support it.
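For anyone curious what the manual-sculpt route involves, the RBF interpolation itself is only a few lines. A minimal numpy sketch, assuming a Gaussian kernel; the example angles and blendshape weights below are invented for illustration (APEX's own RBF nodes presumably handle this internally):

```python
import numpy as np

def rbf_fit(poses, values, sigma=30.0):
    """Solve for RBF weights that exactly interpolate the example pairs."""
    d = np.linalg.norm(poses[:, None, :] - poses[None, :, :], axis=-1)
    phi = np.exp(-(d / sigma) ** 2)  # Gaussian kernel matrix
    return np.linalg.solve(phi, values)

def rbf_eval(poses, weights, query, sigma=30.0):
    """Blend the example values based on distance from the query pose."""
    d = np.linalg.norm(query[None, :] - poses, axis=-1)
    return np.exp(-(d / sigma) ** 2) @ weights

# Example pairs: elbow angle in degrees -> corrective blendshape weight.
poses  = np.array([[0.0], [90.0], [140.0]])  # angles the artist sculpted at
values = np.array([0.0, 0.6, 1.0])           # chosen shape weights per pose

w = rbf_fit(poses, values)
# The interpolant hits each example exactly and blends smoothly in between.
print(rbf_eval(poses, w, np.array([90.0])))
print(rbf_eval(poses, w, np.array([45.0])))
```

The same structure generalizes to multi-joint poses (wider pose vectors) and to vector-valued outputs (one weight per blendshape), which is what a pose-space deformation setup would actually drive.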
tamte
kodra
It's very hard to imagine you can do it with manually sculpted models.

I don't know if you need that many samples if you approach it as specific example pairs, in the same way as traditional pose-space angle / local corrective shape pairs.
I'd assume the same number of sculpts should be enough.
Of course, if you want to train it on high-dimensional pose / full skin shape pairs, you may need a lot of examples.
But I admit my understanding of ML is limited, so I may be wrong.
LukeP
tamte
But I admit my understanding of ML is limited, so I may be wrong
Nope, you’re right. It’s definitely not a few sculpts; it would normally be in the thousands, per my understanding.
tamte
LukeP
It’s definitely not a few sculpts and normally would be in thousands per my understanding.
I wouldn't assume you need thousands for something that just mimics RBF weighting based on a few example poses.
I'm not talking about training full skinning weights, but blendshape weights based on local-space bone transforms.
RBF itself is an example-based technique; not sure why ML would need more examples to figure out simple weights.
kodra
tamte
RBF itself is an example-based technique; not sure why ML would need more examples to figure out simple weights
Oh I see, you mean to infer {bone transforms -> blend shape weights}. Then the best solution is... probably just RBF.
We will only know when we actually see it, but I really don't think that with only a few examples a more complex network will perform better than RBF. The math behind RBF is actually very similar to a neural network's.
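To illustrate that similarity: an RBF interpolant can literally be written as a one-hidden-layer network whose hidden units are fixed Gaussian kernels centered on the example poses, with only the linear output layer "trained" (via a direct solve instead of gradient descent). A sketch with invented numbers:

```python
import numpy as np

# An RBF interpolant as a tiny network: a hidden layer of fixed Gaussian
# units (one per example pose) followed by a learned linear output layer.
centers = np.array([[0.0], [90.0], [140.0]])  # hidden "neurons" = example poses
targets = np.array([0.0, 0.6, 1.0])           # desired blendshape weights
sigma = 30.0

def hidden(x):
    # Hidden-layer activations: Gaussian response to distance from each center.
    d = np.linalg.norm(x[None, :] - centers, axis=-1)
    return np.exp(-(d / sigma) ** 2)

# "Training" the output layer is a single linear solve over the examples.
H = np.stack([hidden(c) for c in centers])
out_weights = np.linalg.solve(H, targets)

def network(x):
    return hidden(x) @ out_weights  # forward pass: hidden layer -> linear readout

print(network(np.array([90.0])))  # reproduces the 0.6 example weight
```

The practical difference from a general MLP is that the hidden layer here is fixed by the examples, so there is nothing data-hungry to learn, which is why a handful of sculpts can be enough for this formulation.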