H20 questions about the new animation system

Member
111 posts
Joined: Jan. 2018
Offline
In the keynote it was written as 150 hours of Vellum simulation, not "150k". Also, for the thin hippo 6k poses were used and for the inflated one 3k, and it was R&D.

This ML is fantastic! I hope it will be possible to add extra poses using LoRA or other checkpoint techniques, or by merging different models.

E.g. take the fully anatomically correct human prepared by SideFX, like the one from Unreal 5.3, and modify it into a more cartoony character using a LoRA checkpoint and extra poses. This mixing of ML models is a must-have option.

Anyway, I am trying to understand what this APEX is and... hmm, I have no clue xD

But I see that there are new manipulation gizmos! Finally, good news for animators! (Not those ugly thin ones from normal Houdini. Ahh, I hate the blurry highlights on them, they're soooo 90s.)

I have a QUESTION: is this APEX engine fast? And why is it fast compared to other frameworks? Does it use CUDA for processing this packed geometry and stuff? (All I care about is faaast feedback from animated characters.)

Are there any plans to cache those APEX characters in RAM to gain even more speed?

AND how is motion stored in geometry? I mean, WTF xD It's a hard concept to swallow.

What does this mean for an animator?
Is it possible to modify those animated paths (Bézier curves? What are they actually?) using Houdini SOP nodes? E.g. smoothing or resampling them?

Anyway, this APEX rocks, I can feel it.
Member
7899 posts
Joined: July 2005
Online
oldteapot7
I have a QUESTION: is this APEX engine fast? And why is it fast compared to other frameworks? Does it use CUDA for processing this packed geometry and stuff? (All I care about is faaast feedback from animated characters.)

In case you missed it, see also the other APEX thread [www.sidefx.com] where I tried to answer some questions about what APEX is.

The short answer is that APEX was created to overcome the performance limitations of using VOPs/VEX for KineFX rigs. Node execution is issued from the CPU at the moment, but the design is very flexible. The reason it is fast is that it was designed for executing lots of nodes that each do very little work, which is quite different from the traditional Houdini cook engine.
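To give a feel for the "lots of tiny nodes" idea, here is a toy sketch of a flat, pre-sorted node-graph evaluator in Python. This is purely illustrative; it is not SideFX's APEX implementation, and every name in it is made up.

```python
# Toy sketch: a node graph stored in topological order, so evaluating a
# frame is a flat loop with almost no per-node scheduling overhead.
# This is an illustration of the design idea only, NOT APEX itself.

def make_graph():
    # Each node is (function, input node indices). Because the list is
    # already sorted, every node's inputs are computed before it runs.
    nodes = [
        (lambda: 2.0, []),             # 0: constant
        (lambda: 3.0, []),             # 1: constant
        (lambda a, b: a + b, [0, 1]),  # 2: add -> 5.0
        (lambda a: a * a, [2]),        # 3: square -> 25.0
    ]
    return nodes

def evaluate(nodes):
    """Run every node once, feeding each its precomputed inputs."""
    values = [None] * len(nodes)
    for i, (fn, inputs) in enumerate(nodes):
        values[i] = fn(*(values[j] for j in inputs))
    return values[-1]

print(evaluate(make_graph()))  # 25.0
```

The point of the sketch is the shape of the loop: when nodes are tiny, the evaluator itself must add almost nothing per node, which is a different trade-off from a cook engine built around a few heavyweight operators.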
Member
7899 posts
Joined: July 2005
Online
oldteapot7
Are there any plans to cache those APEX characters in RAM to gain even more speed?

The rig evaluation isn't the bottleneck anymore. You can always do geometry caches as before.

AND how is motion stored in geometry? I mean, WTF xD It's a hard concept to swallow.

Animation is stored as keyframe channels inside new "channel" geometry primitives. So you can have static geometry containing everything one needs to evaluate a character for the entire shot.
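A toy illustration of what "motion stored in geometry" can mean in principle: keyframe times and values baked into static data, evaluated at any time t. The channel name and layout below are invented for the example and are not Houdini's actual channel-primitive format.

```python
# Hypothetical sketch of keyframe channels stored as static data.
# The "L_arm/rx" name and dict layout are made up for illustration.
from bisect import bisect_right

channels = {
    "L_arm/rx": {"times": [0.0, 1.0, 2.0], "values": [0.0, 45.0, 90.0]},
}

def eval_channel(chan, t):
    """Linearly interpolate a keyframe channel at time t, holding ends."""
    times, values = chan["times"], chan["values"]
    if t <= times[0]:
        return values[0]
    if t >= times[-1]:
        return values[-1]
    i = bisect_right(times, t) - 1          # segment containing t
    f = (t - times[i]) / (times[i + 1] - times[i])
    return values[i] * (1.0 - f) + values[i + 1] * f

print(eval_channel(channels["L_arm/rx"], 0.5))  # 22.5
```

Since the channels are just data on primitives, the whole shot's animation can travel with the geometry file rather than living in node parameters.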

Is it possible to modify those animated paths (Bézier curves? What are they actually?) using Houdini SOP nodes? E.g. smoothing or resampling them?

We don't have a lot of the operators on channel primitives fleshed out yet, but yes, that is the future.
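As an illustration of what "resampling" or "smoothing" a keyframe channel could mean, here is a small sketch with linear interpolation. The helper names are hypothetical and do not correspond to any actual Houdini SOPs.

```python
# Hypothetical channel operations, assuming linear interpolation
# between keys. Names and behavior are illustrative only.

def resample(times, values, n):
    """Resample a channel to n uniformly spaced keys over its range."""
    t0, t1 = times[0], times[-1]
    new_times = [t0 + (t1 - t0) * i / (n - 1) for i in range(n)]
    new_values = []
    for t in new_times:
        j = 0
        while j < len(times) - 2 and times[j + 1] < t:
            j += 1                      # find the segment containing t
        f = (t - times[j]) / (times[j + 1] - times[j])
        new_values.append(values[j] * (1 - f) + values[j + 1] * f)
    return new_times, new_values

def smooth(values, passes=1):
    """3-tap moving-average smoothing; endpoint keys are held."""
    for _ in range(passes):
        values = [values[0]] + [
            (values[i - 1] + values[i] + values[i + 1]) / 3.0
            for i in range(1, len(values) - 1)
        ] + [values[-1]]
    return values
```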
Member
3 posts
Joined: Oct. 2019
Offline
kodra
tamte
LukeP
It's definitely not a few sculpts; normally it would be in the thousands, per my understanding.

I wouldn't assume you need thousands for something that just mimics RBF weighting based on a few example poses

I'm not talking about training full skinning weights but blend shape weights based on local space bone transforms

RBF itself is an example-based technique; not sure why ML would need more examples to figure out simple weights.

Oh I see, you mean to infer {bone transforms -> blend shape weight}. Then the best solution is... probably just RBF.

We will only know when we actually see it, but I really don't think that with only a few examples a more complex network will perform better than RBF. The math behind RBF is actually very similar to a neural network.
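For context on the RBF discussion above, here is a minimal Gaussian-RBF interpolation sketch: a few example poses map to blend-shape weights, and the interpolant reproduces the examples exactly. All data and names are invented for illustration; real pose-space deformation setups differ in detail.

```python
# Toy Gaussian-RBF interpolation: examples -> target weights.
# Illustrative only; not any production PSD/corrective-shape setup.
import math

def rbf_weights(examples, targets, query, sigma=1.0):
    """Fit w in K*w = targets, then evaluate the interpolant at query."""
    def kernel(a, b):
        d2 = sum((x - y) ** 2 for x, y in zip(a, b))
        return math.exp(-d2 / (2.0 * sigma * sigma))

    n = len(examples)
    K = [[kernel(examples[i], examples[j]) for j in range(n)] for i in range(n)]
    t = list(targets)
    # Naive Gauss-Jordan elimination (fine for small example counts).
    for col in range(n):
        piv = K[col][col]
        for j in range(col, n):
            K[col][j] /= piv
        t[col] /= piv
        for row in range(n):
            if row != col and K[row][col]:
                f = K[row][col]
                for j in range(col, n):
                    K[row][j] -= f * K[col][j]
                t[row] -= f * t[col]
    w = t
    return sum(w[i] * kernel(query, examples[i]) for i in range(n))

# Two example poses (1-D feature: elbow angle) with known corrective weights.
examples = [(0.0,), (90.0,)]
weights = [0.0, 1.0]
# Querying at a training pose reproduces its weight exactly.
print(round(rbf_weights(examples, weights, (90.0,), sigma=30.0), 6))  # 1.0
```

This is the sense in which RBF is "example-based": the solve guarantees the known poses are hit exactly, and everything in between is a smooth kernel-weighted blend.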


Not trying to be a know-it-all, but let's be factual. The AI/ML space is innovating so fast that anything we say today is very likely to be much less accurate tomorrow. What used to need 24 GB of VRAM or more runs on cellphones now, and not even high-end ones. That isn't a great example, just an example. Fully expect that whatever issue you dream up will be a nothingburger in short order.
Member
373 posts
Joined: June 2023
Offline
vstevenson
Not trying to be a know-it-all, but let's be factual. The AI/ML space is innovating so fast that anything we say today is very likely to be much less accurate tomorrow. What used to need 24 GB of VRAM or more runs on cellphones now, and not even high-end ones. That isn't a great example, just an example. Fully expect that whatever issue you dream up will be a nothingburger in short order.

Yeah, except that's not factual. The most significant improvements we've seen in the past few years came directly from better models; in other words, more VRAM. You see people quantize them and make them smaller, but that's pretty much it. And the idea of quantization only works because of the models' sheer number of parameters in the first place. I'm not saying there aren't other innovations, but no, the 50M-parameter model you run on your phone is not going to beat GPT-4 any time soon.

I think I'll stop here, because if I commented more on this reply the mods would have to lock this thread.
Edited by kodra - Oct. 31, 2023 00:38:26
Member
7899 posts
Joined: July 2005
Online
Must resist ... do NOT feed the ML trolls!