Static Mesh Skeletal Animation baking - new Labs tool?
rendereverything
Hello. I recently discovered an alternative approach to VAT (vertex animation textures) for skeletal meshes. It is already included in Unreal Engine in the form of a 3ds Max script (StaticMeshSkeletalAnimation.ms) and the material function ms_StaticMeshSkeletalAnimation.
It is explained a bit here: https://vimeo.com/266582237
The idea is to bake a texture that holds the bone assignments for each vertex, then bake the bone animation to separate textures.
This way animations can be shared between different meshes.
The animation textures are also much smaller, since only 2 pixels per bone per frame are needed.
This sounds like a great idea and it would be cool to have support for it in Labs.
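To give a rough sense of the size difference, here is a quick back-of-the-envelope comparison (the vertex, bone and frame counts below are made-up example numbers, just to show the scaling):

```python
# Rough texture-footprint comparison: classic per-vertex VAT vs. bone-based baking.
# All counts below are hypothetical example values, not measurements.

num_vertices = 20000   # vertices in the skinned mesh
num_bones    = 60      # bones in the skeleton
num_frames   = 120     # baked animation frames

# Classic VAT: one pixel per vertex per frame (position texture only).
classic_vat_pixels = num_vertices * num_frames

# Bone-based baking: 2 pixels per bone per frame
# (one for bone position, one for bone rotation).
bone_vat_pixels = 2 * num_bones * num_frames

print(f"classic VAT : {classic_vat_pixels:>12,} pixels")
print(f"bone-based  : {bone_vat_pixels:>12,} pixels")
print(f"ratio       : {classic_vat_pixels / bone_vat_pixels:.0f}x smaller")
```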
MaiAo (Staff)
We will do this in Labs VAT 4.
rendereverything
Great! Meanwhile I created my own version (HDA + Unreal Material Functions). I still need to clean it up a bit and then I can share it.
How my tool works:
I decided to support a maximum of 2 bone influences per point to keep things simple (enough for most cases). My current use case is background crowd characters based on Unreal Niagara particles/mesh instances.
The HDA bakes out 3 textures (rough layout sketch below):
1. Per mesh point - RGB (bone index 1, bone weight 1, bone index 2); weight 2 is just 1.0 - weight 1. (This data could also be stored in vertex color or in additional UV channels.)
2. and 3. Per bone, per animation frame - RGB (bone position in world space) and RGBA (quaternion - bone orientation delta from the rest pose).
U direction = animation frames, V direction = bone index. (Texture size is bone count x frame count.)
The first frame is always the skeleton rest pose. I sequence several animation clips together so that I can change the active animation in the shader and do transition blending.
The mesh is exported in the rest pose and looks up the bone index texture through UVs stored in UV2.
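This is not the actual HDA code, just a rough numpy sketch of how the three textures are laid out, assuming the per-point bone data and the per-frame bone transforms have already been gathered into arrays (the function and argument names are only for illustration):

```python
import numpy as np

def bake_textures(point_bone_data, bone_positions, bone_rotations):
    """Rough layout of the three bakes (illustration only, not the HDA code).

    point_bone_data : (num_points, 3) array
        Per point: (bone index 1, bone weight 1, bone index 2);
        weight 2 is implied as 1.0 - weight 1.
    bone_positions  : (num_frames, num_bones, 3) array
        World-space bone positions per frame (frame 0 = rest pose).
    bone_rotations  : (num_frames, num_bones, 4) array
        Bone orientation deltas from the rest pose as quaternions (x, y, z, w).
    """
    num_points = point_bone_data.shape[0]
    num_frames, num_bones = bone_positions.shape[:2]
    assert bone_rotations.shape[:2] == (num_frames, num_bones)

    # Texture 1: per mesh point, RGB = (bone index 1, weight 1, bone index 2),
    # laid out as a 1 x num_points strip and looked up through UV2.
    skin_tex = point_bone_data.reshape(1, num_points, 3).astype(np.float32)

    # Textures 2 and 3: U direction = frames, V direction = bone index,
    # i.e. image rows = bones, image columns = frames.
    pos_tex = bone_positions.transpose(1, 0, 2).astype(np.float32)  # (bones, frames, 3)
    rot_tex = bone_rotations.transpose(1, 0, 2).astype(np.float32)  # (bones, frames, 4)

    return skin_tex, pos_tex, rot_tex
```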
Benefits of this approach:
1. Really small animation textures.
2. Animations can be shared between meshes with similar skeletons - although the skeletal proportions have to be very similar, because all bone position animation is baked as a final transform (not local, since I don't evaluate the bone hierarchy in the shader).
This also makes it easy to share animations between LOD levels with different topology (point positions).
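Roughly, the per-point deform looks like this (a simplified CPU-side Python sketch of the math, not the actual material function; the names are just for illustration):

```python
import numpy as np

def quat_rotate(q, v):
    """Rotate vector v by unit quaternion q = (x, y, z, w)."""
    u, w = q[:3], q[3]
    return v + 2.0 * np.cross(u, np.cross(u, v) + w * v)

def deform_point(p_rest, bone1, weight1, bone2, pos_tex, rot_tex, frame):
    """Blend the two bone influences for one rest-pose point.

    pos_tex[bone, frame] = world-space bone position (frame 0 = rest pose).
    rot_tex[bone, frame] = quaternion delta from the rest pose (x, y, z, w).
    """
    weight2 = 1.0 - weight1
    deformed = np.zeros(3)
    for bone, weight in ((bone1, weight1), (bone2, weight2)):
        rest_bone_pos = pos_tex[bone, 0]      # bone position in the rest pose
        anim_bone_pos = pos_tex[bone, frame]  # bone position at the sampled frame
        q_delta       = rot_tex[bone, frame]  # rotation delta from the rest pose
        # Rotate the point's rest-pose offset from the bone, then move it
        # to the animated bone position.
        deformed += weight * (anim_bone_pos + quat_rotate(q_delta, p_rest - rest_bone_pos))
    return deformed
```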
Some problems:
It works fine, although I was disappointed with blending between different animations. I tried blending the bone animation using quaternion slerp on the orientations, but because the bone positions are still interpolated linearly, the final mesh deformations during blends are quite bad. Just lerping the final deformed points of the mesh gives a better result.
It would be interesting to discuss this - maybe you have come up with a better solution?
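To make the two blending strategies concrete, here is a simplified Python sketch (the slerp is the standard textbook formulation; none of this is the actual material code):

```python
import numpy as np

def slerp(q0, q1, t):
    """Spherical interpolation between two unit quaternions (x, y, z, w)."""
    d = np.dot(q0, q1)
    if d < 0.0:                 # take the short path
        q1, d = -q1, -d
    if d > 0.9995:              # nearly parallel: fall back to normalized lerp
        q = q0 + t * (q1 - q0)
        return q / np.linalg.norm(q)
    theta = np.arccos(np.clip(d, -1.0, 1.0))
    return (np.sin((1 - t) * theta) * q0 + np.sin(t * theta) * q1) / np.sin(theta)

# Strategy A: blend per bone, then deform. The positions have to be lerped
# even though the rotations are slerped, which is where the bad in-between
# shapes come from.
def blend_bone(pos_a, rot_a, pos_b, rot_b, t):
    return (1 - t) * pos_a + t * pos_b, slerp(rot_a, rot_b, t)

# Strategy B: deform the point with each animation separately, then lerp the
# two deformed positions. Simpler, and it ended up looking better.
def blend_points(p_anim_a, p_anim_b, t):
    return (1 - t) * p_anim_a + t * p_anim_b
```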
Edited by rendereverything - May 23, 2022 14:16:29