Hello all!
I have a project where I scatter a lot of trees and vegetation on a large heightfield map.
Each asset has 3 LOD levels, built with PolyReduce and with reduced textures in their materials.
All of them are instanced using a Redshift point cloud.
A given adult tree goes from 14 MB at LOD0 down to 510 kB at LOD2 > fantastic!
These will be instanced a million+ times, so pushing optimisation is key.
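For context, the LOD pick on the scatter points is basically a distance switch written into a string attribute that Redshift reads for instancing. A simplified sketch of what I mean, in a Python SOP (the camera path, attribute name, file paths and switch distances are all placeholders, not my actual setup):

```python
# Python SOP: pick a LOD per scatter point by distance to the camera.
# Camera path, attribute name, file paths and distances are illustrative.
import hou

node = hou.pwd()
geo = node.geometry()

cam = hou.node("/obj/cam1")                     # assumed render camera
cam_pos = cam.worldTransform().extractTranslates()

lods = ["$HIP/geo/tree_LOD0.bgeo.sc",           # ~14 MB full-res
        "$HIP/geo/tree_LOD1.bgeo.sc",
        "$HIP/geo/tree_LOD2.bgeo.sc"]           # ~510 kB far LOD
switch = [50.0, 200.0]                          # illustrative distances

attrib = geo.addAttrib(hou.attribType.Point, "instancefile", "")
for pt in geo.points():
    d = (pt.position() - cam_pos).length()
    lod = 0 if d < switch[0] else (1 if d < switch[1] else 2)
    pt.setAttribValue(attrib, lods[lod])
```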
I'd like to optimise further, or try to push the limits:
1- For the last LOD: is it more efficient to have a few hundred polys, or just a plane with textures on it?
Atlas vs. billboard, I guess.
2- If so, then I'm wondering how to convert
a model + textures (a few hundred polys)
into
2 textured planes at a 180° angle.
I was wondering if, like in camera projection, we could take a parallel (orthographic) view from a camera pointing at the object (the tree),
extract the texture, then apply the "rendered" image to a plane with the same aspect ratio as the camera.
But I'm facing a problem with the lighting: the textures must not be affected by light during this process.
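To make the geometry side of that concrete, here is a rough Python sketch of the idea: an orthographic camera framed to the tree's bounding box, with the render resolution and the receiving plane matched to the same aspect ratio. Node paths and numbers are placeholders, and the unlit material plus the actual ROP render are not shown:

```python
# Rough sketch: frame the tree with an orthographic camera so the render
# can be mapped 1:1 onto a plane of the same proportions.
# Paths and values are placeholders; the bake render itself is not shown.
import hou

tree = hou.node("/obj/tree_LOD2")                  # assumed object to bake
bbox = tree.displayNode().geometry().boundingBox()
size = bbox.sizevec()
center = bbox.center()

cam = hou.node("/obj").createNode("cam", "bake_cam")
cam.parm("projection").set("ortho")                # parallel projection
cam.parm("orthowidth").set(size.x())               # frame the tree's width
cam.parmTuple("t").set((center.x(), center.y(),
                        center.z() + size.z() + 1.0))

# Match the render resolution to the tree's width/height ratio so the
# baked image fits the plane without stretching.
res_x = 1024
res_y = int(res_x * size.y() / size.x())
cam.parmTuple("res").set((res_x, res_y))

# The receiving plane: a single quad with the same proportions.
billboard = hou.node("/obj").createNode("geo", "billboard")
grid = billboard.createNode("grid")
grid.parm("orient").set("xy")                      # face the camera
grid.parmTuple("size").set((size.x(), size.y()))
grid.parm("rows").set(2)                           # 2x2 points = one quad
grid.parm("cols").set(2)
```

For the lighting issue, my assumption is that assigning a constant/emissive-style material to the tree during the bake would keep the textures unlit, but that is exactly the part I'm unsure about.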
Maybe there is a workflow for this I've missed in SideFX Labs?
Thanks a lot!