Jannis Piek
Luxalpa
About Me
Connect
Location
Not Specified
Website
Houdini Engine
Availability
Not Specified
Recent Forum Posts
Converting textures from sRGB to ACEScg not displaying right  June 23, 2021, 8:30
I was actually able to solve this issue by using these settings:
I got a 100% color match. Yay!
Converting textures from sRGB to ACEScg not displaying right  June 23, 2021, 6:33
Hello, for a couple of days now I've been trying to get the output from Substance Painter to match what I see in the viewport in Houdini, to no avail.
The issue is that the image always looks slightly too dark.
Here is the result of using an OCIO Transform on a non-linearized imported sRGB image (PNG) on the left, and on the right the original image simply displayed in MPlay with Gamma 2.2:
I tried different methods, such as converting the image using the PYCO ColorSpace Converter, or linearizing the image and then transforming from Utility Linear sRGB. Both of these methods yield the exact same result. Displaying only the linearized image, without converting it to ACEScg (of course), results in a slightly different (and still terribly wrong) image.
Is my ACES set up wrong? I noticed that when exporting ACEScg images from Substance Designer to Houdini, they match 100% exactly, so I find it unlikely that ACES is the issue, but I have no idea.
Maybe what I want is simply not possible? Can sRGB images not be converted into ACEScg color space? Why not?
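For what it's worth, here is the minimal Python/PyOpenColorIO sketch of the conversion I would expect to match the viewport. I'm assuming the ACES 1.2 OCIO config here, so the color space names ("Utility - sRGB - Texture", "ACES - ACEScg") may need to be adjusted for your config:

import PyOpenColorIO as OCIO

# Use the same config Houdini picks up from $OCIO.
config = OCIO.Config.CreateFromEnv()

# Assumed color space names from the ACES 1.2 config; rename these to match
# whatever your config calls the sRGB texture space and ACEScg.
processor = config.getProcessor("Utility - sRGB - Texture", "ACES - ACEScg")
cpu = processor.getDefaultCPUProcessor()

# Convert one sRGB texture pixel (0-1 range) into the ACEScg working space.
srgb_pixel = [0.5, 0.25, 0.75]
acescg_pixel = cpu.applyRGB(srgb_pixel)
print(acescg_pixel)

Note that viewing through the ACES sRGB output transform is not the same as MPlay's plain Gamma 2.2, so some difference between the two views is expected even when the conversion itself is correct.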
Is it even possible to bake and display displacement maps?  May 22, 2021, 9:22
Hello,
I've been spending a few hundred hours now trying to get displacement maps to work and I am very close to giving up. As far as I understand, when baking a displacement map from a high-poly object onto a low-poly object, the baker automatically assumes that the low-poly object won't be tessellated (which is stupid of course, because displacement maps only work with tessellated meshes). I've tried virtually every texture baking tool on the market, and while they all have their issues, they share one problem:
For example, the Labs Maps Baker creates this grid-like pattern on the displacement map. That in itself wouldn't be a problem if there were a way to tell Mantra (or any other renderer or real-time engine) to avoid smoothing on subdivision. Unfortunately, that's impossible, so Mantra happily adds the displacement on top of the subdivided geometry, effectively resulting in these bubbles:
Now, there are workarounds for this issue, but they all create massive artifacts/seams:
- I can use the Bake Texture ROP and enable "Render Polygons as Subdivisions (Mantra)" on the low-res before baking. This removes the seams, but in return I get tons of baking errors on the mesh. I can play around with the Ray Bias and Ray Max Distance settings, but they just move the error areas somewhere else; on the wings of my creature, for example, it was simply impossible to remove these artifacts no matter which settings I used.
- I tried using Bake Texture to bake out only the difference between the low-res and the low-res with the subdivision flag set, but even then, low-res to low-res, I get a huge number of artifacts. It's simply unusable.
- I tried using the Labs Maps Baker on a manually subdivided low-poly. This worked reasonably well; however, I get seams at UV borders:
- There's the option to manually add creasing (for example via a creaseweight attribute on all vertices) or to apply the displacement on a mesh that is subdivided using OpenSubdiv Bilinear; see the sketch after this list. However, all of these methods leave artifacts at the polygon borders.
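To make the creasing workaround above concrete, here is a minimal Python SOP sketch of what I mean. It assumes the downstream Subdivide SOP is in an OpenSubdiv mode that respects a vertex creaseweight attribute:

# Python SOP: put a full-strength crease on every vertex so the
# subdivision keeps the cage edges hard instead of smoothing them.
node = hou.pwd()
geo = node.geometry()

# Creating the attribute with a default of 1.0 already tags every vertex;
# the explicit loop is only kept in case you want per-edge control later.
attrib = geo.addAttrib(hou.attribType.Vertex, "creaseweight", 1.0)
for prim in geo.prims():
    for vtx in prim.vertices():
        vtx.setAttribValue(attrib, 1.0)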
Since Mantra's "Render Polygons as Subdivisions" is adaptive per pixel, it's actually unknowable (I think) which level of subdivision I'd have to use anyway. And since all of the available subdivision modes move the preexisting points, different levels of subdivision appear to result in slightly different geometry (and possibly UVs). An alternative approach would be to apply the displacement in shaders, but I have no idea how to do this (I'd need to somehow find the difference in position between the current input geo and the original input geo before subdivision), and quite frankly, I'm not sure whether this would really help me when exporting my mesh to Mari in order to add more displacement detail.
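For that shader idea, this is roughly how I would try to measure the "before vs. after subdivision" offset in a Python SOP. It assumes (and I have not verified this for every subdivision mode) that the original cage points keep their point numbers after subdividing; input 0 is the subdivided mesh, input 1 the original cage:

# Python SOP with two inputs:
#   input 0: the subdivided mesh (writable)
#   input 1: the original, unsubdivided cage
node = hou.pwd()
geo = node.geometry()
cage = node.inputs()[1].geometry()

# Per-point vector recording how far subdivision moved each original point,
# so it could be subtracted back out before applying the displacement map.
attrib = geo.addAttrib(hou.attribType.Point, "subdiv_delta", (0.0, 0.0, 0.0))

num_sub_points = len(geo.points())
for cage_pt in cage.points():
    ptnum = cage_pt.number()
    if ptnum >= num_sub_points:  # bail out if the numbering assumption breaks
        continue
    sub_pt = geo.point(ptnum)
    delta = sub_pt.position() - cage_pt.position()
    sub_pt.setAttribValue(attrib, (delta[0], delta[1], delta[2]))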
Which leads me to the conclusion: the entire displacement workflow seems to be pre-alpha at best, and I'm not just talking about Houdini, I'm talking about the VFX industry as a whole. I've tried creating displacement maps in ZBrush, Substance Designer and Mighty Bake as well, and all of them produced faulty displacements (shoutout to Substance Designer here, which cannot even properly display its own displacement maps - amazing software!). Is this even a workflow that can be used for characters? Should it be? Now that I think I know pretty much all there is to know about displacement maps, I'm wondering whether this is even a solvable problem. They seem to be virtually unsupported in every application, even though most of them advertise some kind of "support" without actually delivering it.
I'm using Nearest projection onto my mesh, as this is the only possible projection: my low-res overlaps everywhere with my high-res because it is just a retopologized version of it. I am unable to use Cage Baking or Surface Normals baking. I tried UV Match once, but it's also broken, since it relies on the broken Bake Texture ROP (and the ZBrush variant creates inflated geometry).
Should I abandon this workflow? The information on the internet seems to suggest that displacement maps are more or less standard in the VFX industry, but given that they are pretty much entirely unsupported, I find that very hard to believe. I feel like my approach may be wrong, but nobody I asked could figure out more either.
Is there a reasonable solution for this problem or is it just impossible to do?