Camera switching in stage not working
- RumbleMonk
- Member
- 17 posts
- Joined: Sept. 2009
- Offline
I'm trying to render a sequence with USD cameras, motion blur and using a switcher.
I kept rendering this and it failed to pick up the camera after the switch: sometimes the image is totally skewed and/or cropped, sometimes the wrong camera gets picked up.
So I figured I'd throw a filecache in and render with that, and of course the filecache doesn't pick up the correct cameras either, but at least that's faster to debug than rendering.
But.. I'm still stumped.
Does anyone know how to get this to work? I am switching between the cameras using an editcontextoption node, but animating the switch with keyframes does not make a difference.
Would appreciate any and all help. I've tried for a long time now; I'm an intermediate USD user, but feel free to reply as if I'm a right noob.
Thanks.
- jsmack
- Member
- 8039 posts
- Joined: Sept. 2011
- Offline
RumbleMonk
Does anyone know how to get this to work? I am switching between the cameras using an editcontextoption node, but animating the switch with keyframes does not make a difference.
I wouldn't expect the switch to be animatable. What are you trying to achieve with two cameras and a switch?
- mtucker
- Staff
- 4521 posts
- Joined: July 2005
- Offline
A few questions...
1. What isn't working for you in this file? It seems to be doing what I'd expect...
2. Why are you controlling the switch with an edit context options LOP? There's nothing wrong with it, it just seems like an unnecessary additional node, when you can just animate the switch directly.
3. What is the purpose of the SWITCHING_TO_CACHE_DOES_NOT_WORK switch LOP? You can just put the filecache1 LOP there, and the "Load from Disk" toggle on that LOP effectively does the same thing the second switch is doing.
- RumbleMonk
- Member
- 17 posts
- Joined: Sept. 2009
- Offline
jsmack
RumbleMonk
Does anyone know how to get this to work? I am switching between the cameras using an editcontextoption node, but animating the switch with keyframes does not make a difference.
I wouldn't expect the switch to be animatable. What are you trying to achieve with two cameras and a switch?
I have a large animated setup and would like to work out my cameras inside the same file; essentially I want to use something like a camera sequencer in stage.
The switch1 is animatable by the way, it just doesn't work for me when I render the frames (the rendered frames don't match the Houdini viewport) or when I cache the filecache1 node.
To see what I mean, you can try caching the filecache1 node, go to the second half of the timeline, set the viewport to look through the correct camera (duh), then use the SWITCHING_TO_CACHE_DOES_NOT_WORK switch to compare the live camera setup with the cached camera setup. The cache does not pick up the second camera.
Btw cache1 and cache2, right after the cameras, are just there to get camera motion blur to work.
Edited by RumbleMonk - March 3, 2023 08:47:39
- RumbleMonk
- Member
- 17 posts
- Joined: Sept. 2009
- Offline
mtucker
A few questions...
1. What isn't working for you in this file? It seems to be doing what I'd expect...
2. Why are you controlling the switch with an edit context options LOP? There's nothing wrong with it, it just seems like an unnecessary additional node, when you can just animate the switch directly.
3. What is the purpose of the SWITCHING_TO_CACHE_DOES_NOT_WORK switch LOP? You can just put the filecache1 LOP there, and the "Load from Disk" toggle on that LOP effectively does the same thing the second switch is doing.
Thanks for looking into this!
1. The switch works, until I render it. If you run the filecache, then use the switch to check the cache against the 'live' setup, you'll see the cached output does not pick up the second camera. The live setup works only in the Houdini viewport; even rendering Karma in the viewport is fine. Rendering to disk does not pick up the second camera, nor does the filecache1 (after you've cached it, obviously).
2. I tried animating the switch, same problem. I'm not married to a specific solution, I just want a sequencer-type USD camera setup with motion blur rendering correctly in stage.
3. The purpose of that switch is for troubleshooting, so it's easy to compare the live setup vs the busted filecache1, which of course I'm expecting to be the same. And again, the first camera looks fine; the second camera does not get picked up by the cache, nor when rendering to disk.
Thanks again.
Edited by RumbleMonk - March 3, 2023 08:52:13
- robp_sidefx
- Staff
- 499 posts
- Joined: June 2020
- Offline
I can reproduce what you're seeing, will investigate!
- RumbleMonk
- Member
- 17 posts
- Joined: Sept. 2009
- Offline
- RumbleMonk
- Member
- 17 posts
- Joined: Sept. 2009
- Offline
robp_sidefx
I can reproduce what you're seeing, will investigate!
Hi again Rob,
If this turns out to be a confirmed bug, would the simplest workaround be to break out a bunch of ROPs instead, with merges of each camera leading into each ROP? I'd also of course have to set the frame range per ROP.
I have a 4-minute-long sim with multiple cameras and 20 or so camera cuts. I was hoping to just render it all in one go, so I'm hoping you still have a solution. I also hope the alternative I just described isn't the easiest one.
Thanks.
- robp_sidefx
- Staff
- 499 posts
- Joined: June 2020
- Offline
TL;DR - I've attached a very lightly modified version of the scene file which should work. The important change was adding a Configure Layer LOP with its Save Path set. Less importantly, I changed the Cache LOPs to be Current Frame Only and enabled Subframe Sampling.
The issue was somewhat subtle, and definitely unintuitive at first glance. I'll provide some background and then link it to your scene.
USD doesn't really support transient layers (though arguably Value Clips offer this): either a layer is always there or never there. There are, of course, APIs to add/remove layers, but if we consider a standalone USD file/cache, then layers are omnipresent. Further, when layers are composed, time samples are always overridden (not merged) by stronger layers. So if LayerA provides data for t=1,2,3 and LayerB provides data for t=4,5,6, then a USD stage with both layers will either expose LayerA's data or LayerB's data (depending on which is stronger), but you can't get the data from both.
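As a rough illustration, here's a minimal sketch of that behaviour using the pxr Python bindings that ship with Houdini (the prim path, attribute and values are made up purely for illustration):

from pxr import Sdf, Usd, UsdGeom

def camera_layer(times):
    # Author /cam with focalLength time samples at the given frames.
    layer = Sdf.Layer.CreateAnonymous()
    stage = Usd.Stage.Open(layer)
    attr = UsdGeom.Camera.Define(stage, "/cam").CreateFocalLengthAttr()
    for t in times:
        attr.Set(float(t), t)
    return layer

layer_a = camera_layer([1, 2, 3])   # "camera 1" half of the shot
layer_b = camera_layer([4, 5, 6])   # "camera 2" half of the shot

stage = Usd.Stage.CreateInMemory()
# layer_a is listed first, so it is the stronger sublayer.
stage.GetRootLayer().subLayerPaths.append(layer_a.identifier)
stage.GetRootLayer().subLayerPaths.append(layer_b.identifier)

attr = stage.GetPrimAtPath("/cam").GetAttribute("focalLength")
print(attr.GetTimeSamples())   # [1.0, 2.0, 3.0] -- layer_b's samples are not merged in
print(attr.Get(5))             # 3.0 -- held from the stronger layer, not 5.0 from layer_b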
So what does that have to do with your scene, and how does the modified scene work?
You were off to a good start by having each of the Camera LOPs author the same primitive path; that's definitely a critical element. The problem is that each LOP is authoring its own USD layer. You can see this if you add a Cache LOP set to Cache All Frames and look at the Scene Graph Layers pane.
As a result, any of our caching LOPs (such as the File Cache) will only ever end up exposing data from one of the cameras.
To deal with this, we need these caching LOPs to identify both layers (i.e., both sets of camera data) as effectively being the same thing. If you consider a single animated camera in LOPs, for example, on each frame we still end up generating a different USD layer, but the Cache LOP is able to recognise them as the same and merges/concatenates their samples rather than layering them. The "magic" here is the layer's save path. When we (Houdini) combine two layer stacks during caching, we compare save paths to find correspondences.
So the solution here is, somewhere downstream from where all the cameras join up (e.g., immediately after the Switch LOP), to set a common save path (e.g., "cameras.usd"). Now any downstream caching will, from frame-to-frame, treat these layers as representing different time samples of the same entity and combine them as desired.
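If you'd rather wire that up in Python than by hand, here is a rough sketch. Note the node-type name "configurelayer" and the parameter name "savepath" are written from memory, so treat them as assumptions and verify against the actual Configure Layer LOP in your build:

import hou

stage_net = hou.node("/stage")
switch = stage_net.node("switch1")            # the LOP where the cameras join up

cfg = stage_net.createNode("configurelayer", "camera_savepath")
cfg.setInput(0, switch)
cfg.parm("savepath").set("cameras.usd")       # the common save path for every frame
cfg.moveToGoodPosition()

# Any caching LOP downstream of cfg now sees one consistent layer identity
# from frame to frame and can merge the per-frame camera samples.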
There are a variety of ways this can be achieved (a diamond pattern to the LOP network, certain layer/stage flattening operations, the Merge and Graft LOPs, etc), but fundamentally what you need to end up with is:
* consistent camera primitive naming (you already had this)
* consistent USD layer naming (the missing piece)
Hope this helps! Feel free to follow up if you're still observing strange behaviour.
- Rob
Edited by robp_sidefx - March 7, 2023 07:55:46
- RumbleMonk
- Member
- 17 posts
- Joined: Sept. 2009
- Offline
Thanks so much!
This seems to be a very simple fix indeed. I think I understand most of this, and the concept of stronger layers explains why this sometimes worked and sometimes did not (at least while rendering without motion blur) as I guess some camera layers just "naturally" overrode the previous camera layer during a camera cut. I have one question though:
You say "When we (Houdini) combine two layer stacks during caching, we compare save paths to find correspondences". I'm wondering how Houdini then figures out the correct motion blur samples for frame 26, which is the first frame of the second camera.
That second camera would look at 25.75 and 26.25 for the motion blur samples, but if the two camera layers are treated "as representing different time samples of the same entity and combine them as desired" why does Houdini not grab the transform of frame 25.75 from camera1 and the transform of frame 26.25 from camera2, and essentially render a very wrong motion blur on frame 26 as a result?
Big thanks again!
- robp_sidefx
- Staff
- 499 posts
- Joined: June 2020
- Offline
RumbleMonk
I'm wondering how Houdini then figures out the correct motion blur samples for frame 26, which is the first frame of the second camera. That second camera would look at 25.75 and 26.25 for the motion blur samples, but if the two camera layers are treated "as representing different time samples of the same entity and combine them as desired" why does Houdini not grab the transform of frame 25.75 from camera1 and the transform of frame 26.25 from camera2, and essentially render a very wrong motion blur on frame 26 as a result?
That's an excellent question, and ties in with the change I made to the Cache LOPs in your scene (specifically setting them to "Cache Current Frame Only" and, more importantly, enabling "Subframe Sampling"). With the LOPs set up like that, when we're on frame N we will cache time samples for N-0.25, N, N+0.25. This means on frame 25, we'll get the left camera generating 24.75, 25.00, 25.25. On frame 26 we'll get the right camera generating 25.75, 26.00, 26.25.
Note that this works because the Cache LOPs come *before* the Switch LOP. On frame 26, the Switch only considers the right input. Upstream to the right, the Cache LOP can do as it sees fit (i.e., cook & cache at any time), which is how this right camera ends up generating the 25.75 time sample. If the Cache came *after* the Switch LOP, then when it tried to cache 25.75 the Switch would divert it to the left and then for 26.25 to the right. You (probably) could work around this by setting the Switch's transition time to be 25.5 (rather than 26).
Regardless of how you set up the caching, it's definitely important to have samples that bracket the range of the camera shutter. By this I mean if you changed your camera's shutter from (-0.25,0.25) to (-0.5,0.0) to slightly offset the motion in time, the renderer would query USD for time data at 25.5 which, since it wasn't explicitly cached, would be interpolated by blending the 25.25 and 25.75 samples - giving you "a very wrong motion blur" indeed.
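To make that last point concrete, here's a tiny sketch with made-up transform values showing USD's default linear interpolation producing exactly that bad blend at an uncached shutter time:

from pxr import Gf, Usd, UsdGeom

stage = Usd.Stage.CreateInMemory()
cam = UsdGeom.Camera.Define(stage, "/switched_cam")
tx = cam.AddTranslateOp()
tx.Set(Gf.Vec3d(0, 0, 0), 25.25)       # last cached subsample from camera 1
tx.Set(Gf.Vec3d(100, 0, 0), 25.75)     # first cached subsample from camera 2

# 25.5 was never cached, so USD linearly interpolates between the two
# cameras' samples -- landing halfway between them in space:
print(tx.GetAttr().Get(25.5))          # (50, 0, 0)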
Does that make sense?
Edited by robp_sidefx - March 9, 2023 03:48:32
- No_ha
- Member
- 123 posts
- Joined: Sept. 2018
- Offline
robp_sidefx
USD doesn't really support transient layers (though arguably Value Clips offers this), either a layer is always there or never there. There are, of course, APIs to add/remove layers, but if we consider a standalone USD file/cache, then layers are omnipresent.
This is something that I run into quite often. A recent example was instancing geo that grows (and starts with zero points). I was confused why it would not instance correctly after writing it out to disk until I checked the scene graph tree and saw the prototypes vanishing on the first few frames.
I'm wondering if a good QoL feature would be a checkbox that forces a prim to exist? I ended up having to shift the animation so every prototype instance starts with at least one point, but this might not be a solution that's always possible (or at least not always easily possible).
- RumbleMonk
- Member
- 17 posts
- Joined: Sept. 2009
- Offline
robp_sidefx
RumbleMonk
I'm wondering how Houdini then figures out the correct motion blur samples for frame 26, which is the first frame of the second camera. That second camera would look at 25.75 and 26.25 for the motion blur samples, but if the two camera layers are treated "as representing different time samples of the same entity and combine them as desired" why does Houdini not grab the transform of frame 25.75 from camera1 and the transform of frame 26.25 from camera2, and essentially render a very wrong motion blur on frame 26 as a result?
That's an excellent question, and ties in with the change I made to the Cache LOPs in your scene (specifically setting it to "Cache Current Frame Only" and, more importantly, enabling "Subframe Sampling"). With the LOPs set up like that, when we're on frame N we will cache time samples for N-0.25, N, N+0.25. This means on frame 25, we'll get the left camera generating 24.75, 25.00, 25.25. On frame 26 we'll get the right camera generating 25.75, 26.00, 26.25.
Note that this works because the Cache LOPs come *before* the Switch LOP. On frame 26, the Switch only considers the right input. Upstream to the right, the Cache LOP can do as it sees fit (i.e., cook & cache at any time), which is how it is this right camera that ends up generating the 25.75 time sample. If the Cache came *after* the Switch LOP, then when it tried to cache 25.75 the Switch would divert it to the left and then for 26.25 to the right. You (probably) could work around this by setting the Switch's transition time to be 25.5 (rather than 26).
Regardless how you setup the caching, it's definitely important to have samples that bracket the range of the camera shutter. By this I mean if you changed your camera's shutter from (-0.25,0.25) to (-0.5,0.0) to slightly offset the motion in time, the renderer would query USD for time data at 25.5 which, since it wasn't explicitly cached, would be interpolated by blending the 25.25 and 25.75 samples - giving you "a very wrong motion blur" indeed.
Does that make sense?
This makes plenty of sense, absolutely. Thanks once again Rob. Awesome.
Alright, maybe last question - I moved my cameras from my massive file into this one and set caching etc as per your file. I even brought in the cameras only into your file to make sure I hadn't missed anything but while the cameras switch correctly they seem to be zooming in differently when comparing the live cam data to the cached cam data.
I've attached 2 images showing frame 700 after caching the camera, then switching it on and off. Some cameras seem to be ok, some less so.
It looks like some sort of film back issue, but that's not part of the cameras getting cached.. or maybe the focal length doesn't get looked at in the same way as the transforms inside the configurelayer node? One step closer, would love to hear your take on this one as well if you have a moment.
Your help is extremely appreciated by the way, I've spent days trying to figure this out.
Cheers!
- robp_sidefx
- Staff
- 499 posts
- Joined: June 2020
- Offline
RumbleMonk
Alright, maybe last question - I moved my cameras from my massive file into this one and set caching etc as per your file. I even brought in the cameras only into your file to make sure I hadn't missed anything but while the cameras switch correctly they seem to be zooming in differently when comparing the live cam data to the cached cam data.
I've attached 2 images showing frame 700 after caching the camera, then switching it on and off. Some cameras seem to be ok, some less so.
Yes ... so there's one more "gotcha": when we cache USD stages, we can only combine *time samples*. If you only author default values, we don't convert these to time samples (so as to keep the USD size & processing cost low), but this means if you have a time-varying default value, bad things will happen.
In your scene, I see that the camera's aperture and clipping planes aren't consistent from one camera to the next.
I'm looking into more elegant solutions to deal with these workflows but, in the interim, you can do something to "trick" Houdini by changing, for example, your horizontal aperture from "41.345" to "41.345 + $F*0". This will evaluate to the same value, but will force Houdini to treat this as animated, and generate time samples that can then be stitched together.
Note you'll have to do this on all the cameras. Every attribute that differs between the cameras *must* be represented using USD time samples for the caching to merge them.
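A quick way to audit this in a cached USD file (the file name below is just a placeholder) is to list camera attributes that only have a default value and no time samples - those are the ones that won't stitch:

from pxr import Usd, UsdGeom

stage = Usd.Stage.Open("cameras_cache.usd")        # placeholder path
for prim in stage.Traverse():
    if not prim.IsA(UsdGeom.Camera):
        continue
    for attr in prim.GetAttributes():
        if attr.HasAuthoredValue() and attr.GetNumTimeSamples() == 0:
            print(prim.GetPath(), attr.GetName(), "-> default only, won't stitch")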
- RumbleMonk
- Member
- 17 posts
- Joined: Sept. 2009
- Offline
robp_sidefx
RumbleMonk
Alright, maybe last question - I moved my cameras from my massive file into this one and set caching etc as per your file. I even brought in the cameras only into your file to make sure I hadn't missed anything but while the cameras switch correctly they seem to be zooming in differently when comparing the live cam data to the cached cam data.
I've attached 2 images showing frame 700 after caching the camera, then switching it on and off. Some cameras seem to be ok, some less so.
Yes ... so there's one more "gotcha": when we cache USD stages, we can only combine *time samples*. If you only author default values , we don't convert these to time samples (so as to keep the USD size & processing cost low), but this means if you have a time-varying default value, bad things will happen.
In your scene, I see that the camera's aperture and clipping planes aren't consistent from one camera to the next.
I'm looking into more elegant solutions to deal with these workflows but, in the interim, you can do something to "trick" Houdini by changing, for example, your horizontal aperture from "41.345" to "41.345 + $F*0". This will evaluate to the same value, but will force Houdini to treat this as animated, and generate time samples that can then be stitched together.
Note you'll have to do this on all the cameras. Every attribute that differs between the cameras *must* be represented using USD time samples for the caching to merge them.
Ah! I'm honestly not even sure why I did that, but I'm glad I did as it gave me this valuable additional info. I figured this setup would have its little pitfalls, but I think you've sorted them all out for me. I'll either do just this (checked, works) or set up the cameras with the same values unless animated.
You probably have more elegant solutions in mind, but making sure all static values are re-evaluated upstream if the switch input has changed would be a welcome fix.
Big thanks!
Edited by RumbleMonk - March 10, 2023 17:40:54
- leoYfver
- Member
- 31 posts
- Joined: July 2015
- Offline
Thank you both for this thread, it's been very informative, and being able to switch cameras and bake the animation to the USD is something we have been looking into as well.
Though I hit a bit of a snag. It seems like my USD file is fine, but when rendering through husk I don't get animated cameras.
1. Our cameras have value clips as animation
2. I switch the cameras and force them to be time-sampled so they get baked into the USD
3. I export the USD and everything looks correct in both Solaris and usdview
4. I render through husk and I don't get an animated camera
Is this just a limitation or could it be an issue with husk?
Regards Alexis
cg supervisor @goodbyekansas
- RumbleMonk
- Member
- 17 posts
- Joined: 9月 2009
- Offline
leoYfver
Thank you both for this thread, it's been very informative, and being able to switch cameras and bake the animation to the USD is something we have been looking into as well.
I'll check back to see if you get anywhere with this; I'm curious how your setup works, and maybe by sheer luck I'd be able to help if I get time to look into it.
One more note regarding my setup: I had some transforms acting as parents to some cameras. Those had the default $OS-based naming, which will cause problems. They'll need to have their names set to the same value, and I believe that was the last puzzle piece to get it to work.
- robp_sidefx
- Staff
- 499 posts
- Joined: June 2020
- Offline
leoYfver
I hit a bit of a snag. It seems like my USD file is fine, but when rendering through husk I don't get animated cameras.
Hey Alexis, I can't reproduce what you're seeing. I get three frames, each from a different camera, as expected. Can you send the final exported USD file and confirm the version of Houdini/husk?
- Goodbye_Kansas
- Member
- 35 posts
- Joined: Feb. 2023
- Offline
I must have messed up the scene. It only works if the attributes have value clips on them; it seems like maybe that's the default now and that's why it works? I'll take a look at the scene again. It's Houdini 19.5.534 I'm doing my tests in.
- robp_sidefx
- Staff
- 499 posts
- Joined: June 2020
- Offline
Goodbye_Kansas
I must have messed up the scene. It only works if the attributes have value clips on them; it seems like maybe that's the default now and that's why it works? I'll take a look at the scene again. It's Houdini 19.5.534 I'm doing my tests in.
If you continue to have problems, you know where to find us