COPs / Copernicus - caching for live playback!?

Yannik | Member | 22 posts | Joined Dec 2019
Hello, I'm pretty much brand new to COPs and giving it a try for my compositing.
I just can't figure out how to cache my entire timeline/video so that I can play my composition back live by pressing play.

I read about it in the documentation, but I just can't find any meaningful info on it.

If anyone knows how to do this, please let me know.

Thank you very much for your time and help, I really appreciate it.


PS: Dear Houdini team, this is super unintuitive! You should be caching my timeline in the background by default so that I can press play and actually see my composited video.
Edited by Yannik - July 26, 2024 06:10:31
jsmack | Member | 7981 posts | Joined Sep 2011
I'm not sure what you mean; does hitting the play button not work for live playback?
Yannik | Member | 22 posts | Joined Dec 2019
jsmack
I'm not sure what you mean; does hitting the play button not work for live playback?
No, hitting the play button will either play at around 3 fps or skip frames, depending on the selected real-time playback mode.

It needs to read multiple .exr files and run various filters for each frame.
I need it to somehow process all the frames, store the processed result, and then play it back to me in real time.
jsmack | Member | 7981 posts | Joined Sep 2011
Oh, you want to render it out. Use the ROP Image Output node to write an image sequence.
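
For what it's worth, a minimal Python sketch of kicking that off from the hou module; the node path is a placeholder, and it assumes the node honours the standard hou.RopNode.render() call:

    import hou

    # Placeholder path: point this at your own ROP Image Output node.
    rop = hou.node("/img/copnet1/rop_image_output1")

    # frame_range and output_file are standard hou.RopNode.render() arguments;
    # $F4 expands to the zero-padded frame number as each frame is written.
    rop.render(frame_range=(1, 240),
               output_file="$HIP/render/comp.$F4.exr")

    # to_flipbook=True should send the frames to MPlay instead of disk:
    # rop.render(frame_range=(1, 240), to_flipbook=True)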
Yannik | Member | 22 posts | Joined Dec 2019
jsmack
Oh, you want to render it out. Use the ROP Image Output node to write an image sequence.
I don't want to render everything out to disk and have to read it back somewhere just to see what I did as smooth video, make a change, and then repeat.
I want to be able to just play the animation back live in the viewport by pressing play.
As it stands, the playback performance is not sufficient to play every frame in real time, so I want Houdini to cache the playback.
I can do it with MPlay, I can do it for simulations; I just can't figure out how to do it in COPs.

The node info window as well as the cache manager both show details for cooking COP networks; I just can't get it to work or figure out the workflow.
jsmack | Member | 7981 posts | Joined Sep 2011
Yannik
jsmack
Oh, you want to render it out. Use the ROP Image Output node to write an image sequence.
I don't want to render everything out to disk and have to read it back somewhere just to see what I did as smooth video, make a change, and then repeat.
I want to be able to just play the animation back live in the viewport by pressing play.
As it stands, the playback performance is not sufficient to play every frame in real time, so I want Houdini to cache the playback.
I can do it with MPlay, I can do it for simulations; I just can't figure out how to do it in COPs.

The node info window as well as the cache manager both show details for cooking COP networks; I just can't get it to work or figure out the workflow.

I don't think there is any cache mechanism for COPs; the idea is you make a network fast enough that it doesn't need one. The old COPs had caching and was hard-coded to assume it was working with sequences. The new COPs is just SOPs, but for images, so caching works the same as in SOPs: only the current frame is cached. I think you can still render to MPlay rather than to disk from the ROP Image Output node. If you're viewing the COP network in the scene viewer, the flipbook tool will work.
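
As a rough sketch of driving that flipbook from Python (toolutils and the flipbook settings calls below are standard Houdini; the frame range and output path are placeholders):

    import toolutils

    # Grab the first Scene Viewer pane; it must be displaying the COP network.
    viewer = toolutils.sceneViewer()

    # Work on a stashed copy so the viewer's own settings are left untouched.
    settings = viewer.flipbookSettings().stash()
    settings.frameRange((1, 240))              # placeholder shot range
    settings.output("$HIP/flip/comp.$F4.jpg")  # optional: also write frames to disk

    # Cooks every frame once, then plays the cached result back.
    viewer.flipbook(viewer.curViewport(), settings)

MPlay holds the flipbooked frames in memory, so once the one-time cook is done they scrub and loop in real time.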
louisx | Member | 46 posts | Joined Jul 2009
If Copernicus is to be considered for compositing tasks, it needs some sort of caching, like DOPs have.

With the current speed of loading EXRs it can never be real-time without caching. Simply loading production .exr sequences is so extremely slow right now that it's unusable.
Edited by louisx - July 26, 2024 01:24:34
raincole | Member | 529 posts | Joined Aug 2019
jsmack
the idea is you make a network fast enough that it doesn't need one

The idea is simply bad. Copernicus is very slow. None of the cool things they showed in the demo can realistically be cooked in real time on a consumer-level PC.
Edited by raincole - July 26, 2024 06:44:28
jsmack | Member | 7981 posts | Joined Sep 2011
louisx
If Copernicus is to be considered for compositing tasks, it needs some sort of caching, like DOPs have.

With the current speed of loading EXRs it can never be real-time without caching. Simply loading production .exr sequences is so extremely slow right now that it's unusable.

Well, it should not be considered for that; the devs have stated that is not the goal for this release.

raincole
jsmack
the idea is you make a network fast enough that it doesn't need one

The idea is simply bad. Copernicus is very slow. None of the cool things they showed in the demo can realistically be cooked in real time on a consumer-level PC.

I don't know about you, but I've had plenty of setups that run at over 100 fps, and at 30 fps at 4K.
johnmather | Staff | 514 posts | Joined Aug 2019
If you're reading .exr files, there is a known issue that causes this to be very slow. We're looking into it. In the meantime, you may be able to get a significant performance boost if you can use a different file format.
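
For instance, a throwaway Python sketch that batch-converts a sequence with Houdini's bundled iconvert utility (run from a shell where Houdini's binaries are on the PATH; the folder and naming pattern are placeholders):

    import pathlib
    import subprocess

    src_dir = pathlib.Path("render")   # placeholder input folder

    for exr in sorted(src_dir.glob("comp.*.exr")):
        pic = exr.with_suffix(".pic")  # Houdini's native .pic image format
        # iconvert infers both formats from the file extensions.
        subprocess.run(["iconvert", str(exr), str(pic)], check=True)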
Yannik | Member | 22 posts | Joined Dec 2019
johnmather
If you're reading .exr files, there is a known issue that causes this to be very slow. We're looking into it. In the meantime, you may be able to get a significant performance boost if you can use a different file format.
Thanks for the heads up. I really appreciate it.

I'm personally only familiar with .exr-based workflows, using the various denoised .exr outputs RenderMan delivers for compositing. As read-back times with .exr files are never great and can hardly keep up live for complex composites,
it would be a bare minimum to see some sort of cache feature that at least caches the files we are reading from disk.
Member | 199 posts | Joined Jan 2014
+1 I'm surprised by that answer ("the idea is you make a network fast enough that it doesn't need one").
It's pretty slow on my PC as well, and mine is on the higher end.
I contribute to the beauty of this world
https://houdininotes.ivanlarinin.com/
Yannik | Member | 22 posts | Joined Dec 2019
This should have been one of the first things they added!
It just isn't a viable approach to read files back from disk without caching.
No matter how well optimized the code is, if the disk speed can't keep up with the file size, the playback will inevitably stutter.

There is a lot of merit in making it fast enough to look-dev interactively.
If I can add things like glow to my progressive render view, I will be able to see and work with what I really get as final output, which is a massive time saver and a huge advantage.

So I sincerely hope they get this sorted out soon so that I can also comp animations properly.
Edited by Yannik - July 27, 2024 18:10:39
Tomas Slancik | Member | 8723 posts | Joined Jul 2007
It's been specifically pointed out and reiterated many times that the first iteration of the new COPs is not yet aiming at compositing, even though it definitely can be used for simple comps.

All the official examples were mostly for texture generation and some post filters, not any serious comp work or even real-time playback.

So I personally wouldn't hold it against SideFX that they decided to squeeze COPs into the release in its current iteration rather than holding off until it had all the features that everyone is used to from all the software it has the potential to replace.

I believe they are fully aware of the missing features regarding a future compositing focus and will continue to implement them. Of course, RFEs never hurt, just to be sure they are aware of your particular needs.

Since we are talking about compositing, there are many more features missing beyond caching, so it's no wonder it hasn't been advertised as a compositing network yet and will require some patience (time manipulation, intuitive display/data window handling, multi-layer wires, roto, tracking, etc., or even some for texture manipulation like UDIM workflows, which may also depend on some multi-layer tech, etc.).
Tomas Slancik
FX Supervisor
Method Studios, NY