You can set your ‘Cache Mode’ parm to ‘Read’ or ‘Automatic’ and recook your node. It should pick up the expected output files and mark all your items as cooked right away.
However, what happens on a longer render that runs overnight? E.g. we have scripts or people that can fix the issue on the farm but wouldn't have direct access to the PDG graph to manually recook like that.
Any way to have it auto-recheck every 30 seconds or something?
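I'm not aware of a built-in timer for this, but a minimal polling sketch is easy to write. Assumptions: `cook` and `is_done` are hypothetical callables you supply; inside a Houdini session, `cook` might wrap a TOP node recook (something like `hou.node('/obj/topnet1/out').cookWorkItems()`, not verified here) and `is_done` might check the node's failed-item count:

```python
import time

def recook_until_done(cook, is_done, interval=30.0, max_attempts=10):
    """Call cook() repeatedly until is_done() reports success.

    cook and is_done are placeholder callables supplied by the caller;
    in a real Houdini session they would wrap a TOP node recook and a
    check of its work item states. Returns True once is_done() passes,
    or False after max_attempts cooks.
    """
    for _attempt in range(max_attempts):
        cook()
        if is_done():
            return True
        time.sleep(interval)
    return False
```

With `interval=30.0` this gives the 30-second recheck described above; the loop would need to run outside the cook itself (e.g. a shelf tool or hython script on the farm).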
Cheers,
Peter Bowmar ____________ Houdini 20.5.262 Win 10 Py 3.11
When work items fail there's no way to make PDG try them again except by stopping the cook and starting a new cook. But I think it would be a good RFE to have a mechanism for automatic retries during a cook.
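Until such an RFE lands, one workaround is a farm-side wrapper that reruns the work item's command itself, so PDG never sees the intermediate failures. A sketch, where the retryable exit codes and the command line are placeholders (nothing PDG-specific is assumed):

```python
import subprocess

def run_with_retries(cmd, max_attempts=3, retryable_codes=(1,)):
    """Run cmd up to max_attempts times, retrying only when the exit
    code is in retryable_codes (a stand-in for the kind of exit-code
    handling a scheduler might expose). Returns (succeeded, attempts).
    """
    for attempt in range(1, max_attempts + 1):
        code = subprocess.call(cmd)
        if code == 0:
            return True, attempt
        if code not in retryable_codes:
            # Non-retryable failure: bail out immediately.
            return False, attempt
    return False, max_attempts
```

The point of filtering on exit codes is to keep retrying transient failures (licence hiccups, dropped network mounts) while failing fast on genuine errors.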
That would point back to the beta-era request to have TOPs continually attempt to solve the network, perhaps? I.e. this seems like a Re-Run Until Done variation on some kind of Re-Run Continually behaviour.
This ability would be useful. So far I have been using PDG in interactive sessions, where failed frames are fine if you just resubmit something that is fast to execute. But anything that takes a long time, or that submits PDG on a remote system, will need some number of task retries before bailing out anything affected or downstream. We also wouldn't want to kill simulations, for example, if a sibling task hits the max failure limit, so stopping the whole graph would be undesirable.
https://openfirehawk.com/ [openfirehawk.com] Support Open Firehawk - An open source cloud rendering project for Houdini on Patreon. This project's goal is to provide an open source framework for cloud computing for heavy FX based workflows and allows end users to pay the lowest possible price for cloud resources.
Andrew Graham But anything that takes a long time, or that submits PDG on a remote system, will need some number of task retries before bailing out anything affected or downstream. We also wouldn't want to kill simulations, for example, if a sibling task hits the max failure limit, so stopping the whole graph would be undesirable.
FYI the Local Scheduler now has ‘Exit code handling’ which can be used to retry, and the HQueue Scheduler has a ‘retries’ job parameter that can be set.
That's good to know. So with HQueue, would it bail out on a sim if other tasks downstream are failing, or would that sim be safe to finish? It would be great to see this in Deadline too if it isn't already there.
Yes, for example if a partition contains a sim and other work items that fail before the sim is finished, the cook will carry on until all ready items are finished.
chrisgreb You can set your ‘Cache Mode’ parm to ‘Read’ or ‘Automatic’ and recook your node. It should pick up the expected output files and mark all your items as cooked right away.
What if we are using ROP Alembic? Like when we have a heavy mesh being exported to a single file, where some frames are failing?
A workaround would be to export an Alembic sequence… But I'm not sure if there is a way to merge them together later, or if we have to create another task just for that.
chrisgreb FYI the Local Scheduler now has ‘Exit code handling’ which can be used to retry, and the HQueue Scheduler has a ‘retries’ job parameter that can be set.