Doing sequential execution on parallel tasks
- romainubi
- Member
- 5 posts
- Joined: June 2018
- Offline
Hi!
I am trying to automate something. I am at a point where I have X assets, and for each of those assets I have a variable number of tasks to be done sequentially (for example, aggregating objects in the scene for a given asset).
My first try was to add internal dependencies on the work items using a python processor, so that the work items of the same asset depended on each other, and then pass the work items to a rop fetch. But the dependencies seem to be lost when they reach the downstream node.
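To show what I mean by those internal dependencies, here is a rough plain-Python sketch of the chains I am trying to build (the asset and task names are made up, and this is not the actual python processor code):
```python
# Plain-Python sketch only (made-up asset/task names, not the PDG API):
# within one asset, each task depends on the previous task of that asset,
# so an asset's tasks form a chain while different assets stay independent.
from collections import defaultdict

tasks = [("assetA", "aggregate"), ("assetA", "export"),
         ("assetB", "aggregate"), ("assetB", "export"), ("assetB", "publish")]

chains = defaultdict(list)  # asset -> ordered list of its tasks
deps = {}                   # (asset, task) -> the (asset, task) it waits for

for asset, task in tasks:
    if chains[asset]:
        deps[(asset, task)] = (asset, chains[asset][-1])
    chains[asset].append(task)

print(deps)  # only same-asset dependencies exist
```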
Now I am looking into using the feedback loop but cannot get it to work the way I want.
I have created partitions in such a way that a partition can never contain two tasks of the same asset. I feed that into a feedback loop containing a work item expand node followed by my rop fetch. Unfortunately, it seems that the feedback loop “untags” the upstream work items as partitions, which makes the work item expansion fail.
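To make the partitioning rule concrete, here is the idea in plain Python (again only an illustration with made-up names, not the actual partitioner code):
```python
# Illustration of the partitioning rule (plain Python, made-up names):
# partition index = position of the task within its asset's task list,
# so partition 0 holds the first task of every asset, partition 1 the
# second task, and so on -- never two tasks of the same asset together.
from collections import defaultdict

tasks = [("assetA", "aggregate"), ("assetA", "export"),
         ("assetB", "aggregate"), ("assetB", "export"), ("assetB", "publish")]

count = defaultdict(int)        # asset -> tasks assigned so far
partitions = defaultdict(list)  # partition index -> its tasks

for asset, task in tasks:
    partitions[count[asset]].append((asset, task))
    count[asset] += 1

for index in sorted(partitions):
    print("partition", index, partitions[index])
```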
Any idea how to solve my problem?
I hope my post is not too messy…
Thanks a lot!
- kenxu
- Member
- 544 posts
- Joined: September 2012
- Offline
Hi there,
Yes, feedback loops are likely the construct you are looking for to solve your problem. However, there are some gotchas with them that we have not yet explained well (but I'm in the process of making another master class to better explain it):
1. Dynamic partitioning is not currently supported inside feedback loops.
2. Only actual outputs can be fed back to the next loop iteration; regular attributes cannot currently be fed back.
There are work-arounds to the above restrictions. If you post your scene file, we'll work with you to get it going.
- Ken Xu
- romainubi
- Member
- 5 posts
- Joined: June 2018
- Offline
- kenxu
- Member
- 544 posts
- Joined: September 2012
- Offline
Currently, trying to expand workitems from an upstream partition in a loop is hard to do. This is because each iteration of the loop depends on the previous iteration, so trying to “trace things back” to the upstream partition (so you can find the partitioned workitems to be expanded again) can be very tricky. It can be done as a work-around in some cases (which I'm not going to show here, unless you really, really want to see it), but we don't recommend it.
I've attached a file here that I *think* does what you want: you have this “sortid” attribute, and I think the goal is to execute work items in ascending sortid order, using a feedback loop. I've used a sort node instead of a partition to do the sorting, and passed that off to the loop. Because we are not partitioning, there is no need to re-expand the work items in the loop.
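In plain-Python terms, the sorted setup simply walks the work items one after another in ascending sortid order, roughly like this (illustration only, with made-up item names):
```python
# What the sort + feedback loop amounts to: a single sequential chain of
# work items, cooked one after the other in ascending sortid order.
items = [{"name": "work_item2", "sortid": 1},
         {"name": "work_item0", "sortid": 0},
         {"name": "work_item4", "sortid": 2},
         {"name": "work_item1", "sortid": 0},
         {"name": "work_item3", "sortid": 1}]

for item in sorted(items, key=lambda it: it["sortid"]):
    print("cooking", item["name"])  # each iteration waits on the previous one
```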
- Ken Xu
- romainubi
- Member
- 5 posts
- Joined: June 2018
- Offline
Thanks for your time!
I think it does not solve my problem, but I see that my scene is not great at demonstrating it, since it really only shows one example: there are only two work items with the same “sortid” (of value zero). It would have been more explicit if other work items also shared a “sortid”, like three work items with a “sortid” of value 1.
Let's say we have two work items with “sortid” 0, two work items with “sortid” 1 and one work item with “sortid” 2.
If I am not wrong, in your corrected scene, all work items will spawn processes sequentially once they reach the loop, right?
Using the above example, it will run like this once they reach the feedback loop:
work-item0 sortid 0 -> work-item1 sortid 0 -> work-item2 sortid 1 -> work-item3 sortid 1 -> work-item4 sortid 2
Meaning five processes spawned one after the other in one thread.
What I need is:
work-item0 sortid 0 -> work-item1 sortid 0
//
work-item2 sortid 1 -> work-item3 sortid 1
//
work-item4 sortid 2
Three threads in parallel, with each thread having a queue of tasks to do sequentially.
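Put in plain-Python terms (made-up names, just to show the pattern I am after, not actual PDG code):
```python
# Desired pattern: one worker per sortid group, items inside a group run
# sequentially, and the groups themselves run in parallel.
from collections import defaultdict
from concurrent.futures import ThreadPoolExecutor

items = [("work_item0", 0), ("work_item1", 0),
         ("work_item2", 1), ("work_item3", 1),
         ("work_item4", 2)]

groups = defaultdict(list)
for name, sortid in items:
    groups[sortid].append(name)

def run_group(names):
    for name in names:  # sequential within one sortid group
        print("running", name)

with ThreadPoolExecutor() as pool:
    futures = [pool.submit(run_group, names) for names in groups.values()]
    for future in futures:
        future.result()  # wait for all parallel groups to finish
```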
I hope this makes a bit more sense.
A solution that I think could work is to write my own python job: use a python processor to build my work items and add internal dependencies between work items that share the same sortid, then specify a command that runs my python job. That would be a shame, though, since the python job would just replicate the TOP node I need.
I have attached a scene which uses this approach; it shows the order in which the tasks should execute.
Thanks a lot
Edited by romainubi - September 19, 2019 08:11:40
- kenxu
- Member
- 544 posts
- Joined: September 2012
- Offline
Ah, OK, I understand better. So we basically need to launch as many for-loops as there are sortids. The way to do this is via the topfetch feature, which allows a separate TOP network to be launched per work item. I put the for-loop in that separate TOP network (top_fetch_net). In the for-loop, we generate the number of iterations based on the ‘partitionsize’ attribute, which is inherited from the partition. Finally, notice there is the promote_partitioned_item_attrs node immediately above the topfetch. This is so we can promote whatever attributes you need from the partitioned items to the top level, so they are accessible in the for-loop. In this case, I aggregated the names of the characters onto the ‘upstream_names’ attribute.
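As a rough plain-Python analogy of that structure (not the actual node setup, just its shape, using the attribute names from the example file):
```python
# Each partition becomes one independent "fetched network" run; inside it,
# a for-loop does 'partitionsize' sequential iterations and reads the
# promoted 'upstream_names' attribute to know which upstream item each
# iteration stands for.
partitions = [
    {"partitionsize": 2, "upstream_names": ["work_item0", "work_item1"]},
    {"partitionsize": 2, "upstream_names": ["work_item2", "work_item3"]},
    {"partitionsize": 1, "upstream_names": ["work_item4"]},
]

def fetched_network(partition):
    # stands in for the for-loop inside top_fetch_net
    for i in range(partition["partitionsize"]):
        print("iteration", i, "->", partition["upstream_names"][i])

# the scheduler is free to cook the fetched networks in parallel; the point
# is that each one only iterates over its own partition's items
for partition in partitions:
    fetched_network(partition)
```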
- Ken Xu
- romainubi
- Member
- 5 posts
- Joined: June 2018
- Offline
Thanks for your patience, Ken, this does what I need!
Do you have plans to make this kind of setup a bit easier and faster to set up?
I cannot get it to work with my own scene, though :p
There is a particular rop node in my main top node that makes it fail. I will report it, since I cannot attach the scene in this particular forum section.
Edited by romainubi - September 20, 2019 13:02:11
- kenxu
- Member
- 544 posts
- Joined: September 2012
- Offline
We are definitely planning more improvements to the loops and topfetch features - these are powerful constructs that are still under-explored. That said, for your use case the basic structure of the solution won't change: the right way to dynamically launch a variable number of for-loops is through the topfetch feature.
WRT the last part of your problem, it sounds like there is some issue with a specific ROP and so the problem is not related to PDG itself?
- Ken Xu
- romainubi
- Member
- 5 posts
- Joined: June 2018
- Offline
Cool to hear that you are planning more things for those features!
Yes, I think the top fetch is a good workflow for my case. It is just that having to combine your partition's work item attributes and then manually uncombine them in the fetch seems a bit of a hack :p
Yes, my last problem is not related to PDG itself.
Anyway, thanks again for your help!