Processing multiple wedged simulations
- Tyler Britton2
- Member
- 85 posts
- Joined: April 2014
- Offline
I have a PDG network where I am generating 5 work items from a @wedge attribute using a Wedge TOP node. I am then running a sim that wedges the emission of the source, from frames 1-100. I then want to do another post operation on those results (for example, blurring the volume), per my original @wedge attribute. I can use the `@pdg_input` variable inside the File node to pick up the sim from the corresponding work item, and it all works fine.
However, when I set the frame range for my 2nd step (the blurring post operation) to the 1-100 frame range, I get 50,000 new work items for that second step (because it is taking my original 500 work items (5 wedges × 100 frames) and running each of them for another 100 frames in the next step). However, I only want 100 work items per wedge index, since I am only picking up the resulting frames from my original sim.
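To put numbers on that blow-up (a quick sketch of the arithmetic):

```python
# How 5 wedges over a 1-100 frame range explode into 50,000 work items
# when the second step is also cooked over the full frame range.
wedges = 5
sim_frames = 100                      # frames 1-100 of the sim
sim_items = wedges * sim_frames       # 500 upstream work items

blur_frames = 100                     # 2nd step evaluated over 1-100 again
blur_items = sim_items * blur_frames  # 50,000 items instead of 500
print(sim_items, blur_items)          # 500 50000
```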
I have tried a number of things, including isolating the first frame of each wedge and then stepping through the sim with a @frame attribute in a File SOP, but it still sticks to that first frame and does not run through the whole sim.
I know this might sound a bit confusing, and I can make a hip file when I get home, but I was wondering if someone could point me in the right direction off the bat. I was trying to use Partition TOP nodes to group my original @wedge work items into single work items after my initial pyro sim, but with no success.
- Ostap
- Member
- 209 posts
- Joined: Nov. 2010
- Offline
- kenxu
- Member
- 544 posts
- Joined: Sept. 2012
- Offline
Hi Tyler,
It sounds like for your second step, at the very least you should be setting the “evaluate using” parameter to “single frame” instead of “frame range”. If your blurring operation needs multiple upstream frames, then consider a Partition by Frame node. You shouldn't need mappers for this (mappers are an advanced topic that mostly applies to tying together two procedural chains - for example, handling some manual edits on top of an existing procedural chain. We'll have tutorials around it in the future once people get more used to using partitioners).
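To illustrate what partitioning by frame does to the work items (a rough sketch of the grouping logic, not PDG's actual API):

```python
# Rough sketch of partitioning by frame: 500 sim work items
# (5 wedges x 100 frames) collapse into 100 partitions, one per
# frame, each holding the 5 wedge variants of that frame.
from collections import defaultdict

sim_items = [{"wedge": w, "frame": f}
             for w in range(5) for f in range(1, 101)]

partitions = defaultdict(list)
for item in sim_items:
    partitions[item["frame"]].append(item)

print(len(partitions))     # 100 partitions, one per frame
print(len(partitions[1]))  # 5 items feed the frame-1 downstream item
```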
- Ken Xu
- Ostap
- Member
- 209 posts
- Joined: Nov. 2010
- Offline
- kenxu
- Member
- 544 posts
- Joined: Sept. 2012
- Offline
Hi Ostap,
Very good point. If your second frame range cannot be procedurally derived from the first, then it is a separate chain of proceduralism, and mappers are the right construct to tie the two chains together.
To this point we have not pushed hard on this topic (there are literally no tutorials that touch mappers at this moment), because we felt the community needs time to even absorb the first two major constructs in PDG: processors and partitioners.
That said, mappers are a powerful and necessary construct that can model any kind of human (and thus non-procedural) interaction with a procedural system. For example, suppose I have a procedural building and want to do some manual decorations on top of the procedurally generated content, but I want to maintain those manual edits even if I go and update the procedural building by changing its parameters. The manual edits are a human interaction that cannot be derived procedurally, and to this point, how to properly maintain the manual edits on top of the procedural content has been a tough problem plaguing the community. Mappers offer a framework that can solve this problem, and other problems like it, such as topology-independent editing of procedural content, which is another example of manual interaction with a procedural system.
You have hit on another example with the above. Kudos - this shows your understanding of the system is becoming quite advanced.
- Ken Xu
- Tyler Britton2
- Member
- 85 posts
- Joined: April 2014
- Offline
Thanks for the help guys. The partitioning works great, but I am only getting it for one of my sims. I have not used partitioners a lot, so I am sure it is something I am doing incorrectly. I made an example hip file with the setup; it would be great if I could have some more guidance on how to complete the setup, maybe even using some mappers if I need to…? I have not found many good examples of mappers, as Ken said.
The frame range is 1-20, but in my real production case the wedges all have different frame ranges, so it would be nice to have the setup handle that case.
- kenxu
- Member
- 544 posts
- Joined: Sept. 2012
- Offline
Hi Tyler,
I took a quick look at your file. While I can't say for sure that this is what's causing the problem, I did notice at least a few issues:
1) The wedge node is creating 5 wedges on an integer attribute from 0 to 1. This means that 2/5 wedges will have a value of 0, and 3/5 wedges will have a value of 1. So for all intents and purposes, there are 2 wedges, not 5 (see the sketch after this list).
2) The Blur_FrameRange node appears to be pointing at the wrong part of the network - it is pointing to the original sim and not the blur part of the network.
3) I looked around and saw no mention of @wedge (the name of the wedge attribute being created in the beginning) being used anywhere in the DOP or SOP network. Maybe I missed something, but unless it is actually used in those networks, there will be no variation.
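To illustrate point 1 (a rough sketch, assuming the wedge samples are evenly spaced over the range and then rounded half-up to integers):

```python
# Why 5 wedges on an integer attribute over [0, 1] collapse to 2
# values, assuming evenly spaced samples rounded half-up to integers.
import math

count, lo, hi = 5, 0.0, 1.0
samples = [lo + i * (hi - lo) / (count - 1) for i in range(count)]
values = [math.floor(s + 0.5) for s in samples]
print(samples)  # [0.0, 0.25, 0.5, 0.75, 1.0]
print(values)   # [0, 0, 1, 1, 1] -> 2/5 get 0, 3/5 get 1
```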
- Ken Xu
- Tyler Britton2
- Member
- 85 posts
- Joined: April 2014
- Offline
Thanks for taking a look at it.
1) Yes, this was a mistake. Fixed.
2) Fixed.
3) It's in the noise offset of my emitter. I am not really going for anything amazing; I am just trying to figure out this partitioning.
I uploaded the scene with the fixes, but I am not getting the 5 wedged blurred results of the sim that I am looking for (I am only getting one of them). Thanks for your help.
- kenxu
- Member
- 544 posts
- Joined: Sept. 2012
- Offline
Ok, here is the problem - the blur part of the network is something that only accepts 1 of the 5 inputs. A single File SOP can only read 1 of the 5 upstream images. In order to blur all 5 images, you'll need something that has 5 File SOPs (or a single File Merge SOP), each set to `@pdg_input.0`, `@pdg_input.1`, `@pdg_input.2` …etc. Then the rest of your SOP network will need to properly blend those together.
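To illustrate the indexing (a rough sketch with made-up file names; in the network the File SOPs would carry the `@pdg_input.N` expressions shown above):

```python
# Rough sketch of the indexed-input idea: the partition's merged input
# list holds one file per wedge, and each File SOP picks one element.
pdg_input = [f"pyro_wedge{w}.bgeo.sc" for w in range(5)]

for i, path in enumerate(pdg_input):
    print(f"file{i + 1} reads @pdg_input.{i} -> {path}")
```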
Edited by kenxu - May 9, 2019 14:34:07
- Ken Xu
- Tyler Britton2
- Member
- 85 posts
- Joined: April 2014
- Offline
Thanks for the clarification, I never knew you could do `@pdg_input.1`. However, I am trying to set up work items for them individually, so that I get 5 sets of 1-100 frames, each outputting the blurred result of the incoming sim. I am sorry for not making that clearer.
- Ostap
- Member
- 209 posts
- Joined: Nov. 2010
- Offline
- chrisgreb
- Member
- 603 posts
- Joined: Sept. 2016
- Offline
- ChristopherC
- Member
- 19 posts
- Joined: Dec. 2013
- Offline
Building upon Ostap's question of having two ROP Fetch nodes connected with different frame ranges, would it make sense to have a new option for the ‘Evaluate Using’ parameter, named something along the lines of ‘Match Frame Range’, that would basically allow a ROP node to map its worker dependencies according to the ‘range’ attribute passed down by the nodes upstream?
Edited by ChristopherC - May 10, 2019 21:56:37
- kenxu
- Member
- 544 posts
- Joined: Sept. 2012
- Offline
Hi ChristopherC, I'm not sure I totally understand, but if one could procedurally determine the current frame range from upstream, then partitioning is the way to go. If the frame range is manually specified and not procedurally derived from upstream, then put a mapper in between with the appropriate specifications.
I don't think it makes sense to have the mapped range passed down from a workitem, because then the specification of how to do the mapping is per workitem, and mappers are supposed to specify dependency relationships from all upstream workitems to all downstream workitems. The rule with which they do so is not on a per-workitem basis.
Edited by kenxu - May 13, 2019 13:46:41
- Ken Xu
- Tyler Britton2
- Member
- 85 posts
- Joined: April 2014
- Offline
chrisgreb
If you just want to run a sim again for each wedge, you can use a partitionbyattribute for the wedgeindex value and then generate new simulations from each of those partitions.
Thank you for the help, but it looks like the Blur process is only getting the first frame of the Sim step, even though the frame attribute is picking up the current frame…
- chrisgreb
- Member
- 603 posts
- Joined: Sept. 2016
- Offline
Tyler Britton2
it looks like the Blur process is only getting the first frame of the Sim step, even though the frame attribute is picking up the current frame…
Yes, sorry that wasn't quite right. If you look at the Output files on one of the partitions in partitionbyattribute1 you'll see that all the wedge frames are there, because the outputs have been merged. Since the expression on the File SOP is `@pdg_input`, it always reads the first element in that list of inputs, which is always element 0 (frame 1).
There are two ways to fix this:
1. Change the File expression to use the correct element of the list ($F-1): `pdginput($F-1, "", 1)` (see the sketch below)
Or
2. On Blur_FrameRange, set the Expand Input Files Across Frame Range toggle. This means the input list of each work item will be set to only one of the elements from the partition's outputs, and so your original expression will work.
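To illustrate the off-by-one in fix #1 (a rough sketch with made-up file names, assuming the partition's outputs are sorted by frame):

```python
# Why fix #1 needs $F-1: the partition's merged output list is 0-based,
# while frames start at 1. File names are made up for illustration.
outputs = [f"sim.{frame:04d}.bgeo.sc" for frame in range(1, 21)]

def pdg_input_for_frame(frame):
    # Mirrors the HScript expression pdginput($F-1, "", 1).
    return outputs[frame - 1]

print(pdg_input_for_frame(1))   # sim.0001.bgeo.sc (element 0)
print(pdg_input_for_frame(20))  # sim.0020.bgeo.sc (element 19)
```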
Edited by chrisgreb - May 13, 2019 14:39:36
- Tyler Britton2
- Member
- 85 posts
- Joined: April 2014
- Offline
Hey Chris, I tried both of those: #1 is erroring on the expression, and the Expand Input Files solution for #2 is still only bringing in the first frames. I am working in 17.5.173 - could that be why?
- chrisgreb
- Member
- 603 posts
- Joined: Sept. 2016
- Offline
- ChristopherC
- Member
- 19 posts
- Joined: Dec. 2013
- Offline
Hello Ken,
Right now we're mostly thinking about how to use PDG in an ‘animation’ context, where each worker would represent a frame. I believe that an artist connecting two ROP Geometry nodes with different frame ranges would intuitively expect the worker dependencies to be connected according to the frames that they represent. At least that's what I'd expect! See the screenshot below for an example.
We were thinking of writing such a mapper, but then it might quickly become confusing and cumbersome for artists used to the ROP context if they'd suddenly have to remember to add a new mapper node before each ROP node (geometry, mantra, …) in their TOP network, depending on whether the current ROP's frame range differs from the upstream nodes or not. That's why I was hoping that this could be streamlined so that, as users, we'd only have to remember to set this ‘Match Frame Range’ option to get the dependencies to map on a per-frame basis as we'd expect.
To make it user-friendly, I guess that we could always wrap each ROP node into a custom subnetwork that'd include such a mapper, but it would be great not to have to go down that road, if possible at all.
But maybe I'm missing something - I still have much to digest before grasping PDG!
- kenxu
- Member
- 544 posts
- Joined: Sept. 2012
- Offline
Hi Christopher,
Now that I think about it some more, in this particular case we may not need a mapper. The reason is that it is in fact possible in this case to procedurally derive the right relationships from the information given. If we already know the upstream frame range and the downstream frame range ahead of time, then we have all the information we need to connect the right frames together. However, we are missing an “Evaluate Using” mode that allows for disjoint frame ranges like that. We will add this in, and it should solve your problem neatly.
In the meantime, here is a workaround. It's not elegant - what I described above is the right way to solve the problem - but it's instructive, so we'll post it here. The idea is to generate all the needed frames, then filter out the frames you don't want.
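To illustrate both the derived-connectivity idea and the generate-then-filter workaround (a rough sketch with made-up frame ranges):

```python
# Two ways to line up disjoint frame ranges (ranges are made up).
sim_frames = range(1, 101)    # upstream: frames 1-100
blur_frames = range(20, 61)   # downstream: frames 20-60

# a) Derive the connectivity ahead of time: pair each downstream
#    frame with its matching upstream frame.
pairs = [(f, f) for f in blur_frames if f in sim_frames]

# b) The workaround: generate a downstream item for every upstream
#    frame, then filter out the frames outside the wanted range.
generated = list(sim_frames)
kept = [f for f in generated if f in blur_frames]

print(len(pairs), len(generated), len(kept))  # 41 100 41
```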
Finally, what I said about mappers and their functionality earlier stands. The key to us managing to solve this problem without mappers is that we've figured out a way to procedurally derive the connectivity information from the settings the user has entered ahead of time. If, for example, we wanted to pick out a few procedurally generated faces of a building for further processing (e.g. to add decorations to them), then those faces won't even exist until the building is generated, and so we won't know how to connect the decoration operations to the faces ahead of time like we did here.
Edited by kenxu - May 14, 2019 11:54:19
- Ken Xu