This topic has come up in several posts, but I still have some questions about specific workflows, even after watching Mark Tucker's Hive presentation.
What would be the best-performing approach for the following example?
Given a user-supplied pattern matching thousands of primitives, we have to run some checks that access some of their attributes. If a check succeeds, we create n references per primitive, with data fetched from the original primitive.
I would instinctively do the following:
read_stage = node.inputs()[0].stage()
...
write_stage = node.editableStage()
for path in paths:
    # Read attribute per prim from read_stage
    # Write: create references in write_stage
This has never printed a warning and I have never noticed poor performance. However, SideFX developers involved in Solaris have repeatedly told me this is dangerous and potentially worse performance-wise, because of how the stage locks work.
What I was told would be something like this:
read_stage = node.inputs()[0].stage()
...
# Read
for path in paths:
    # Read attributes
    # Store all needed data in arrays

# Write
write_stage = node.editableStage()
for path in paths:
    # Access the arrays for the necessary data
    # Create references
My question is: even when we are dealing with thousands of paths and accesses, does this second approach actually perform better? And what makes the first approach dangerous?
Thanks!