What is the data storage method of Houdini OpenCL?

Hello everyone, I have a little question about Houdini's OpenCL data storage:
In the Vellum Solver, many nodes are written in OpenCL. My understanding of the data bindings is that they push data from the CPU into GPU memory. Does that data then stay resident on the GPU for subsequent calculations? Or is everything copied back to the CPU after an OpenCL node runs, with all of the corresponding GPU buffers cleared?
A related question that comes to mind: in the Houdini HDK, if I want to write my own CUDA code, is there a way to directly access the data Houdini keeps on the GPU?
Thank you so much
- ljzmathematics
- play_w_madness
Gas OpenCL DOP nodes have a "Flush Attributes" toggle: "After writing to attributes, the new values are left on the GPU until another solver requests the geometry attributes. This lets the attributes stay there and provides the most efficiency. Turning on flush attributes forces them to be copied back from the GPU into geometry memory explicitly. This should not be required."
It's off by default, meaning the data isn't flushed back to host memory until necessary.
As for your other question, the HDK provides the CE_Grid struct ( https://www.sidefx.com/docs/hdk/class_c_e___grid.html [www.sidefx.com] ), which stores a cl::Buffer that you can possibly use for interop with CUDA.
Basically, what you need are the CE_... headers from the HDK: CE_Context, CE_VDBGrid, etc.
Additionally, I recommend taking a look at
https://www.sidefx.com/docs/hdk/_g_a_s___open_c_l_8h_source.html [www.sidefx.com]
https://www.sidefx.com/docs/hdk/_s_i_m___object_8h_source.html [www.sidefx.com]
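As a rough, untested sketch of where you'd start on the HDK side (the querySharedCEContext name is just for illustration; the CE_Context calls are the ones from the header):
CODE:
#include <CE/CE_Context.h>

// Rough sketch: grab Houdini's shared OpenCL compute context so your own
// code can work against the same cl::Context / command queue that the
// Gas OpenCL nodes use.
static void querySharedCEContext()
{
    CE_Context *ce = CE_Context::getContext();    // Houdini's compute context

    cl::Context      ctx   = ce->getCLContext();  // underlying cl::Context
    cl::CommandQueue queue = ce->getQueue();      // queue Houdini enqueues kernels on

    // Any cl::Buffer you get from Houdini (e.g. via CE_Grid or a GPU-backed
    // attribute) lives in this context, so that's what you'd have to share
    // with your CUDA / interop code.
}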
Edited by play_w_madness - June 14, 2024 09:49:11
- ljzmathematics
Thank you very much for the ideas you provided. I made some attempts based on your suggestions:
First, I create a Gas OpenCL node in the DOP network and bind the particles' "P" attribute (assume there are 500,000 particles).
Then, in the HDK, I tried to read the data directly from the GPU buffer using CE_Context:
CODE:
void GAS_OpenCL_Lissajous::accessOpenCLBuffer(const GU_Detail *gdp, float time)
{
    CE_Context *ceContext = CE_Context::getContext();
    cl::Context clContext = ceContext->getCLContext();
    cl::CommandQueue clQueue = ceContext->getQueue();

    cl::Buffer gpuBuffer = ...; // how to get my GPU buffer??

    thrust::device_vector<Particle> deviceBuffer(500000); // 500,000 particles
    Particle *tempHostBuffer = new Particle[500000];
    ceContext->readBuffer(gpuBuffer, sizeof(Particle) * 500000, tempHostBuffer);
    ...
    ...
}
But at this point I ran into some problems. Browsing the API, I found that most of the methods re-allocate a GPU buffer and then write the data from the gdp into it, rather than reading the existing Gas OpenCL buffer directly. So I haven't found a good way to elegantly obtain the existing CL buffer. Can you give me some more tips? Thank you so much.
Edited by ljzmathematics - June 15, 2024 13:40:02
- play_w_madness
From https://www.sidefx.com/docs/hdk/_s_i_m___geometry_8h_source.html [www.sidefx.com] :
/// Attempt to build a GPU-backed attribute from our geometry.
/// Returns 0 on failure.
/// You do not own the resulting GA_CEAttribute.
GA_CEAttribute *SIM_Geometry::getReadableCEAttribute(GA_AttributeOwner owner,
        const UT_StringRef &aname, GA_StorageClass storage,
        int &tuplesize, bool isarray, bool docopy) const
{
    return getReadableCEAttributePrec(owner, aname, storage, GA_PRECISION_32,
                                      tuplesize, isarray, docopy);
}

virtual GA_CEAttribute *SIM_Geometry::getReadableCEAttributePrec(GA_AttributeOwner owner,
        const UT_StringRef &aname, GA_StorageClass storage, GA_Precision prec,
        int &tuplesize, bool isarray, bool docopy) const;
From https://www.sidefx.com/docs/hdk/_g_a___c_e_attribute_8h_source.html [www.sidefx.com] :
cl::Buffer GA_CEAttribute::buffer() const { return myBuffer; }
cl::Buffer GA_CEAttribute::indexBuffer() const { return myIndexBuffer; }
So, I imagine the correct way of obtaining a handle to a GPU buffer would be to get a GA_CEAttribute pointer from the SIM_Geometry, which you get from the SIM_Object, which you get in your GAS_SubSolver::solveGasSubclass override in your solver class.
And judging by the SIM_Geometry public interface, I think it's not going to allocate/build a new buffer for the attribute if one already exists and it wasn't flushed.
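As a rough, untested sketch of that chain (MySolver and the error handling are just placeholders; I'm going off the headers above, not working code):
CODE:
#include <GAS/GAS_SubSolver.h>
#include <SIM/SIM_Object.h>
#include <SIM/SIM_Geometry.h>
#include <GA/GA_CEAttribute.h>

// Hypothetical GAS_SubSolver subclass; only the attribute lookup matters here.
bool
MySolver::solveGasSubclass(SIM_Engine &engine, SIM_Object *obj,
                           SIM_Time time, SIM_Time timestep)
{
    const SIM_Geometry *geo = obj->getGeometry();
    if (!geo)
        return false;

    // Ask for a GPU-backed "P" attribute. If it is already resident on the
    // GPU (i.e. it wasn't flushed), this should hand back the existing buffer
    // rather than rebuilding it.
    int tuplesize = 3;
    GA_CEAttribute *ce_p = geo->getReadableCEAttribute(
        GA_ATTRIB_POINT, "P", GA_STORECLASS_FLOAT,
        tuplesize, /*isarray=*/false, /*docopy=*/true);
    if (!ce_p)
        return false;                    // no GPU copy available

    cl::Buffer pbuf = ce_p->buffer();    // handle to the device-side buffer
    // ... hand pbuf (or its underlying cl_mem) to your CUDA / interop code ...

    return true;
}
Again, untested, so double-check the parameters (especially docopy) against the docs.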
Edited by play_w_madness - June 15, 2024 14:56:18