JZ Li
ljzmathematics
Recent Forum Posts
What is the data storage method of Houdini Opencl? June 15, 2024, 1:37 p.m.
play_w_madness wrote:
Gas OpenCL DOP nodes have a "Flush Attributes" toggle: "After writing to attributes, the new values are left on the GPU until another solver requests the geometry attributes. This lets the attributes stay there and provides the most efficiency. Turning on flush attributes forces them to be copied back from the GPU into geometry memory explicitly. This should not be required."
It's off by default, meaning the data isn't flushed back to host memory until necessary.
As for your other question, the HDK provides the CE_Grid struct (https://www.sidefx.com/docs/hdk/class_c_e___grid.html), which stores a cl::Buffer that you could potentially use for interop with CUDA.
Basically, what you need are the CE_* headers from the HDK: CE_Context, CE_VDBGrid, etc.
Additionally, I recommend taking a look at:
https://www.sidefx.com/docs/hdk/_g_a_s___open_c_l_8h_source.html
https://www.sidefx.com/docs/hdk/_s_i_m___object_8h_source.html
Thank you very much for the ideas you provided. I made some attempts based on your suggestions:
First, I create a Gas OpenCL node in the DOP network and bind the particles' "P" attribute (assuming there are 500,000 particles).
Then, in the HDK, I tried to read the data directly from the GPU buffer using CE_Context:
CODE:
void GAS_OpenCL_Lissajous::accessOpenCLBuffer(const GU_Detail *gdp, float time)
{
    CE_Context *ceContext = CE_Context::getContext();
    cl::Context clContext = ceContext->getCLContext();
    cl::CommandQueue clQueue = ceContext->getQueue();
    cl::Buffer gpuBuffer = ...; // how to get my GPU buffer??
    thrust::device_vector<Particle> deviceBuffer(500000); // 500,000 particles
    Particle *tempHostBuffer = new Particle[500000];
    ceContext->readBuffer(gpuBuffer, sizeof(Particle) * 500000, tempHostBuffer);
    ...
    ...
}
But at this point I ran into a problem. Browsing the API, I found that most methods re-allocate a GPU buffer and then write data from the gdp into it, rather than reading the existing Gas OpenCL buffer directly. So I have not found an elegant way to obtain the existing CL buffer. Can you give me some more tips? Thank you so much.
What is the data storage method of Houdini Opencl? June 13, 2024, 10:26 p.m.
Hello everyone, I have a small question about Houdini's OpenCL data storage:
In the Vellum solver, many nodes are written in OpenCL. My understanding is that the data bindings push data from the CPU into GPU memory. Does that data then stay resident in GPU memory for subsequent calculations? Or is all the data copied back to the CPU after each OpenCL node runs, with the corresponding GPU buffers cleared?
This raises a follow-up question: in the Houdini HDK, if I want to write my own CUDA code, is there a way to directly access Houdini's GPU-resident data?
Thank you so much.
Can use Eigen C++ library in Vex or OpenCL? April 10, 2024, 7:25 a.m.
Many thanks for your advice, animatrix; it looks like a very good and interesting solution!

animatrix wrote:
Hi,
You can use Eigen inside Houdini via a C++ Wrangle.
This project also integrates the Eigen library:
https://github.com/lecopivo/cpp-wrangle
So you can try that version to access the Eigen library directly. This is the closest you can get, albeit not inside VEX or OpenCL itself.