clEnqueueNDRangeKernel Vellum bug?

Member
70 posts
Joined: Nov. 2017
When I use Vellum with the simplest possible setup, as soon as I use the Pressure option in the constraints I get this message in the Houdini Console:

"OpenCL Exception: clEnqueueNDRangeKernel (-52). This is probably due to incorrect number of kernel parametrs."

Does anyone know what this means? I always make sure to turn off all OpenCL options because they mess up all my sims, so that's not the issue, but what else could it be?
I have a 16-core AMD Ryzen 3950X, an NVIDIA RTX 2080, and 128 GB of RAM, so I know it's not my computer. Any idea what this means?
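
Looking it up, -52 in the OpenCL headers is CL_INVALID_KERNEL_ARGS, which matches the "incorrect number of kernel parameters" wording. Here's a minimal host-side C sketch of where that error surfaces (placeholder names, obviously not Houdini's actual code):

    /* Sketch: how a host program hits CL_INVALID_KERNEL_ARGS (-52).
       Context, queue, and kernel setup are elided; names are placeholders. */
    #include <stdio.h>
    #include <CL/cl.h>

    void launch(cl_command_queue queue, cl_kernel kernel, cl_mem buf, size_t n)
    {
        /* If this clSetKernelArg call is skipped, or an argument index
           is missed, the enqueue below fails with CL_INVALID_KERNEL_ARGS. */
        clSetKernelArg(kernel, 0, sizeof(cl_mem), &buf);

        size_t global = n;
        cl_int err = clEnqueueNDRangeKernel(queue, kernel, 1, NULL,
                                            &global, NULL, 0, NULL, NULL);
        if (err == CL_INVALID_KERNEL_ARGS)   /* -52 */
            fprintf(stderr, "kernel launched with missing/invalid arguments\n");
        else if (err != CL_SUCCESS)
            fprintf(stderr, "clEnqueueNDRangeKernel failed: %d\n", err);
    }

Since the kernels here are Houdini's own, the mismatch has to be happening inside the solver or the driver, not in my scene.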
Thank you guys.
Member
70 posts
Joined: Nov. 2017
Actually, it WAS OpenCL, my worst enemy in Houdini. If there were a way to uninstall it, I would, I swear. The best fix I've found so far is Edit > Preferences > Miscellaneous and changing the OpenCL device from GPU to CPU. I hope this helps someone else hit by this OpenCL bug. I was hoping there would be an Off option (since I can't delete it), but at least now Houdini can complete this very basic simulation.
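
If you'd rather set it once per machine instead of through the dialog, I believe the same thing can go in your houdini.env (this assumes the documented HOUDINI_OCL_DEVICETYPE variable does the same as the preference):

    # houdini.env -- run Houdini's OpenCL nodes on the CPU device instead of the GPU
    HOUDINI_OCL_DEVICETYPE = CPU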
Member
8041 posts
Joined: Sept. 2011
Vellum uses OpenCL. If you want to avoid OpenCL, don't use Vellum or other OpenCL-based nodes. The error you report sounds like the native command queuing feature added in 18.5, which was incompatible with NVIDIA's Turing architecture until they fixed it in a driver update. Try installing an NVIDIA driver from after November 2020 to see if the error goes away. I've also seen that error when the simulation doesn't fit into memory, which can happen if there are too many points or constraints. The RTX 2080 only has 8 GB of VRAM, which can be quite limiting; 16 GB of VRAM is recommended for bigger OpenCL sims.
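
If you want to check what the OpenCL runtime actually reports for your card, here's a quick standalone C sketch (first GPU device on the first platform; purely illustrative):

    /* Sketch: print the global memory size the OpenCL driver reports,
       to sanity-check whether a sim could plausibly fit in VRAM. */
    #include <stdio.h>
    #include <CL/cl.h>

    int main(void)
    {
        cl_platform_id platform;
        cl_device_id device;
        cl_ulong mem = 0;

        if (clGetPlatformIDs(1, &platform, NULL) != CL_SUCCESS ||
            clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL) != CL_SUCCESS)
            return 1;

        clGetDeviceInfo(device, CL_DEVICE_GLOBAL_MEM_SIZE, sizeof(mem), &mem, NULL);
        printf("GPU global memory: %llu MB\n",
               (unsigned long long)(mem / (1024 * 1024)));
        return 0;
    }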
Member
70 posts
Joined: Nov. 2017
jsmack
Vellum uses OpenCL. If you want to avoid OpenCL, don't use Vellum or other OpenCL-based nodes. The error you report sounds like the native command queuing feature added in 18.5, which was incompatible with NVIDIA's Turing architecture until they fixed it in a driver update. Try installing an NVIDIA driver from after November 2020 to see if the error goes away. I've also seen that error when the simulation doesn't fit into memory, which can happen if there are too many points or constraints. The RTX 2080 only has 8 GB of VRAM, which can be quite limiting; 16 GB of VRAM is recommended for bigger OpenCL sims.


I was only trying to squash an inflated torus, a simple four-node simulation. In 3ds Max with tyFlow and the CUDA engine I was capable of much more, without hiccups, on this same setup. Actually, when I had 64 GB of RAM, 3ds Max was handling far more than Houdini can at 128 GB. I think Houdini is really, really bad at optimizing resources. I can run 100-million-particle fluid simulations overnight with Phoenix FD in 3ds Max, but Houdini would never handle more than 10 million on the same machine unless you have a render farm. I really hope SideFX tries to optimize Houdini for a single user rather than a farm of computers.

That said, the RTX 2080 is still considered a good GPU by most standards, so I'm not sure what I'm supposed to do if I want to use Vellum even for smaller sims. No computer in this world is good enough for Houdini's computational needs, because it's made for render farms, not a single user; I get that. That's why there's that Distribution tab: they never intended us to do big sims and renders at home as solo users, it's made for studios. But I'm running small sims and Houdini can still barely do it on this maxed-out computer. I hope they either make a separate version of Houdini for single-user needs or work on their optimization, because right now it's quite catastrophic in terms of computational needs versus output.

Is there a way to use Vellum without OpenCL? My GPU drivers are up to date, because if I skip updating them I have issues with other things in Houdini.
So would you recommend dropping back to an earlier, more stable version of Houdini (because all these new updates brought a ton of issues), or just using FEM and Grains instead of Vellum? Thank you.
Edited by Nikodim Fomich - April 23, 2021 18:38:24
Member
48 posts
Joined: Aug. 2017
Hello, I have the exact same problem when using the Pressure constraint with Vellum, even on a simple sphere, and I'm on the very latest NVIDIA driver. This bug only started to appear today. (I haven't used the Pressure feature in over a week or two, so I'm not sure if the issue comes from a Houdini upgrade or a driver upgrade, though I do believe I upgraded my NVIDIA driver last week.)
I use a 3090 Ti with 24 GB of VRAM and I'm trying to simulate a 42-point sphere, so VRAM is definitely not the problem here.
Edited by SciTheSqrl - April 25, 2021 17:35:11

Attachments:
001204_houdini_JJf2O3SynI_2021-04-25_23-34-14.png (330.3 KB)

Member
48 posts
Joined: Aug. 2017
So I tried installing NVIDIA's Studio driver instead of the Game Ready one, and it fixed the problem.
I tried reinstalling the Game Ready driver twice and the problem was still there, so the workaround for now would be to keep using the Studio driver.
Member
250 posts
Joined: March 2013
Not sure why you're unloading on Houdini; it could be that you need to learn to optimize more. If it were so terrible, we'd all still be using Max + TP + FumeFX, which is not the case.

The first page of the What's New in H18.5 Vellum notes lists driver versioning for the use of Pressure constraints, because they have been optimized.

What's New in Vellum [www.sidefx.com]

Your GPU is more than powerful enough to run a bunch of OpenCL tools in Houdini; this will come down to driver versions. I'm running a 980 Ti and a Quadro at home, zero problems.

By the way, Grains are in the same family: they are PBD, and Vellum is just XPBD, so there's not much difference at all. The only thing is that the Vellum code is potentially looked at more often than the POP Grains XPBD code, so in general Vellum grains will be quicker.
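
If anyone wants to see the difference in code, here's a toy single-constraint sketch of my own (nothing to do with SideFX's actual implementation): PBD projects a distance constraint directly, while XPBD adds a compliance term and an accumulated multiplier, so stiffness stops depending on iteration count and timestep.

    /* Sketch: one Gauss-Seidel pass over a distance constraint, PBD vs XPBD. */
    #include <math.h>

    typedef struct { float x, y, z; } vec3;

    /* p0, p1: positions; w0, w1: inverse masses; rest: rest length;
       alpha: compliance (0 = rigid, i.e. plain PBD); *lambda: accumulated
       multiplier, reset to 0 each substep; dt: substep size. */
    void xpbd_distance(vec3 *p0, vec3 *p1, float w0, float w1,
                       float rest, float alpha, float *lambda, float dt)
    {
        vec3 n = { p1->x - p0->x, p1->y - p0->y, p1->z - p0->z };
        float len = sqrtf(n.x*n.x + n.y*n.y + n.z*n.z);
        if (len < 1e-9f) return;
        n.x /= len; n.y /= len; n.z /= len;

        float C  = len - rest;            /* constraint violation */
        float at = alpha / (dt * dt);     /* XPBD: timestep-scaled compliance */
        /* plain PBD is the alpha == 0 special case: dl = -C / (w0 + w1) */
        float dl = (-C - at * (*lambda)) / (w0 + w1 + at);
        *lambda += dl;

        p0->x -= w0 * dl * n.x; p0->y -= w0 * dl * n.y; p0->z -= w0 * dl * n.z;
        p1->x += w1 * dl * n.x; p1->y += w1 * dl * n.y; p1->z += w1 * dl * n.z;
    }

Same projection loop either way; the compliance term is the whole difference.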


L
Edited by lewis_T - April 26, 2021 03:44:19
I'm not lying, I'm writing fiction with my mouth.
Member
48 posts
Joined: Aug. 2017
The problem here is a recent one; we're discussing a driver update that simply breaks compatibility. This problem started at almost the same time for me as for the host of this thread, so it's highly likely that it has nothing to do with the hardware, but with a recent update.
Member
8041 posts
Joined: Sept. 2011
SciMunk
The problem here is a recent one; we're discussing a driver update that simply breaks compatibility. This problem started at almost the same time for me as for the host of this thread, so it's highly likely that it has nothing to do with the hardware, but with a recent update.

So NVIDIA broke it with a driver update. That's a bummer. I remember that during the beta for the new Pressure constraints using native command queueing, the feature was incompatible with Turing-generation NVIDIA hardware until they released a new driver to fix it. I'm guessing they've since released a driver that broke that fix.
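
If anyone wants to verify what their current driver claims, here's a small C sketch querying the OpenCL 3.0 device-enqueue capability bit (I'm assuming that's the OpenCL feature behind what Houdini calls native command queuing):

    /* Sketch: ask whether the first GPU device reports device-side
       enqueue support. Requires OpenCL 3.0 headers for the token below. */
    #include <stdio.h>
    #include <CL/cl.h>

    int main(void)
    {
        cl_platform_id platform;
        cl_device_id device;
        cl_device_device_enqueue_capabilities caps = 0;

        if (clGetPlatformIDs(1, &platform, NULL) != CL_SUCCESS ||
            clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL) != CL_SUCCESS)
            return 1;

        clGetDeviceInfo(device, CL_DEVICE_DEVICE_ENQUEUE_CAPABILITIES,
                        sizeof(caps), &caps, NULL);
        printf("device-side enqueue: %s\n",
               (caps & CL_DEVICE_QUEUE_SUPPORTED) ? "supported" : "not supported");
        return 0;
    }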
Member
48 posts
Joined: Aug. 2017
Yeah, fortunately the Studio version of the driver does work, so that's a good temporary workaround (or permanent; it's not like the Studio drivers are bad).
Edited by SciTheSqrl - April 26, 2021 07:23:15
Member
250 posts
Joined: March 2013
Yeah, it has nothing to do with hardware; that was my point in the post. It is driver related.
But I listed both my Quadro and 980 Ti to show that it does indeed work. You should always be careful with drivers, as updating to solve some gaming problem risks breaking something in another app.
Latest is not the greatest.
I'm not lying, I'm writing fiction with my mouth.
Member
30 posts
Joined: Dec. 2016
Yeah, I am having the same problem as the OP: a very simple network, and I get the following error: OpenCL Exception: clEnqueueNDRangeKernel (-52)

What are they doing to fix this? It's very annoying; I never had weird problems like this when using C4D.
Member
2 posts
Joined: Jan. 2019
I had this same issue today, August 2021, with a simple Vellum setup, after an NVIDIA driver update. I tried changing to the Studio driver; it didn't work. What worked for me was going back to an older driver: I went back to the November 2020 one and it fixed the problem. Annoyingly, I now get a constant message about a driver error in After Effects, but everything works in both Houdini and AE.
Member
1 post
Joined: July 2021
Definitely a driver issue. I just ran into the exact same thing when attempting Entagma's Silly Pillow tutorial, after having successfully used Vellum quite a bit yesterday. I rolled back the driver to the November 2020 one as well:


GeForce Game Ready Driver

Version: 457.30 WHQL
Release Date: 2020.11.9
Operating System: Windows 10 64-bit
Language: English (US)
File Size: 586.48 MB

Link to Driver [www.nvidia.com]
Member
8041 posts
Joined: Sept. 2011
It's working fine with Game Ready driver version 471.68 on Ampere. Has anyone tried the current driver (471.96) yet to see if that works?
Member
2 posts
Joined: Aug. 2013
I was having the same issue on a laptop Quadro A4000 and had to roll back from Studio 472.47 a couple of versions, to 463.15 in the 460 series, which resolved the problem. That would mean no OptiX in Houdini 19's Karma XPU, though.

As an aside, the HOUDINI_OCL_FEATURE_DISABLE=CL_DEVICE_DEVICE_ENQUEUE_SUPPORT setting from the SideFX release notes failed to fix the issue (not that I wanted that feature disabled; I was just curious).
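
In case anyone else wants to try the same toggle, the houdini.env line would look like this (variable name copied straight from the release notes; as noted above, it didn't help in my case):

    # houdini.env -- ask Houdini's OpenCL layer to skip device-side enqueue
    HOUDINI_OCL_FEATURE_DISABLE = CL_DEVICE_DEVICE_ENQUEUE_SUPPORT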
Member
1 post
Joined: Feb. 2020
If it helps anyone, I also had this problem when I updated my driver to 511.65. I also tried 497.29 and 471.96; they didn't work.

But 457.30 works fine with an RTX 2080 on desktop Windows 10.
Thank you, njansen, for the help.
Member
1 post
Joined: Sept. 2019
I don't think it's the driver's fault.
I've been getting that error in 18.5.
I just use version 17.5, and I don't get any error.
Member
8041 posts
Joined: Sept. 2011
naseful
I don't think it's the driver's fault.
I've been getting that error in 18.5.
I just use version 17.5, and I don't get any error.

17.5 didn't use native command queuing, so it was impossible to get that error there.