Karma beta documentation

- jarenas (Member)

Hi,
I know Karma is in beta and the H18 docs are still a work in progress, but I wanted to ask whether there are plans to release more in-depth documentation on Karma soon, and more importantly a list of features, known bugs and current limitations, so that we don't report bugs and issues you already know about.
Thanks, and congratulations on the hard work you put into each release.
- Hamilton Meathouse (Member)
Yeah, I agree that would be very helpful. Off the top of my head, here are a bunch of things I'm surprised aren't in there, even for a beta:
- Can't save the viewport Karma/Hydra frames (you can flipbook, but that restarts the Karma delegate rendering again)
- Can't use the Render View with Karma
- Can't render to MPlay
These three things mean we can't save or compare frames rendered within the Houdini GUI.
- Can only render to disk
- OptiX denoising happens concurrently with the Karma render, not after Karma has finished resolving.
- No control over bounces/limits in the Karma node - but somehow you can in the viewport display options?
- No motion-blur controls in the Karma node?
- mark (Staff)
In today's daily build, you should be able to render to MPlay - please let us know if you're still having issues.
The render view is already a to-do item for a future release. I'm not sure if we have an existing RFE for the flipbook issue, so it might be worthwhile to add one.
- OptiX runs concurrently, but also once the render is finished. When rendering to disk, the denoiser should only be triggered on the final frame.
- For the bounces/limits/motion-blur controls, there are “Render Geometry Settings” and “Render Settings” LOPs which control this behaviour. Standard USD inheritance models should work for the geometry settings.
Also note that for motion blur, you'll likely have to use a Cache LOP to make sure Hydra has enough knowledge about adjacent frames.
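To make that concrete, here is a minimal Python sketch of the LOP chain described above. The node type names ("rendergeometrysettings", "rendersettings", "cache") and the overall layout are my assumptions and may not match the exact internal names in your build, so treat it as an illustration rather than a recipe.

```python
# Minimal sketch of the LOP chain described above; node type names are assumptions.
import hou

stage = hou.node("/stage")

# Per-prim overrides (e.g. bounce/limit behaviour) via a Render Geometry Settings
# LOP; standard USD inheritance carries these settings down the prim hierarchy.
geo_settings = stage.createNode("rendergeometrysettings", "bounce_overrides")

# Scene-wide limits and motion-blur controls via a Render Settings LOP.
render_settings = stage.createNode("rendersettings", "karma_settings")
render_settings.setInput(0, geo_settings)

# A Cache LOP so Hydra has access to adjacent frames, which motion blur needs.
cache = stage.createNode("cache", "motion_blur_cache")
cache.setInput(0, render_settings)
```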
- alexsomma (Member)
mark:
“The render view is already a to-do item for a future release.”
Hi Mark & SideFX,
I must admit this causes me some concern.
First, let me start by saying how much I appreciate the effort you've put into the node-based USD workflow with Solaris. Ever since I first saw it, I've thought it was a game changer and have been looking forward to the day we could use it in production.
I also want to mention that we are a studio that has been doing all of our layout, lighting and rendering work in Houdini since 2015.
What concerns me is that it feels like you are under-prioritizing the importance of having decent render view features at launch. While the viewport interactivity with Karma/Hydra is very nice, the workflow for finalizing the look and dialing in the sampling is a big step back. While I do understand that the Hydra viewport is a first iteration, I still cannot understand why you chose to launch without these essential features, both from a user standpoint and from a branding standpoint. We've relied on the legacy render view since 2015, and even though it's been OK, if we're being honest it was already falling behind, lacking several simple productivity features that would make our everyday lives easier. Now that you are labeling the old render view as legacy, the default render view experience in Houdini has to be one of the least feature-complete in the industry.
I've voiced this concern several times before, but since I'm now seeing that a proper render view is scheduled for “a future release”, I feel the need to go through this again. These points are very obvious, but I'm mentioning them because it sometimes feels like SideFX does not fully appreciate how much these features matter.
There are many ways of thinking about this, but after a scene is set up, the remaining work can be divided into two main areas: working on the look, and dialing in the render settings. To get these done efficiently, a few things are essential:
1. Snapshots
Snapshots are essential in both areas. When working on the look, we are constantly taking snapshots to compare and make decisions on shaders, lighting and even layout. When dialing in sampling, it is also necessary to switch back and forth to check the often tiny differences between render settings. Without this, we have to remember what the previous look was, or what the previous indirect noise level was, and that's just unrealistic.
Features already missing from the legacy render view: keeping snapshots between sessions, thumbnails, render info, scene states, side-by-side comparison, tying snapshots to takes.
Current workaround: save the image to disk and open it in a compositing package. This is an extremely slow workflow, made even slower because we often don't want to render the whole image when working this way. We could try the ancient viewport snapshot feature, but it's a horrible experience.
2. Render region
Another ultra-essential feature. Even with the nice interactivity we get in the Solaris viewport, a quick render-region workflow is essential. The usual workflow is to look through the shot camera but constantly drag regions wherever you need to focus your attention. The speedup from this can't be overstated - getting a small region at the final look and quality, at the actual render resolution, lets us make judgements in a fraction of the time.
Current workaround: set up cameras and crops for every region you are interested in and render these to disk (a rough sketch of this crop approach follows after this list) - not viable. Maybe we could make a tiny Solaris viewport and use the new 2D viewport feature to bounce around? Nope, there are no snapshots, so we can't compare results anyway.
3. Fixed resolution with fit window and 1:1 pixel modes
Also very important for both. Shaders need to be dialed in with enough detail for the final resolution, and for sampling it is crucial to see the output at 1:1. By switching between fit-window and 1:1 modes, we can judge the whole look and inspect details carefully from the same render.
Current workaround: we tried using the “image view”, but there is no way to be sure it's at 1:1 and no way to fit to the window.
There are many other issues with the missing features, but these are the most essential ones as we see it.
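To be concrete about the crop workaround mentioned under point 2, here is a rough sketch of what it looks like at the USD level using the stock UsdRender schema. The file name, resolution and window values are just placeholders, and the Karma-specific render product/driver wiring is left out; it is only meant to show why this is no substitute for an interactive render region.

```python
# Rough sketch of the "cameras and crops to disk" workaround from point 2.
# File name, resolution and crop values are placeholders, not production values.
from pxr import Usd, UsdRender, Gf

stage = Usd.Stage.Open("shot.usda")  # hypothetical shot file

settings = UsdRender.Settings.Define(stage, "/Render/rendersettings")
settings.CreateResolutionAttr(Gf.Vec2i(1920, 1080))

# dataWindowNDC restricts rendering to a sub-region of the frame in normalized
# (0-1) screen space -- roughly what a render region would give us, except it
# has to go through a full render-to-disk round trip for every tweak.
settings.CreateDataWindowNDCAttr(Gf.Vec4f(0.25, 0.25, 0.75, 0.75))

stage.GetRootLayer().Save()
```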
Even though Karma itself is in beta, the Solaris viewport is not. We use Arnold, and I'm concerned this lack of features will affect the third-party renderers for a long time as well, slowing the adoption of LOPs considerably, unless they (hopefully) decide to make their own render viewers.
In your presentation, BlueSky Studios spoke about adopting LOPs, but they had their own render view integration. I'm glad they are able to work with LOPs, but that doesn't help the rest of us.
My guess is that most of you at SideFX are already aware of this, but since you still ended up releasing Houdini 18 without these major workflow elements solved, your conclusion seems to have been that users can work well enough without them, and that's the bubble I want to burst.
Since Solaris is such an awesome way of setting up our scenes and lighting them, it’s a shame it all ends in such an awkward rendering experience.
For us, this results in a big wow effect when first opening up Solaris, but at the end of the day, we’re not able to use it for actual production.
Edited by alexsomma - Dec. 3, 2019 12:22:09
VFX Supervisor @ Helmet Films & Visual Effects
http://www.helmet.no
- michaelblackbourn (Member)
Alex, we're in a very similar situation. We currently use Houdini and Redshift for all our lookdev, FX, lighting, and rendering work. I'm the VFX supervisor for The Embassy.
We have gone to considerable effort over the past few years to add features like shared primitives between discrete assets, internal instancing of primitives within assets, and layered overrides, using a patchwork of Python and OTLs… but it's a little inelegant, particularly when educating new artists.
USD's composition concept is fantastic, and I can't wait to move our pipeline over to it. It's the endgame of a pile of ideas we've had for years (it makes sense that Pixar has suffered through and solved the same issues, but with an orders-of-magnitude higher R&D budget).
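For anyone unfamiliar with what that replaces, here is a tiny sketch of a layered override done with plain USD composition instead of a Python/OTL patchwork; the prim paths and asset file are hypothetical.

```python
# Tiny sketch of a layered override via USD composition; paths and the asset
# file are hypothetical.
from pxr import Usd, Sdf, Gf

stage = Usd.Stage.CreateNew("shot_overrides.usda")

# Bring a published asset into the shot by reference...
chair = stage.DefinePrim("/shot/chair", "Xform")
chair.GetReferences().AddReference("chair_asset.usda")

# ...and author a stronger opinion in the shot layer; composition makes this
# override win without touching the asset file itself.
color = chair.CreateAttribute("primvars:displayColor", Sdf.ValueTypeNames.Color3fArray)
color.Set([Gf.Vec3f(1.0, 0.0, 0.0)])

stage.GetRootLayer().Save()
```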
To adopt LOPs we will need a solid renderer (I'm expecting Redshift to get there first) and a render view that is at least as functional as the previous one in Houdini. We're just now getting H18 set up at work, and we'll see how Redshift's LOPs integration is.
Also… as mentioned in another thread, Karma currently has no isPhantom (or primary-rays-off) geometry render setting, which is another big hole in how we work in Houdini. Is there a better forum to discuss this and other absolute needs for adoption? Maybe I should take this to a beta location (this is my personal email account, but I'm sure we have access at work)…
- nicholasralabate (Member)
One more resurrection for 19.0, hopefully the last. I just wanted to put my vote in for render region as well. I'm trying out Karma XPU this month and it's... interesting that it's been almost two years and render region is still a work in progress. Crossing my fingers for 19.5.
Edited by nicholasralabate - Feb. 10, 2022 17:22:34
- jsmack (Member)
nicholasralabate:
“One more resurrection for 19.0, hopefully the last. I just wanted to put my vote in for render region as well. I'm trying out Karma XPU this month and it's... interesting that it's been almost two years and render region is still a work in progress. Crossing my fingers for 19.5.”
Switch to 2D pan mode, then hold Shift while box-selecting to create a render region (in Houdini 19.0).