Did you know that Hydra (and therefore Solaris) lies to you about sphere geometry? It is not a Solaris issue: Hydra really does provide one of the worst approximations possible. Just place a sphere and a cube, merge them, and tweak the sphere's size to see the inconsistency.
When OpenGL draws something that looks like a lat-long sphere with two 10-valence poles, that is not just a viewport approximation. It is the REAL geometry every renderer receives instead of a sphere: 82 points, 90 polygons, at least 2.3 KB of data per mesh, roughly 600 times more than the single float needed to describe a sphere's radius. Place a sphere primitive and you simply waste memory, because "we support only meshes, curves, points and volumes", as Pixar tells us through Hydra. The sphere is the most trivial shape type, implemented even in educational renderers, and yet Hydra, now almost a standard for geometry handling, has lost this trivial feature. I cannot imagine why production-oriented renderers should cut their functionality down to this lowest common denominator.
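As a rough back-of-envelope check of the numbers above (a sketch only, assuming 32-bit floats, int32 topology, and the common 9-ring by 10-segment lat-long layout that yields 82 points and 90 faces):

```python
def uv_sphere_budget(rings=9, segments=10):
    """Approximate memory for a lat-long sphere mesh vs. an analytic sphere."""
    points = (rings - 1) * segments + 2            # ring points plus two poles
    tris = 2 * segments                            # fan triangles at the poles
    quads = (rings - 2) * segments                 # quads between the rings
    faces = tris + quads
    position_bytes = points * 3 * 4                # one float3 per point
    topology_bytes = (faces + tris * 3 + quads * 4) * 4  # face counts + indices, int32
    return points, faces, position_bytes + topology_bytes

points, faces, mesh_bytes = uv_sphere_budget()
analytic_bytes = 4 * 4                             # center float3 + radius float
print(points, faces, mesh_bytes)                   # 82 90 2704
print(mesh_bytes // 4)                             # ~676x a single 4-byte radius float
```

Normals, UVs and any other primvars a real mesh carries only widen the gap.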
It would be better to keep Hydra out of rendering pipelines until it becomes more controllable. It is full of less-needed features, yet it still does not support basic, useful (and memory-saving) geometry types. Since render delegates declare for themselves which geometry types they can handle, why are there only five types after six years of development, and why does even the worst-case approximation fail to produce correctly (and carefully) subdividing geometry?
Honestly, I see no reason for these critical simplifications. Yes, USD itself is a nice structural and data-transport technology, but Hydra introduces more headaches than it solves problems. I cannot walk into a production house and say: "folks, today we switch to Hydra; it lacks some light types, some geometry primitives, some render gizmos like clipping geometry or CSG, but in exchange it gives us... what, exactly?" The ability to see the same scene with the same settings in different renderers? What for? Aren't there already good, production-ready translators for those renderers? And if not, are they really production-ready renderers?
I have experience with the OSPRay and Cycles delegates for Hydra. The first is not a production renderer at all right now; the second could well be adapted to be production-ready, but passed through Hydra it receives inadequate data to render. I wonder: does Pixar really render spheres with such bloated data (essentially a default Maya polysphere), or do they simply never need them? Have they dropped analytical surfaces entirely in favor of subdivision surfaces? Do all other renderers do the same, Karma for example? Yes, Karma supports instancing with a good degree of freedom if we copy a lot of spheres, but then it loses the built-in parametrization of analytical primitives, which is sometimes a very useful feature. Moreover, even with instancing, ray tracing a subdivided 90-face mesh (with two 10-valence poles!) is nowhere near comparable in cost to intersecting a trivial analytic sphere.
So where are we after almost six years of (public) Hydra development? Yes, Solaris itself is a nice layout and light-dressing system, but why should it push these elaborate setups through a filter that cuts features so dramatically? Look at the latest RenderMan: it still has all those useful geometry primitives available (quadrics, NURBS, metaballs, CSG). But try to pass them through Hydra and you get only a very restricted subset. Do they really never use them? I see they eliminated rendering brickmaps as geometry, but that seems to be the one elimination in 25+ years of RIB evolution.
Karma looks fully committed to being Hydra-based. For the reasons above, I suspect it may be a less optimal solution than it could have been.
Sphere is not a sphere.
- JOEMI
- Member
- 128 posts
- Joined: July 2005
- Offline
- tamte
- Member
- 8833 posts
- Joined: July 2007
- Offline
I know nothing about how Hydra works, but I've also heard about a lot of limitations allegedly caused by it.
However, trying your example, I cannot make the sphere look faceted in Karma.
In OGL it gets faceted, which is common, and I'd at least expect the Level of Detail setting to work with it, which it doesn't; maybe there would need to be an equivalent one as an actual render property for OGL?
But in Karma, no matter how close I get, I see a perfect sphere, which makes me think it's a parametric sphere.
Tomas Slancik
FX Supervisor
Method Studios, NY
- jsmack
- Member
- 8043 posts
- Joined: Sept. 2011
- Offline
tamte
but in Karma, no matter how close I get, I see a perfect sphere, which makes me think it's a parametric sphere
In Karma it looks like a partially inflated balloon: you can see the pinching around the poles. It's also more like an oblate spheroid, because the control points chosen for the subd surface lie on the unit sphere, instead of being placed so as to produce a spherical limit surface.
You can get a perfect sphere using the Karma sphere procedural, though.
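A hedged 2D analogue of the effect jsmack describes: put the control points of a closed cubic B-spline on the unit circle and the limit curve lands strictly inside it. (The surface case uses Catmull-Clark limit masks rather than this curve stencil, but the shrink behaves the same way.)

```python
import math

def bspline_limit_radius(n):
    """Limit position of a closed cubic B-spline at a control point,
    for a regular n-gon inscribed in the unit circle.
    Curve limit mask: (p[i-1] + 4*p[i] + p[i+1]) / 6."""
    pts = [(math.cos(2 * math.pi * k / n), math.sin(2 * math.pi * k / n))
           for k in range(n)]
    x = (pts[-1][0] + 4 * pts[0][0] + pts[1][0]) / 6
    y = (pts[-1][1] + 4 * pts[0][1] + pts[1][1]) / 6
    return math.hypot(x, y)  # < 1: the limit curve undershoots the circle

for n in (4, 8, 16):
    print(n, round(bspline_limit_radius(n), 4))  # 4 0.6667 / 8 0.9024 / 16 0.9746
```

So a coarse control cage sampled on the unit sphere always produces an undersized, unevenly shrunken limit surface, which is exactly the "partially inflated balloon" look.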
Edited by jsmack - Oct. 28, 2021 20:44:59
- antc
- Member
- 338 posts
- Joined: Nov. 2013
- Offline
Depending on the renderer there are several ways to implement a sphere, so a delegate-specific adapter is probably what the devs intended. That said, it would be nice if HoudiniGL and Storm had some kind of (non-subdiv) adaptive tessellation (not that it would be cheaper).
Hydra is currently getting a major overhaul btw. There's some info on the usd-interest group.
Edited by antc - Oct. 28, 2021 21:29:25
- JOEMI
- Member
- 128 posts
- Joined: July 2005
- Offline
1. There are two kinds of delegates: render delegates, and scene delegates, which translate your geometry into Hydra. The most valuable scene delegate is the USD one, and it is what produces this defective geometry; you can find it in the sources. OK, the shape can be fixed by altering the point positions, but that doesn't dramatically change the memory footprint, or the rendering speed. The "use a procedural" route is the worst possible. What prevents me from writing a procedural that reads a USD file and builds the whole scene procedurally next time? What is Hydra needed for then? And what do I do for OTHER renderers, the OpenGL preview, and so on?
2. Nothing prevents your renderer from rendering subdivs adaptively, and I hope Karma does. But can you imagine how much memory is wasted during rendering? How much render time is spent just pushing and pulling excessive data, and on the excessive 10x10 matrix computations required for subdivision, instead of a trivial point-to-line distance test?
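For what it's worth, here is a minimal sketch of the analytic test being contrasted with subdivision: intersecting a ray with a mathematical sphere is one quadratic solve, versus BVH traversal plus per-face tests for a tessellated stand-in. (The function is illustrative, not any renderer's API.)

```python
import math

def ray_sphere(origin, direction, center, radius):
    """Closest positive hit t for a ray against an analytic sphere, or None.

    Solves |o + t*d - c|^2 = r^2: a single quadratic, no mesh,
    no acceleration structure, no per-triangle tests."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    dx, dy, dz = direction
    a = dx * dx + dy * dy + dz * dz
    b = 2.0 * (ox * dx + oy * dy + oz * dz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None                      # ray misses the sphere
    t = (-b - math.sqrt(disc)) / (2.0 * a)
    return t if t > 0.0 else None

print(ray_sphere((0, 0, -5), (0, 0, 1), (0, 0, 0), 1.0))  # 4.0
```

A 90-face mesh replaces this with dozens of triangle tests per ray, and still only approximates the surface.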
It is not about spheres. It is about "what for?". We have a famous Russian tale about "axe porridge". Right now Hydra looks like that axe: you will spend time setting up nicely light-dressed scenes in Solaris, never suspecting that they could render much faster without Hydra's assistance.
I suspect Hydra is part of something bigger, mostly hidden from our eyes for now. It is very kind of Pixar's R&D teams to share some ideas about integration, but for now the picture looks incomplete.
Karma is built on Hydra, so it lacks analytical geometry too. If your DCC relies heavily on NURBS or T-splines, like Rhino for example, why should it be impossible to get good results through Hydra even with renderers that fully support those geometry types? You may insist on converting analytical geometry to subdivision approximations, but that is not always possible (or reasonable!). Patches matter, trimmed quadrics too. Even hierarchical subdivision meshes and blobbies fall outside these primitives (mesh/subd, points, curves, volumes) and their compositions. Yes, this is not a Houdini story, but then why is Hydra a Houdini story, if it has such unexplainable limitations? Should renderer vendors really trim their capabilities twice, once to what Houdini supports (not such a heavy cut, where limits exist at all), and then again to what Hydra supports? There are SOHO and the ROPs; those layers are much more adaptive and mature as rendering adapters. Yes, they could probably be modernized. But honestly, I wonder why SESI bothers with Hydra at all and doesn't separate Karma from it. RenderMan itself stays separate, and its delegate is not the one general hub for the renderer. Maybe it is testing and research, possibly. Those damned sphere primitives were realized in 18.5 as real geodesic polygons, not as a USD primitive, which suggests SESI's developers probably knew about this issue.
Anything that flies
- antc
- Member
- 338 posts
- Joined: Nov. 2013
- Offline
Sorry, I should have been clearer about the adaptive sphere: I just meant that in OpenGL it would be nice to have a tessellation-shader-based approach.
Anyway, I hear what you're saying, and it is somewhat frustrating, but I'd say the lack of quadrics, NURBS etc. is simply the feature animation and VFX industries prioritizing their needs to date. If a CAD vendor (the makers of Rhino, say) came along and made improvements to NURBS support, I'm sure the contributions would be welcomed. Don't forget that one of the reasons for open sourcing was that the scope is too large for Pixar to tackle single-handedly.
As far as Hydra goes, its roots are in feeding OpenGL and it currently shows. The early stages of development were largely focused on the (non-trivial) data transformation and buffer management tech needed to fully utilize modern graphics cards. At that time path tracing and final quality production rendering were further down the road. Fast forward to today however and that's very much a priority for many studios and vendors. The current overhaul/rewrite is based on both lessons learned and feedback from the community. That's a healthy thing and personally I think the most important goal is active development, so that the technology keeps moving forward.
The "Why Hydra" question is very valid too. I guess the main draw is a unified rendering experience across DCCs. Personally I think bridge products have a somewhat checkered history, and juggling a unique set of bugs and constraints for each DCC isn't a whole lot of fun. Users lucky enough to stay in one DCC all day maybe don't care so much. But it's rare that work is generated in a single app these days, which is why I think there's so much interest (and active development!).
Edited by antc - Oct. 29, 2021 12:36:32
- JOEMI
- Member
- 128 posts
- Joined: July 2005
- Offline
So it was obsolete before it was born. OK, then why does SESI pay so much attention to it? It is a story entirely independent of USD, and nothing forced them to build their newest, most modern renderer on such strange and obviously restrictive rendering mechanics. I know of about six published render delegates, and just one scene delegate, the USD delegate. To be fair, no more are required, since USD really does cover most needs as a data storage, interpretation and access technology. Eighteen years ago we developed an XML-based group of technologies for the same purposes, producing and transforming scenes for RenderMan, so I was very glad to later find many similar approaches and concepts in USD. Now I use USD as a data exchange format for a lot of research projects, and even the usdviewq widget for analyzing results (and I spent a lot of time understanding why I saw skewed geometry; it was not a dynamics engine failure). And I was disappointed to find that if I want to push a more complex structure to the renderer, tetrahedral meshes for example, I have to go into the USD sources to implement an adapter, even though I had already implemented the type as a USD schema. And even then the renderer will not receive the same data I dealt with in the scene.
And last: Hydra is a renderer proxy layer. But why have we decided that rendering just means converting geometry into raster buffers? What about vector layers? Is physical simulation a rendering process too? Why not? Hydra would have far more practical potential as an analog of XSLT than as an analog of WebGL.
Very interesting: since NVIDIA uses USD so widely in Omniverse, do they pay as much attention to Hydra too?
Anything that flies
- antc
- Member
- 338 posts
- Joined: Nov. 2013
- Offline
I'm not quite sure what you're referring to as "obsolete before it was born" but in any case I don't see any aspect of the tech that's obsolete at this point. The new architecture will of course replace some of the current interfaces, making them legacy and eventually obsolete. But in my opinion that's the whole point of evolving technology and moving it forward. If not what's the better alternative?
- jason_iversen
- Member
- 12670 posts
- Joined: July 2005
- Offline
Yes, something under such active development can't really be called "obsolete". It's probably fair to say the initial offering was more geared towards realtime rendering and made some assumptions that required compromises and workarounds, but those are being addressed in the 2.0 proposal. We've seen some of that work appear in USD 21.11 [github.com] already. There is a moderate amount of churn involved in maintaining a Hydra delegate, but Pixar plan to support backwards-compatible interfaces for a version or two, giving developers some time to adapt to the newer API.
As for alternatives, I'm only aware of OpenNSI [documentation.3delightcloud.com] in this space, which has a good reputation among developers and 3delight users but narrow industry adoption, as far as I can tell. We won't mention RIB [renderman.pixar.com] here.
Edited by jason_iversen - Oct. 30, 2021 14:24:48
Jason Iversen, Technology Supervisor & FX Pipeline/R+D Lead @ Weta FX
also, http://www.odforce.net [www.odforce.net]
- JOEMI
- Member
- 128 posts
- Joined: July 2005
- Offline
You are right that it is still developing. But it has been developing for a considerable amount of time, and it still hasn't demonstrated a convincing raison d'être.
It takes roughly two months for a qualified developer to implement a renderer add-on that translates USD data into the renderer's API calls, with a much wider range of features supported. All production renderers' APIs share similar patterns and subjects of processing. If you need task-oriented extensions, fine, you implement them too, just as you would for a render delegate. OpenNSI demonstrates a modern approach and doesn't offer itself as the lowest common denominator.
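To illustrate the translator route (a schematic sketch only; the prim dicts and `emit_*` names are invented here, not any real renderer's API), the core of a direct USD-to-renderer bridge is a dispatch table keyed on prim type, which is exactly where an analytic sphere can be passed through intact instead of being tessellated:

```python
# Schematic translator: walk a (mocked) scene description and dispatch
# each prim to a renderer-specific emit function.

def emit_sphere(prim, out):
    # Analytic sphere survives as-is: one radius, no mesh.
    out.append(f"renderer.sphere(radius={prim['radius']})")

def emit_mesh(prim, out):
    out.append(f"renderer.mesh(points={len(prim['points'])})")

EMITTERS = {"Sphere": emit_sphere, "Mesh": emit_mesh}

def translate(scene):
    calls = []
    for prim in scene:
        emitter = EMITTERS.get(prim["type"])
        if emitter is None:
            raise ValueError(f"unsupported prim type: {prim['type']}")
        emitter(prim, calls)
    return calls

scene = [
    {"type": "Sphere", "radius": 1.0},
    {"type": "Mesh", "points": [(0, 0, 0), (1, 0, 0), (0, 1, 0)]},
]
print(translate(scene))
```

Extending such a bridge for a new prim type means adding one emitter, not patching a middle layer that has already discarded the information.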
Hydra seems to me a strange offshoot of the RIB-filtering idea, except that we used Rifs to extend content interpretation, not to cut it down dramatically.
Very interesting: when Pixar themselves get a USD file for rendering, do they really use Hydra? They advertise the feature. Do they really cut everything that doesn't fit the Hydra pipeline from THEIR renderer? And what do their customers say about it?
All the noise I'm making is not really about the defective geometry; I hope that gets fixed some day (though for now H19 renders squashed 90-faced eggs instead of spheres). I am preparing for a possible full production pipeline, and I have to explain why "we can/cannot use such a modern and fast renderer", and I find I have no calming words. Should we adopt the Solaris workflow, or go pray to Katana (yes, it uses Hydra for preview only)? Or just spend the two months on a translator? (Tamerlane made his sons learn the languages of the provinces they ruled, so that translators could not lie to them.) What else, and at what worst possible moment, will I discover that something always known to work effectively and robustly is "just not implemented yet"?
I mentioned NVIDIA before; I think that explains the imbalance between USD and Hydra development. NVIDIA uses USD and drives its development, but they were never interested in Hydra per se: the RTX driver uses it, but they changed the sources, so it is incompatible with the main development branch.
Let there be a Hydra ZOO! Why not another one for SESI too? I could develop yet another, if someone asked.
Maybe it is simpler to just feed the one head we actually need?
(Very interesting: are the spheres squashed in the Omniverse marble demo too, or did nobody notice? Or did they see it, warn no one, and simply stop using this primitive?)
Edited by JOEMI - Nov. 1, 2021 02:34:37
Anything that flies