OSX Graphics Update and ATI Hardware

User Avatar
Staff
5212 posts
Joined: July 2005
Offline
If you are using OSX with an ATI card, do not install the Graphics Update released today. It causes problems with wireframe drawing, and High Quality Lighting no longer works. If you have already installed it, you can work around the issues by setting the environment variable HOUDINI_OGL_DISABLE_FBO. This disables many of the viewport effects, but the viewport will still display properly at normal quality.
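
If it helps, here is a minimal sketch of one way to set the variable before launch (this launcher is purely hypothetical; it assumes houdini is on your PATH, and exporting the variable in your shell works just as well):

    // Hypothetical POSIX launcher: set the workaround variable before
    // Houdini initializes OpenGL, then replace this process with Houdini.
    #include <cstdlib>
    #include <unistd.h>

    int main()
    {
        setenv("HOUDINI_OGL_DISABLE_FBO", "1", 1 /* overwrite */);
        execlp("houdini", "houdini", (char *)NULL);  // assumes houdini is on PATH
        return 1;  // reached only if exec fails
    }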

We're looking into the problem in the meantime.

Edit: If you have Nvidia graphics, this update delivers decent performance improvements without any of the issues listed above, so it is recommended that you install it.
User Avatar
Staff
5212 posts
Joined: July 2005
Offline
I have worked around the problem, which appears to be an issue with using textures generated by rendering to FBOs. The solution was to read these textures back into memory before using them, which should be completely unnecessary. I will notify Apple of this issue.
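
For the curious, the workaround is along these lines (a simplified sketch, not the actual Houdini code):

    // Simplified sketch of the readback workaround: pull the FBO-rendered
    // texture back to system memory, then re-upload it so the driver treats
    // it as ordinary texel data. None of this should be necessary.
    #include <OpenGL/gl.h>
    #include <vector>

    void refreshFBOTexture(GLuint tex, int width, int height)
    {
        std::vector<GLubyte> pixels(width * height * 4);
        glBindTexture(GL_TEXTURE_2D, tex);
        glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_UNSIGNED_BYTE, &pixels[0]);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, &pixels[0]);
    }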

In the meantime, Houdini 11.0.479 will work with this graphics update, though Scene AA + HQ Lighting appears overly bright. I still don't recommend the graphics update unless you absolutely need it for another application.
User Avatar
Member
191 posts
Joined: August 2008
Offline
So do you wait for feedback from Apple before implementing a solution? Doesn't that take a long time?
User Avatar
Staff
5212 posts
Joined: July 2005
Offline
I either work around the problem by re-coding shaders or Houdini itself, or I disable the feature that triggers it. The former generally results in either a performance loss or a higher maintenance cost for us (more codepaths to maintain), so it's only done for features you generally can't work without.

In this particular case, a workaround with a performance hit was implemented, since the number of features that would otherwise have been disabled was prohibitively high.

The issue has also been reported to Apple so that they can fix the problem in the next graphics update (or so), allowing us to apply the workaround only to specific driver versions.
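
As a rough sketch, gating a workaround by driver looks something like this (the strings matched below are placeholders, not our actual checks):

    // Rough sketch: query the GL vendor/version strings and enable the
    // slow path only for the affected driver. Matched strings are
    // placeholders, not the real checks.
    #include <OpenGL/gl.h>
    #include <cstring>

    bool needsFBOReadbackWorkaround()
    {
        const char *vendor  = (const char *)glGetString(GL_VENDOR);
        const char *version = (const char *)glGetString(GL_VERSION);
        return vendor && version
            && std::strstr(vendor, "ATI")
            && std::strstr(version, "2.1 ATI");
    }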

This isn't limited to just Apple; we've recently had to do the same with the Nvidia 256-series drivers and with older ATI drivers. Ideally, OpenGL drivers would just work, but with OpenGL's rapid development lately there have been a few hiccups.
User Avatar
Member
50 posts
Joined: September 2006
Offline
This reminds me of a question I've wanted to ask for some time. Why does it seem so difficult to write a viewport renderer that is fast, stable, and good-looking relative to today's 3D engines, when there are so many games that look very good, run very fast, and don't seem to suffer from all these bugs?
For example, H11 runs rather slowly on my MacBook Pro, and if I enable the new higher-quality settings, it basically doesn't work, or works very slowly, yet 3D games run perfectly fine. I haven't tried other 3D apps, so I don't know their status.
Of course, I only ask this question out of technical interest, not to compare or upset anyone.

Thanks.
User Avatar
Staff
5212 posts
Joined: July 2005
Offline
No offense taken! It's a bit of a complex answer, but in a nutshell, it's like comparing apples to oranges.

First, Houdini's viewport can certainly use more optimization and we're working on that. From what I've heard, Maya is doing the same thing.

Second, HQ Lighting is not intended for general modeling. It's a very accurate shading mode meant to emulate the results from mantra at the expense of fluid interactivity: you get several updates per second, as opposed to the several seconds per update that IPR provides. It is useful when setting up lighting and doing flipbooks. It also requires a substantial graphics card to back it up, like a high-end consumer card or a mid-range or better pro card.

The heart of the matter, though, is the basic difference between a game and a viewport. I'm not an expert on game engines/development, so please take this with a grain of salt. Assets in a game are strictly budgeted in terms of polygon count and texture size, and models are highly optimized into triangle strips (which you can do with the Tristrip SOP). Content creation apps, on the other hand, can't realistically limit your models or textures, nor expect you to spend time optimizing them for display speed. Optimization is often a time-consuming process, so it isn't very appropriate for modeling, where the model is continually changing.

Viewports also use a lot of drawing styles that games do not. Games are strictly triangle-based - they rarely (if ever) draw lines. A viewport generally uses a lot of lines - for handles, guides, outlines, etc. Many consumer cards don't support HW-accelerated line drawing.

The other thing that games generally do is load all the models, textures, and environment onto the GPU up front, to minimize PCI-Express traffic from system memory to VRAM. In wide-open exploration-type games, you can see the effect of object load-in as slight lags in FPS as objects appear. Conversely, when a user is editing a model, the model (or parts of it) is continually being sent to the GPU. This is one of the major bottlenecks of any modeling program (and one we're looking at closely).
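
In OpenGL terms, that difference shows up in the buffer usage hints. A small sketch (illustrative only, not Houdini's actual code):

    // Illustrative only: the usage hint tells the driver where to place
    // geometry. A game uploads once (GL_STATIC_DRAW); a model being edited
    // is re-sent on every change, which is closer to GL_STREAM_DRAW.
    #include <OpenGL/gl.h>

    void uploadGeometry(GLuint vbo, const float *verts, GLsizeiptr nbytes,
                        bool editing)
    {
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, nbytes, verts,
                     editing ? GL_STREAM_DRAW    // re-specified continually
                             : GL_STATIC_DRAW);  // loaded once, drawn many times
    }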

Finally, a level in a game is also highly optimized with LOD and tree-accelerated structures that determine object visibility. It takes a lot of time to generate these structures, which wouldn't be feasible in an environment where anything can change at any time.

One thing I should point out is that developing a game engine is very hard as well; multiple games often share the same engine because of the effort and cost involved (e.g., Unreal, CryEngine, id Tech 4). Drivers are continually being patched for bugs and performance improvements, so it isn't as if these hiccups don't occur in the game space as well. The release notes that accompany a driver update are often chock-full of fixes for games.

OpenGL performance on OSX is improving, especially with the new graphics update. Previously it was only about half the performance of Windows; now it's almost 90% of the Windows performance (at least in a Valve benchmark I saw for Team Fortress 2). Hopefully they will continue implementing performance-enhancing OpenGL extensions as well. OpenGL 3.3 and 4.0/4.1 introduced a lot of great features which should help workstation-class applications in terms of performance.

Again, we're continuing to work on the viewport, especially in terms of performance, and there may be some techniques we can ‘borrow’ from game engines. Hope that helps a bit!
Edited by - August 21, 2010 12:50:16
User Avatar
Member
50 posts
Joined: September 2006
Offline
Thanks, this is really interesting. I also guess that the number of people developing a good game engine is much larger than the number developing a viewport renderer, which makes sense.

I didn't know about the line-drawing issue; it's a bit funny. So a lot of the problems come from bugs in OpenGL implementations that game engines generally just don't run into?
User Avatar
Staff
5212 posts
Joined: July 2005
Offline
Well, a lot of games are DirectX-based as well, which limits them to Windows and to a rigid feature set (DX9, DX10, DX11). I'm sure Microsoft tests any given DirectX driver implementation pretty heavily (WHQL).

OpenGL is more open in that a core version can be requested (like 3.2) and then extensions can be used for features present in higher core versions (along with those not in any core version). Currently Apple is at OpenGL 2.1 (almost 3.0, just missing GLSL 1.30), as they write their own OpenGL implementation. On other platforms, the OpenGL drivers are written by Nvidia (very good), ATI (pretty good), and Intel (terrible - don't even try), and Nvidia/ATI are up to OpenGL 4.1 (the latest).
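
For example, under a 2.1 context you probe for newer features through the extension string. A minimal (and deliberately naive) sketch:

    // Minimal sketch of extension probing under GL 2.1. The substring match
    // is naive (a real check should match whole tokens), but it shows the
    // idea: request core 2.1, then use whatever extensions are present.
    #include <OpenGL/gl.h>
    #include <cstring>

    bool hasExtension(const char *name)
    {
        const char *exts = (const char *)glGetString(GL_EXTENSIONS);
        return exts && std::strstr(exts, name);
    }

    // e.g. hasExtension("GL_EXT_framebuffer_object")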

But generally, yes, workstation applications use a lot of features that games don't. Line drawing (stippled or thick lines), geometry picking, and multiple views are a few that come to mind. Some consumer cards just aren't good at these things, which is why we recommend pro cards. Unfortunately, OSX has a real lack of pro hardware right now (I think the only option is the Quadro FX 4800). Hopefully their consumer drivers will either support these features well, or they'll start offering pro hardware again.