Intel Dual/Quad Core and Houdini

User Avatar
Member
4140 posts
Joined: July 2005
Offline
Sorry for not keeping up with cpu technology *every freakin' day* , but I'm just trying to make some decisions about hardware. I'd like to go Intel, but there's dual core, quad core, now apparently “intel core 2 Dual” - hell, I'm lost.

Is there a payoff going with a particular flavour for mantra(render node) and workstation(houdini/mantra)? I get the feeling that with H8.2 mantra speedups won't be that significant, but I'll see something with Houdini (with all those child processes being spawned). This would mean one processor would be better for a farm and another for a workstation.

Any thoughts appreciated. I've decided to not go AMD for now. We're all Linux here, with NVidia graphics cards.

Cheers,

J.C.
John Coldrick
User Avatar
Member
1192 posts
Joined: July 2005
Offline
Intel Core 2 Duo is generally targeted at desktop PCs - that is, it's a dual core processor you'd buy for a gaming system. There is also a quad core variant, which is roughly equivalent to having 4 processors.
For a workstation there are the Intel Xeons, which again come in dual core (5100 series) and quad core (5300 series) configurations, but can also be used in multi-socket systems. I just got a Supermicro workstation with 2 Intel dual core Xeons (4 cores in total), and it runs very well. I can't test Mantra's multi-host rendering speed under Apprentice, though. Of course, you can get two quad core Xeons and have 8 cores, the equivalent of 8 processors in your workstation. Digital Fusion would probably run very happily on such a system.

Dragos
Dragos Stefan
producer + director @ www.dsg.ro
www.dragosstefan.ro
User Avatar
Member
483 posts
Joined: July 2005
Offline
Here's some information that might help you out:

Core 2 is the new architecture. (They started with Core, but that was laptop-only.) So you can have a Core 2 Solo, Core 2 Duo, and Core 2 Quad - the last word indicates the number of cores.

Core 2 is an awesome microarchitecture, and currently is much faster than AMD's offerings. (This won't stay the case for long, which is a great thing for us users - more competition is more better.)

As for quad vs duo - the quad is going to have a bit less memory bandwidth per core because of the limited number of pins going to the motherboard, but will be plenty powerful. It's also incredibly expensive. Currently the optimal price point is probably one or two dual-core processors (which can later be upgraded to quad-core processors if the need arises).
User Avatar
Member
519 posts
Joined:
Offline
Maybe this link helps a bit (or probably gets you more confused)

http://techreport.com/reviews/2007q1/cpus/index.x?pg=1

It's another CPU comparison. It seems that for rendering the AMD processors do very well; for other stuff Intel has the edge. Core 2 Duos are the latest desktop processors, Xeons are for dual CPU systems (4 cores, or 2x 4 cores, or 3x 3.33 cores… I dunno, it *IS* confusing these days).
User Avatar
Member
4140 posts
Joined: July 2005
Offline
Thanks for the thoughts, guys. My favourite part of those review pages is the “Conclusions” page.

Cheers,

J.C.
John Coldrick
User Avatar
Member
321 posts
Joined: July 2005
Offline
Joe
As for quad vs duo - the quad is going to have a bit less memory bandwidth per core because of the limited number of pins going to the motherboard, but will be plenty powerful. It's also incredibly expensive. Currently the optimal price point is probably one or two dual-core processors (which can later be upgraded to quad-core processors if the need arises).
Have you checked out these monsters? 16 cores in 1U
http://www.supermicro.com/products/nfo/1UTwin.cfm [supermicro.com]
Essentially 2 computers in one: each half has 2 sockets that each take a quad-core chip, a 1333 FSB per socket, and up to 16GB of RAM. A fully populated, 16-core 2.6GHz 32GB machine is somewhere around US$13k. Per core, that seems pretty decent.
Antoine Durr
Floq FX
antoine@floqfx.com
_________________
User Avatar
Member
4140 posts
Joined: July 2005
Offline
I'm not sure I trust the estimated vs. real-world results of the quad-core CPUs. One thing I do know, especially when you're talking rendering: one core doth not equal one CPU - there are some serious bottlenecks in there.

Add to that that you're always paying top dollar for the latest and greatest, to say nothing of potential new-tech issues, and it's the sort of thing I tend to avoid myself.

It's cool and all, and I might love a desktop based around it if I was making too much money, but I like a value-based solution over a sexy one. Has to do with how I was brought up (trudging 10 miles to school every day in a summer snow storm, etc.).

Cheers,

J.C.
John Coldrick
User Avatar
Member
519 posts
Joined:
Offline
and I was just about to post this link: http://www.tyanpsc.com/

Blame it on Antoine, he started posting links to computers to drool over…

About rendering, I think you are right about dual cores vs. more CPUs. Two computers, each with one CPU, are probably faster than one dual CPU system (read that some time ago in an article somewhere on the net - the conclusions page only, wink). In the article I posted you can see that the Athlons do very well when it comes to rendering. AMD still has the fastest access to memory, which is probably what you want for rendering - so memory bandwidth is a main factor when it comes to speed.

About buying new computer “stuff”, I think your value-based approach makes perfect sense.
User Avatar
Member
321 posts
Joined: July 2005
Offline
Pagefan
About buying new computer “stuff”, I think your value-based approach makes perfect sense.
I don't have enough experience selecting hardware to know what issues I'll hit. It also depends on whether you're going for fastest CPU throughput at any cost, or total processing power per amp and BTU over the next 18 months. And then when you factor in per-host vs. per-thread software licensing costs, it all gets *really* confusing!
Antoine Durr
Floq FX
antoine@floqfx.com
_________________
User Avatar
Staff
5285 posts
Joined: July 2005
Offline
Quad cores are great where you need instant results (like IPR for a lighter). You can do single cores for render machines, but they're harder to find these days, especially from Intel (you'd likely get an old Pentium 4).

Multi-cores give you more CPUs per platform, so hardware maintenance is a little easier (a half or quarter as many systems to administer for the same CPU count) and power draw is lower (even at idle, motherboards draw a base amount, usually somewhere around 100W-150W; this doesn't change much with a dual or quad core). On the other hand, you need to load up on memory to support the dual or quad cores, and it's hard to find cheaper motherboards that allow you to stack up more than 4GB.

There is good news coming up for CPUs though - when AMD's Barcelona quad core debuts (May?), Intel has planned to slash prices on its quad cores by 50% (rumors of a 2.4GHz quad core selling for $250USD). At that price, you can almost afford the fact that you're not getting a true 4x speedup.

Also, Intel is debuting several 1333MHz FSB dual core CPUs, and 1333FSB quad cores won't be far behind (currently, Intel CPUs are using a max of 1066FSB). This will improve memory bandwidth a bit, offsetting the multi-CPU-on-a-die penalty.

Hope that helps a bit…
User Avatar
Member
321 posts
Joined: July 2005
Offline
twod
Multi-cores give you more CPUs per platform, so hardware maintenance is a little easier (a half or quarter as many systems to administer for the same CPU count) and power draw is lower (even at idle, motherboards draw a base amount, usually somewhere around 100W-150W; this doesn't change much with a dual or quad core).
Yeah, the power draw (and associated A/C cost) per core is pretty high. My spreadsheet says that 8 dual socket single core machines running for 14 months cost the same as a new 1U 16cpu box.
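For anyone curious, a back-of-the-envelope version of that kind of spreadsheet might look like the sketch below. The wattage, electricity rate, and cooling overhead are all made-up assumptions for illustration, not Antoine's actual figures:

```python
def running_cost(n_boxes, watts_per_box, months,
                 usd_per_kwh=0.15, cooling_factor=1.5):
    # Electricity plus A/C: cooling_factor folds in the rough rule
    # that each watt of compute needs about half a watt of cooling.
    hours = months * 30 * 24
    kwh = n_boxes * (watts_per_box / 1000) * hours * cooling_factor
    return kwh * usd_per_kwh

# e.g. 8 dual-socket single-core boxes at an assumed 400W each, 14 months
print(f"${running_cost(8, 400, 14):,.0f}")
```

Plug in your real power draw, utility rate, and cooling numbers and you can find the break-even point against the price of a replacement box.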
twod
There is good news coming up for CPUs though - when AMD's Barcelona quad core debuts (May?), Intel has planned to slash prices on its quad cores by 50% (rumors of a 2.4GHz quad core selling for $250USD). At that price, you can almost afford the fact that you're not getting a true 4x speedup.

Also, Intel is debuting several 1333MHz FSB dual core CPUs, and 1333FSB quad cores won't be far behind (currently, Intel CPUs are using a max of 1066FSB). This will improve memory bandwidth a bit, offsetting the multi-CPU-on-a-die penalty.

Hope that helps a bit…
I thought that the 5160 was a dual core 3GHz 1333FSB chip. I know the quad-cores are 1333FSB (though 1066 is also available, iirc).
Antoine Durr
Floq FX
antoine@floqfx.com
_________________
User Avatar
Staff
5285 posts
Joined: July 2005
Offline
I thought that the 5160 was a dual core 3GHz 1333FSB chip. I know the quad-cores are 1333FSB (though 1066 is also available, iirc).

Ah, I was looking at the consumer versions (E6600, Q6700, etc), not the Xeons. I don't see a huge advantage in going with the Xeons, other than you can get them running at 1333FSB right now, and, if you want to use a 2-socket MB, you'll need to use a Xeon. In the past the Xeons were clear winners over their consumer counterparts, now the situation is a little more muddy.

The Xeon platform requires FB-DIMMs, which unfortunately aren't a great memory technology - they're sort of a stop-gap solution until memory density increases again (ie, good luck finding a 2GB DIMM – not a DIMM-kit 2x1GB). In order to get good memory bandwidth, you need to use at least 4 FB-DIMMs, but as you increase the number of FB-DIMMs in your system, your memory latency unfortunately goes up. It also runs hot because of the 3GHz chip that does the serial-to-parallel conversion, and it's fairly expensive (~50% more than equiv. DDR2). In benchmarks I've seen, this memory technology holds back the 1333FSB.

DDR2, on the other hand, is cheaper and generally faster. Its main limitation is that it is hard to stock a machine with more than 4GB because most motherboards only offer 4 slots. Of course, you can go to 8GB with FB-DIMMs, but you'll need 8 1GB FB-DIMMs, and your memory latency will be quite long (such that DDR2 will now beat it hands down). But, unless you're going 64bit, 4GB is okay for now. Also, you don't get ECC with most DDR2 consumer motherboards.

EDIT:
Actually, you *can* get 2GB DDR2 modules, though they are rare and expensive:

http://www.bytewizecomputers.com/products/7/10/382/13895 [bytewizecomputers.com]

8GB of DDR2-800 would cost almost $1300, vs. 8GB of FB-DIMM-667 @ $138 x 8 = $1104 (both Kingston). But for that ~$200 difference, you'd get much better memory performance with DDR2.
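As a quick sanity check on that comparison (the dollar figures are just the 2007 quotes above, assuming ~$325 per 2GB DDR2 module to hit the quoted ~$1300):

```python
# 2007 street prices quoted above - illustrative only.
ddr2_total = 4 * 325      # four 2GB DDR2-800 modules for 8GB
fbdimm_total = 8 * 138    # eight 1GB FB-DIMM-667 modules for 8GB

print(f"DDR2: ${ddr2_total}, FB-DIMM: ${fbdimm_total}, "
      f"DDR2 premium: ${ddr2_total - fbdimm_total}")
```

So the DDR2 premium for 8GB works out to just under $200.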
Edited by - March 25, 2007 18:26:18
User Avatar
Member
194 posts
Joined:
Offline
I like anandtech.com for reviews. The Core 2 Duo E6400 and E6600 seem to be the best deal. A good PSU, heatsink, mobo and good memory will give you 50+% overclocking ability right off the bat. E.g. an E6400 goes from 2.13GHz to 3.2GHz without a hitch.
User Avatar
Staff
5285 posts
Joined: July 2005
Offline
will give you 50+% overclocking ability right off the bat.

True, but you'll be shortening the lifetime of the components, and risking stability. I'd have no problems doing that for my gaming rig to play Doom3 at 200fps, but I'd think twice about overclocking for a workstation. Downtime is *way* more expensive
User Avatar
Member
194 posts
Joined:
Offline
Components become obsolete so quickly these days that it's hard to say whether you really shorten the life of a system, which you'll want to replace in 6 months. Of course, if that's your only system (or your workhorse system) and you use it for paid work, as opposed to testing how much faster renders are, it doesn't make sense to push it.

twod
will give you 50+% overclocking ability right off the bat.

True, but you'll be shortening the lifetime of the components, and risking stability. I'd have no problems doing that for my gaming rig to play Doom3 at 200fps, but I'd think twice about overclocking for a workstation. Downtime is *way* more expensive
User Avatar
Member
4140 posts
Joined: July 2005
Offline
Components become obsolete so quickly these days that it's hard to say whether you really shorten the life of a system, which you'll want to replace in 6 months.

:!:

And where can I go where they replace their workstations twice a year?

I'm with twod - overclocking is for games. I want reliability at work, even if the speed is less.

Cheers,

J.C.
John Coldrick
User Avatar
Member
194 posts
Joined:
Offline
I know some artists who get a new workstation even more often. They're probably in the minority - dunno, because I'm not earning my living with 3D yet.

JColdrick
Components become obsolete so quickly these days that it's hard to say whether you really shorten the life of a system, which you'll want to replace in 6 months.

:!:

And where can I go where they replace their workstations twice a year?

I'm with twod - overclocking is for games. I want reliability at work, even if the speed is less.

Cheers,

J.C.
User Avatar
Member
321 posts
Joined: July 2005
Offline
JColdrick
Components become obsolete so quickly these days that it's hard to say whether you really shorten the life of a system, which you'll want to replace in 6 months.

:!:

And where can I go where they replace their workstations twice a year?
More like once every two years! And even then, if it's a rack mounted computer, you keep it in the farm until the new ones are substantially more than 2x faster.
Antoine Durr
Floq FX
antoine@floqfx.com
_________________
User Avatar
Member
321 posts
Joined: July 2005
Offline
twod
Ah, I was looking at the consumer versions (E6600, Q6700, etc), not the Xeons. I don't see a huge advantage in going with the Xeons, other than you can get them running at 1333FSB right now, and, if you want to use a 2-socket MB, you'll need to use a Xeon. In the past the Xeons were clear winners over their consumer counterparts, now the situation is a little more muddy.

AFAIK, for 1U rack mounted computers, Xeons are the only thing available. I have no idea about deskside workstations.

twod
The Xeon platform requires FB-DIMMs, which unfortunately aren't a great memory technology - they're sort of a stop-gap solution until memory density increases again (ie, good luck finding a 2GB DIMM – not a DIMM-kit 2x1GB). In order to get good memory bandwidth, you need to use at least 4 FB-DIMMs, but as you increase the number of FB-DIMMs in your system, your memory latency unfortunately goes up. It also runs hot because of the 3GHz chip that does the serial-to-parallel conversion, and it's fairly expensive (~50% more than equiv. DDR2). In benchmarks I've seen, this memory technology holds back the 1333FSB.

DDR2, on the other hand, is cheaper and generally faster. Its main limitation is that it is hard to stock a machine with more than 4GB because most motherboards only offer 4 slots. Of course, you can go to 8GB with FB-DIMMs, but you'll need 8 1GB FB-DIMMs, and your memory latency will be quite long (such that DDR2 will now beat it hands down). But, unless you're going 64bit, 4GB is okay for now. Also, you don't get ECC with most DDR2 consumer motherboards.
Thanks for the insight. I'm pretty new to this, so all this kind of info is great. For the work I've been doing, 2 gigs/core is pretty much the minimum. I'm running dual socket dual cores with 4 gigs, and I top out regularly, and I'm not doing anything that complex (yet). So an 8 gig/system limitation's not a good thing.

All this talk about memory access seems to reinforce the usefulness of the AMD chips.
Antoine Durr
Floq FX
antoine@floqfx.com
_________________
User Avatar
Staff
5285 posts
Joined: July 2005
Offline
All this talk about memory access seems to reinforce the usefulness of the AMD chips.

Well, AMD chips have the same problem as Intel when it comes to memory capacity - the parallel DDR2 interface can't be extended to more than 4 slots due to signaling issues (it was even 3 for a while). FB-DIMMs use a serial interface, which allows more memory slots, which IMO is their only advantage.

One important point about AMD memory access that is completely hidden - AMD CPUs with odd multipliers actually underclock your memory. To avoid this, buy AMD CPUs with speeds divisible by 400MHz (2.0, 2.4, 2.8 ).

The memory interface is onboard the AMD CPU, which means it runs at the same speed as the CPU and uses a memory divider to derive the memory clock. The dividers must always be whole numbers, and the CPU picks the smallest divider such that the memory's rated speed is never exceeded. So, for DDR2-800, you get:
CPU     Speed    Mem Div   Mem Speed
3800+   2.0GHz   5         400 (DDR2-800)
4000+   2.1GHz   6         350 (DDR2-700)  -12.5%
4200+   2.2GHz   6         366 (DDR2-733)  -9%
4400+   2.3GHz   6         383 (DDR2-766)  -4%
4600+   2.4GHz   6         400 (DDR2-800)
4800+   2.5GHz   7         357 (DDR2-714)  -11%
5000+   2.6GHz   7         371 (DDR2-742)  -7%
5400+   2.8GHz   7         400 (DDR2-800)
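The divider rule behind that table can be sketched in a few lines of Python (a toy illustration - `amd_mem_clock` is a made-up name, and 400 MHz is the DDR2-800 base clock):

```python
import math

def amd_mem_clock(cpu_mhz, rated_mem_mhz=400):
    # K8-style divider: the smallest whole number that keeps
    # cpu_mhz / divider at or below the DIMM's rated clock.
    divider = math.ceil(cpu_mhz / rated_mem_mhz)
    return divider, cpu_mhz // divider

for cpu_mhz in (2000, 2100, 2200, 2300, 2400, 2500, 2600, 2800):
    div, mem = amd_mem_clock(cpu_mhz)
    print(f"{cpu_mhz / 1000:.1f}GHz: divider {div}, memory {mem}MHz (DDR2-{2 * mem})")
```

Running it reproduces the table: only the 2.0, 2.4, and 2.8GHz parts land on the full 400MHz (DDR2-800).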

So, the situation with AMD is far from clear. However, AMD CPUs generally have excellent memory latency and bandwidth due to their integrated memory controller, so memory speed isn't the entire picture. But it does point to the 4600+, 5400+, or Opteron 1220 as the balance points between CPU and memory speed.

Edit: changed 2.8) to 2.8 ). Stupid smileys!