Just forked out for a new rig with a fast processor on board? Then Nvidia has some very bad news for you. Your PC is "obscenely" imbalanced thanks to an overpriced, underperforming CPU - probably courtesy of Intel.
It's just the latest salvo in the burgeoning war of words between Nvidia and Intel this year. But what exactly is Nvidia getting at? Talking to TechRadar earlier this week, Nvidia's VP of Content Relations Roy Taylor outlined a developing strategy for leveraging Nvidia graphics technology to accelerate a wide range of PC applications. Very soon, the world will discover just how pathetic conventional CPUs really are.
If Taylor is correct, the initiative will deliver a massive, unprecedented boost in PC performance. We're not talking the 2x or 3x boosts in performance that the PC industry delivers on a regular basis. It could promise as much as 20x or even 100x the performance of today's multi-core CPUs. Yikes.
CUDA cometh

The basic premise is the use of Nvidia's CUDA programming platform (itself closely related to the C programming language) to unlock the increasingly programmable architecture of the latest graphics chips.
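For a flavour of what that looks like in practice, here is a minimal CUDA sketch of the standard data-parallel starting point: adding two arrays with one thread per element. It's purely illustrative - the names and sizes are ours and error checking is omitted - not code from Nvidia or any shipping application.

#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Kernel: each GPU thread computes one element of c = a + b.
__global__ void vecAdd(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n)
        c[i] = a[i] + b[i];
}

int main(void)
{
    const int n = 1 << 20;                  // one million elements
    const size_t bytes = n * sizeof(float);

    // Host buffers.
    float *ha = (float *)malloc(bytes);
    float *hb = (float *)malloc(bytes);
    float *hc = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // Device buffers, plus copies across the PCI Express bus.
    float *da, *db, *dc;
    cudaMalloc((void **)&da, bytes);
    cudaMalloc((void **)&db, bytes);
    cudaMalloc((void **)&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(da, db, dc, n);

    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f\n", hc[0]);           // expect 3.0

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}

The point is the programming model: the same C-like function runs across thousands of lightweight threads at once, which is exactly the shape of workload a GPU is built for.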
On paper, it's extremely plausible. In terms of raw parallel compute power, 3D chips put CPUs to shame. A good recent example is the new room-sized, high-density computing cluster installed by Reading University.
Designed to tackle the impossibly complex task of climate modelling, it weighs in at no less than 20 TeraFlops. That sounds impressive until you realise that just a single example of Nvidia's next big GPU, due this summer, could deliver as much as 1 TFlop. So, a few four-way Nvidia GPU nodes will soon offer the same raw compute power as a supercomputer built using scores of CPU-based racks.
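Taking those figures at face value, and assuming the rather generous premise that performance scales linearly, the arithmetic behind that claim is simple:

    20 TFlops / (4 GPUs per node x 1 TFlop per GPU) = 5 nodes

In other words, on paper about five four-way GPU boxes would match the raw throughput of the Reading cluster.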
General purpose GPU

A little bit closer to home, one of the early applications Nvidia is promoting as a demonstration of the general-purpose prowess of its GPUs is a video encoding application known as Elemental HD.
Downsizing a typical HD movie for an iPod using a conventional PC processor can take eight hours or more, even with a decent dual-core Intel chip. Nvidia says the same job can be done in just over 20 minutes on an 8800 series Nvidia graphics board.
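Again taking the quoted numbers at face value, the implied speedup works out at roughly:

    (8 hours x 60) / 20 minutes = 24x

which is far more than an extra CPU core or two could plausibly deliver.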
"Whenyou look at the question of whether you should transcode video on a GPUor CPU, when you consider it in performance-per-buck terms, it'scurrently obscenely the wrong way round," Taylor says.
And the solution is simple enough. Don't spend any more money overall. Just spend a little less money on your Intel CPU and a little more on your Nvidia GPU.
Hardware PhysX

What's more, Taylor says plans to support the recently acquired PhysX physics-simulation engine on Nvidia's GPUs are also nearing launch. Before the end of May, a total of eight games with GPU-based PhysX are due to be announced. Some 30 to 40 such titles will be available this time next year.
So, that's it then. The game is up for the CPU and Intel alike? Not so fast. For starters, there's a good reason why CPUs don't deliver the raw compute power of contemporary GPUs.
CPU cores are big, complex beasts, designed to turn their hands to almost any task and make a decent fist of it while not excelling in any one area. GPUs, even the most recent and programmable examples, are still a lot less flexible. When they're good, they're great. When they're not, well, they simply won't do the job at all.
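To make that inflexibility concrete, here is a contrived CUDA sketch (ours, not Nvidia's) of the classic GPU failure mode. GPU threads execute in lockstep groups, so when neighbouring threads take different branches the hardware has to run both paths one after the other, and the parallel advantage drains away on exactly the kind of branchy, irregular code a CPU shrugs off.

// Contrived sketch of branch divergence. Threads in the same lockstep
// group that disagree on the branch force the hardware to execute both
// paths serially, so the cheap path still pays for the expensive one.
__global__ void divergent(const int *input, float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n)
        return;

    if (input[i] % 2 == 0) {
        // Expensive path: a long serial loop.
        float x = 0.0f;
        for (int k = 0; k < 1000; ++k)
            x += input[i] * 0.001f;
        out[i] = x;
    } else {
        // Cheap path: a single store. These threads sit idle while the
        // expensive path runs, wasting most of the chip.
        out[i] = (float)input[i];
    }
}

A CPU core with a branch predictor takes this kind of code in its stride; a GPU needs thousands of threads all doing much the same thing to earn its TFlops.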
"At the moment general-purposeGPU applications are admittedly very high end. But increasingly peopleare asking why are scientific research industries including medicineand climate modelling are using GPUs," Taylor says.
The answer is the unbeatable bang-for-buck performance ratio that GPUs deliver. Taylor reckons Nvidia has a large number of partners with consumer-level applications lining up to key into its GPU technology. Several are due to be revealed later this summer.
Waiting game

Until then, however, it's impossible to say whether the benefits will be as spectacular as Nvidia claims. Likewise, we'll have to wait and see just how smoothly it all works. The only non-3D consumer application for GPUs that has been widely tested on the market so far is video decode assist. And that has been a distinctly hit-and-miss affair.
But even if Nvidia can deliver reliable, transparent hardware acceleration for a wide range of applications with its GPUs, it will still have a huge fight on its hands from Intel.
Intel's intriguing new GPU, known as Larrabee, is due out in late 2009 or early 2010. Apart from the fact that it will be based on an array of cut-down x86 processor cores, little is known about its detailed architecture. But as Intel's first serious effort to compete in the GPU market, it's a game-changing product.
For Taylor, of course, the Larrabee project merely confirms that the GPU is where the action is. "Why does Larrabee exist? Why is Intel coming for us? They're coming for us because they can see the performance advantage of our GPUs," Taylor says.
He's probably right. It will be a fascinating contest.
I find this conflict funny. Yes, Nvidia is right that Intel CPUs are overpriced, but Nvidia's hardware is expensive as well, and I don't think they know what they're dealing with. The GPU is nothing without the CPU. People may be spending too much on processors, but the CPU is actually good - it works well and efficiently. Obviously people need to focus on graphics cards because they aren't nearly as powerful as the CPU, so Nvidia just made themselves look REALLY bad for nothing. The whole PhysX thing is stupid too - that's what dual cores are for. Let's put the difficult, non-graphics processes on something that actually has the capacity. Nvidia is acting more desperate than Intel, and they have no reason to.