AMD vs. NVIDIA - Are they even playing the same game?

We all have our favorites in the GPU and CPU world, but when we compare them do we know what we are looking at?

Editorials in Video Cards | Posted: Nov 27, 2009 12:34 pm


In the GPU world there are many flavors, despite the perception that you either get AMD or NVIDIA. There is Matrox, 3Dlabs (of Wildcat fame), S3, VIA and a few more. But in the end, the two you hear the most about are NVIDIA and AMD. These two companies are compared back and forth in a game of favorites that would make a selfish four-year-old proud. Still, can you really compare them in equal terms?

 

On the surface you can. After all, you are measuring the number of frames per second the GPU (attached to a specific bill of materials) can render. The magic number you are looking for is between 28 and 32 frames per second; that is roughly the rate at which the human eye is "fooled" into seeing full, fluid motion.
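To put that target in concrete terms, here is a quick back-of-the-envelope sketch (our own illustration, not a vendor figure): the frame rate goal translates directly into a per-frame time budget the GPU has to hit.

```python
# Frame-rate target -> per-frame time budget (simple arithmetic).
for fps in (28, 30, 32, 60):
    frame_time_ms = 1000.0 / fps
    print(f"{fps} fps gives the GPU {frame_time_ms:.1f} ms to render each frame")
```

At 30 frames per second the GPU has about 33ms to draw everything in the scene; every extra frame per second shrinks that budget.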

 

Now, there are ways to compensate for lower frame rates, but that is not what we are talking about here. No, what we are taking today is a more fundamental look at how each of these GPUs gets you to that magic number. This will not be a deep technical white paper, but more of an everyday reference and a look at where each brand differs in implementation and execution.

 

- A Shader is a Shader

 

One of the things that both NVIDIA and AMD like to tell you about is the number of shaders they have. You will hear these numbers quoted as an indication of how many parallel operations each GPU can handle. For the purposes of our discussion we will talk about the current best single-GPU card from each: the AMD Radeon HD 5870 and the NVIDIA GTX 285. AMD boasts a staggering 1600 shaders (AMD calls them Stream Processors), while the GTX 285 has a comparatively small 240.

 

Easy to see who wins this game, right? - Well, you would be wrong. The reason is that AMD uses a cluster or node style of shaders called Vec5D (a five-wide VLIW arrangement). In the 5870 there are 320 of these nodes; each has five shader units inside, making up the 1600 total that is listed. Four are "lite" units capable of handling only simple instructions and commands; the fifth is a "fat" unit that can handle complex instructions as well. NVIDIA's shaders are multi-purpose and can handle either style of instruction. But wait, there is more. AMD's Stream Processors operate only at the core clock of the GPU. This means the shaders are running at 850MHz, compared to the 1476MHz shader clock on the GTX 285.
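As a rough framing of what those raw numbers imply (a naive sketch of our own; real instruction issue rules are far more complicated), multiplying shader count by shader clock gives a crude upper bound on simple-instruction throughput:

```python
# Naive upper bound: shaders * shader clock. Real GPUs issue instructions
# under much more complicated rules; this is only a framing device.
cards = {
    "Radeon HD 5870": {"shaders": 1600, "clock_mhz": 850},
    "GeForce GTX 285": {"shaders": 240, "clock_mhz": 1476},
}

for name, c in cards.items():
    ops_per_sec = c["shaders"] * c["clock_mhz"] * 1e6
    print(f"{name}: ~{ops_per_sec / 1e9:.0f} billion simple ops/sec (naive)")
```

On this naive count AMD looks to be nearly four times ahead; the catch, as we are about to see, is the word "simple".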

 


 

Now, what does all this have to do with your gaming? - Well, it comes into play when you consider how game engines are coded. If the game code is all small, simple instructions, then the AMD GPU has a very large upper hand, even considering the faster clock of the NV shaders. If the game code comes in complex, bulky blocks, then AMD effectively has only 320 stream processors that can execute it, and at a significantly lower clock. This problem has come to light more and more in the world of GPGPU computing, but it is also starting to show up in gaming situations.
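Here is a toy model of that trade-off (an assumption-laden sketch of our own, not a benchmark or a vendor formula): we assume all 1600 AMD units can issue a simple instruction per clock but only the 320 fat units can issue a complex one, while all 240 NVIDIA shaders are assumed to issue either kind every clock.

```python
# Toy model: effective throughput as the instruction mix shifts from all
# simple (fraction 0.0) to all complex (fraction 1.0). The weighting is
# our own simplification, not how either GPU actually schedules work.
def effective_ops(simple_units, complex_units, clock_mhz, complex_fraction):
    simple = simple_units * (1.0 - complex_fraction)
    cplx = complex_units * complex_fraction
    return (simple + cplx) * clock_mhz * 1e6

for frac in (0.0, 0.25, 0.5, 1.0):
    amd = effective_ops(1600, 320, 850, frac)   # HD 5870 figures
    nv = effective_ops(240, 240, 1476, frac)    # GTX 285 figures
    print(f"complex fraction {frac:.2f}: "
          f"HD 5870 ~{amd / 1e9:.0f} Gops, GTX 285 ~{nv / 1e9:.0f} Gops")
```

Crude as it is, the model shows the shape of the argument: AMD's advantage is enormous on simple code and evaporates entirely once the mix turns complex, where the GTX 285's faster, do-anything shaders pull ahead.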

 

The issue is further complicated as game developers try to push more parts of the game through the massively parallel GPU. It is great that the GPU is able to handle this, and that there are now toolkits and standards to help developers, but again, it is a matter of implementation. Remember, NVIDIA has had its GPGPU code base (CUDA) for almost three years. It has shaped its GPU designs to support this code (and has added OpenCL support through recent driver updates).
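For readers who have never seen the GPGPU model, here is a hedged, CPU-only sketch of its shape (plain Python standing in for a CUDA or OpenCL kernel; no real GPU API is used): the developer writes one tiny function per data element, and the hardware fans it out across thousands of shaders.

```python
# CPU-only illustration of the kernel-per-element model that CUDA and
# OpenCL expose. On a GPU, each index would run on a separate shader
# thread; here we simply loop to show the shape of it.
def saxpy_kernel(i, a, x, y, out):
    # One "thread's" worth of work: a single element
    out[i] = a * x[i] + y[i]

n = 8
a = 2.0
x = [float(i) for i in range(n)]
y = [1.0] * n
out = [0.0] * n

for i in range(n):  # a GPU would launch these "threads" in parallel
    saxpy_kernel(i, a, x, y, out)

print(out)  # [1.0, 3.0, 5.0, 7.0, 9.0, 11.0, 13.0, 15.0]
```

Whether a kernel like this runs well on 1600 lite-and-fat units or 240 general-purpose ones depends entirely on what goes inside that function, which is exactly the implementation question above.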

 

AMD, for their part, has had largely the same GPU design going back to the 29xx series, but has not had the money or resources to invest in the gaming community the way NVIDIA has. This has led to many games being better optimized for the way NVIDIA GPUs process code. Yes, there are games that AMD (and ATI before them) invested heavily in, but due to a lack of money these really are few and far between. Does this mean that NVIDIA is controlling the market unfairly? Maybe, but again, that is not the point of this article.

 


 

If you think back to the days of the X19xx series, ATI (and then AMD) was talking about putting physics onto the GPU. At the time they claimed that doing this would deliver a greater increase in performance and remove the need for the fledgling PPU (Physics Processing Unit) put out by AGEIA. Unfortunately, they never did anything with it; the idea died with that one single showing of Havok Physics running on an ATI GPU.

 

NVIDIA did not let it die, though; they knew they were behind ATI in developing physics on the GPU, so they did the best thing they could: they bought AGEIA and incorporated its code into their GPUs. The results are pretty clear, as we do see newer and better games hitting the shelves with PhysX support. It is true that some of the first titles to support this new feature were simply terrible, but with each new generation we see PhysX added in; not so much for the "wow, that's cool" factor, but to actually enhance the game.

 
