AMD vs. NVIDIA - Are they even playing the same game?
In the GPU world there are many flavors, despite the perception that you either get AMD or NVIDIA. There is Matrox, WildCat/3D Labs, S3, VIA and a few more. But in the end, what you hear the most about are NVIDIA and AMD. These two companies are compared back and forth in a game of favorites that would make a selfish four year old proud. Still, can you really compare them on equal terms?
On the surface you can. After all, you are measuring the number of frames per second the GPU (installed in a specific test system) can render. The magic number you are looking for is between 28 and 32 frames per second; this is the rate that will "fool" the human eye into seeing full, fluid motion.
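To put that target in perspective, here is a quick back-of-envelope sketch (purely illustrative Python, nothing vendor-specific) of the time budget a GPU gets to render each frame at a given frame rate:

```python
# Milliseconds of rendering time per frame at a target frame rate.
def frame_budget_ms(fps):
    """One second (1000 ms) divided evenly across the frames."""
    return 1000.0 / fps

# The article's "magic" range, plus a common higher target for comparison.
for fps in (28, 30, 32, 60):
    print(f"{fps:3d} fps -> {frame_budget_ms(fps):5.1f} ms per frame")
```

At 30 frames per second the GPU has roughly 33 milliseconds to finish each frame; blow that budget and motion stops looking fluid.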
Now, there are ways to compensate for lower frame rates, but that is not what we are talking about here. No, what we are looking at today is more of a fundamental look at how each of these GPUs gets you to that magical number. This will not be a deep technical "white paper", but more of an everyday reference and a look at where each brand differs in implementation and execution.
- A Shader is a Shader
One of the things that both NVIDIA and AMD like to tell you about is the number of shaders they have. You will hear these numbers quoted as an indication of how many parallel operations each GPU can handle. For the purposes of our discussion we will talk about the current best single-GPU card from each: the AMD Radeon HD 5870 and the NVIDIA GTX 285. AMD boasts a staggering 1600 shaders (AMD calls them Stream Processors), while the GTX 285 has a comparatively small 240.
Easy to see who wins this game, right? - Well, you would be wrong. The reason is that AMD uses a cluster (or node) arrangement of shaders called Vec5D. In the 5870 there are 320 of these clusters; each has five shader units inside, making up the 1600 total that is listed. Four are lightweight units capable of handling only simple instructions and commands. The fifth is a "fat" unit capable of handling complex instructions easily. NVIDIA's shaders are multi-purpose and can handle either style of instruction. But wait, there is more. AMD's Stream Processors operate only at the core clock of the GPU. This means the shaders are running at 850MHz, compared to the 1476MHz shader clock on the GTX 285.
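To see what those raw counts and clocks mean on paper, here is a naive peak-issue-rate calculation (a Python sketch of my own; the clock figures are the cards' reference speeds, and real peak FLOPS ratings also depend on how many operations each shader issues per cycle, which is ignored here):

```python
# Naive peak shader-issue rate: unit count times clock speed.
# Clocks are reference speeds (an assumption); sustained rates will be lower.
def peak_ops_per_second(shaders, clock_mhz):
    return shaders * clock_mhz * 1e6

radeon_5870 = peak_ops_per_second(1600, 850)   # AMD: many slower-clocked units
gtx_285     = peak_ops_per_second(240, 1476)   # NVIDIA: fewer, faster units

print(f"HD 5870 : {radeon_5870 / 1e12:.2f} trillion shader ops/s (peak)")
print(f"GTX 285 : {gtx_285 / 1e12:.2f} trillion shader ops/s (peak)")
# On paper AMD leads by roughly 4:1 - but only if all five units
# in every cluster can be kept busy at once.
```

That last comment is the whole story of the next section: the paper advantage only holds when the instruction mix cooperates.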
Now, what does all this have to do with your gaming? - Well, it comes into play when you consider how game engines are coded. If the game code is all small and simple instructions, then the AMD GPU has a very large upper hand, even considering the faster speed of the NV shaders. If the game code comes in complex and bulky blocks, then AMD has only 320 stream processors (the fat units) that can execute it, and even then at a significantly slower clock. This problem has come to light more and more in the world of GPGPU computing, but it is also starting to show up in gaming situations.
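The effect of instruction mix can be sketched with a toy scheduling model. This is purely illustrative Python built on my own simplifications: 320 clusters of four lite units plus one fat unit for AMD, 240 do-anything shaders for NVIDIA, reference clock speeds, and no dependencies, memory stalls or real scheduler behavior:

```python
import math

def amd_cycles(simple, complex_, clusters=320):
    # Per cycle, each cluster runs up to 4 simple ops on its lite units
    # and 1 complex op on its fat unit (simple-on-fat is ignored here).
    cycles_for_complex = math.ceil(complex_ / clusters)        # fat units only
    cycles_for_simple  = math.ceil(simple / (4 * clusters))    # lite units
    return max(cycles_for_complex, cycles_for_simple)

def nvidia_cycles(simple, complex_, shaders=240):
    # Every shader handles either kind of instruction, one per cycle.
    return math.ceil((simple + complex_) / shaders)

def time_us(cycles, clock_mhz):
    # Cycles divided by clock (MHz) gives microseconds.
    return cycles / clock_mhz

workloads = {
    "all simple ops ": (1_000_000, 0),
    "all complex ops": (0, 1_000_000),
}
for name, (s, c) in workloads.items():
    amd = time_us(amd_cycles(s, c), 850)       # reference core clock (assumed)
    nv  = time_us(nvidia_cycles(s, c), 1476)   # reference shader clock (assumed)
    print(f"{name}: AMD {amd:6.2f} us  vs  NVIDIA {nv:6.2f} us")
```

Run the two mixes through it and the article's point falls out of the arithmetic: on an all-simple workload AMD finishes roughly three times faster despite the slower clock, while on an all-complex workload the fat units become the bottleneck and NVIDIA pulls ahead.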
The issue is further complicated as game developers try to push more parts of the game through the massively parallel GPU. It is great that the GPU is able to handle this and that there are now tool kits and standards to help developers, but again, it is a question of implementation. Remember, NVIDIA has had their GPGPU code base (CUDA) for almost three years. They have shifted the design of their GPUs to support this code (and have added OpenCL support through recent driver updates).
AMD, for their part, has had essentially the same GPU design going back to the 29xx series, but has not had the money or resources to invest in the gaming community the way NVIDIA has. This has led to many games being better optimized for the way NVIDIA GPUs process code. Yes, there are games that AMD (and ATI before them) invested heavily in, but due to a lack of money these really are few and far between. Does this mean that NVIDIA is controlling the market unfairly? Maybe, but again, that is not the point of this article.
If you think back to the days of the X19xx series, ATI (and then AMD) was talking about putting physics onto the GPU. At the time they claimed that by doing this they could show a greater increase in performance and remove the need for the fledgling PPU (Physics Processing Unit) put out by AGEIA. Unfortunately, they never did anything with it. The idea died with that one single showing of Havok physics running on an ATI GPU.
NVIDIA did not let it die, though; they knew they were behind ATI in terms of developing physics on the GPU, so they did the best thing they could: they bought AGEIA and incorporated its PhysX code into their GPUs. The results are pretty clear, as we are seeing newer and better games hit the shelves with PhysX support. It is true that some of the first titles to support this new feature were simply terrible, but with each new generation we see PhysX added in; not so much for the "wow, that's cool" factor, but to actually enhance the game.
Where is AMD in all of this? - Well, again it goes back to the design of the GPU. Physics code (especially PhysX) is bulky. On an AMD GPU this bulky code is limited to the number of stream processors that can execute it, which would seriously slow any game down, making it difficult to implement on AMD GPUs. Not impossible, just difficult. OpenCL and AMD's work with projects like the Bullet physics engine will help to bring a PhysX alternative to the market, but it will be hard to displace the established code base, especially when NVIDIA is willing (and able) to invest money and resources in helping game developers code their new titles.
However, the game is slowly changing; AMD just received a nice $1.4 billion shot in the arm that helped them pay off around half of their debt. This means they should be able to put more cash into development and partnerships, like ATI did in the old days. Once they get back into the community, you may see more games optimized for AMD GPUs instead of NVIDIA's. The transition to smaller code blocks should not be too hard, but it will take time to get there. I would not expect to see physics on AMD GPUs for at least another year, and by then we can be sure that NVIDIA will have an answer waiting, too.
- A Final Thought
This article was not meant as a detailed explanation of the NVIDIA and AMD architectures. I did not go in depth into the way each implements memory or other functions of the GPU (such as image quality). Rather, it was meant to show that although both are GPUs, there are large enough differences at their core to make it easy to understand how a game can play well on one GPU and poorly on another, even without cheats.
As far as which direction is the best one, well, my personal opinion is that having multi-purpose cores is better than limiting the number and type of instructions your available shaders can execute.
At the same time, I also feel that AMD/ATI has always had the upper hand in terms of overall image quality. I just feel that lately their execution on ideas and new directions has been abysmally poor, and that is something they need to work on. They have some great minds, and there is no doubt that the new 5xxx series is an incredible product, but unless they are willing to work more closely with developers, they will continue to find themselves behind the power curve in many cases.