Well! This is turning into quite the fight between the GPU titans, NVIDIA and AMD.
Information about NVIDIA's upcoming GF110-based GPU, the GeForce GTX 580, has been slowly leaking out for a few weeks now - but the specs are starting to get a bit more precise, moving beyond guesstimates.
For nearly six months now, the GTX 480 has been the fastest single-GPU card available - yes, the Radeon HD 5970 is faster, but it's a dual-GPU design. AMD owned the market through Q4 2009 and Q1 2010, but NVIDIA has at least been able to claim that single-GPU crown despite the card's sour points (power, heat, noise, delays, etc.).
AMD is hoping to counter that with the Radeon HD 6970 - but NVIDIA looks set to be right there at the same time with a competing GPU.
The GTX 580 is built on an optimized chip called the GF110 (the 480 used the GF100). NVIDIA has shifted its target for the Fermi architecture since launch - it was first aimed at servers and the like, but the bloat associated with the GPGPU portions of the chip caused it to draw more power, which in turn created more heat, which in turn required a faster-spinning, louder fan to keep it cool. Those portions also take up valuable space on the 40nm die that could be used to make the card faster in games.
This is where the change begins: NVIDIA is now going back to gaming. The GTX 580 will make full use of all 512 CUDA cores by activating all 16 streaming multiprocessor clusters (SM units) - one of which was disabled on the 480. With all 16 SM units active, the GTX 580 also gets access to more texture units and ROPs.
The 480 has 3 billion transistors - by refocusing on gaming, NVIDIA could cut as many as 300 million of them from the GPU. It won't be as efficient in GPGPU applications, but only a few users (compared to the many gamers and benchers) care about that.
Now, onto more important things: the specs. Here is a chart of what we can expect:
We can see increased frequencies, the aforementioned full complement of CUDA cores, more memory bandwidth, and the same 384-bit memory bus. This is made possible by a more mature manufacturing process, which has allowed NVIDIA to lower the GF100's power consumption by 10-15% without any physical changes.
It's shaping up to be real competition for AMD, but are such simple changes enough? Will it still require SLI for Surround Vision? Will it still draw more power than the stated power consumption numbers, a la the GTX 480? Whatever happens, we are ALL in for the GPU fight of our lives - and the end user always wins in this fight.
The best bit? It's due before the end of November - as is the 69x0 range from AMD.