NVIDIA will show off its next-generation Pascal architecture at its GPU Technology Conference next week, where we should be introduced to the GeForce GTX 1080 and GTX 1070 video cards - with their purported cooler shrouds teased just a couple of hours ago.
We could be surprised, and NVIDIA could unveil their new Pascal-powered Titan X successor - which is something I think we'll see. I think NVIDIA will unveil the GP100 GPU, rocking 16-32GB of HBM2 and a price of $1000-$2000 (I'd say $1499 for the 16GB HBM2 version and $1999 for the 32GB HBM2 card). NVIDIA could then drop the GTX 1080 for something like $599, rolling with GDDR5X - still providing a large increase in performance over the GTX 980 and GTX 980 Ti, thanks to the 16nm node and the new Pascal architecture.
NVIDIA's GPU Technology Conference is right around the corner (it's next week!) and just in time for the party are some images of the purported GeForce GTX 1080 and GTX 1070 with revised cooler shrouds.
If these images are real, we're looking at the first real photos of NVIDIA's Pascal-based video cards. They could easily be faked, or created just to get us all excited, but if they're real, the cooler looks very similar to those of the GTX Titan X and GTX 980 Ti, only fiercer - it has a military feel to it and reminds me of ASUS ROG hardware.
AMD has just released their new VR Ready Crimson 16.3.2 drivers, which are recommended for GCN cards - as well as the new dual-Fiji video card, the Radeon Pro Duo.
The new Crimson 16.3.2 drivers also provide support for VR headsets in the form of the just-released Oculus Rift and HTC Vive, with Radeon video card owners enjoying features of AMD's LiquidVR SDK, including Asynchronous Shaders, Affinity Multi-GPU, and Quick Response Queue.
AMD is committed to VR, as they've stated they have "83% of all tethered VR experiences" powered by Radeon technology. The new Crimson 16.3.2 drivers add support for the Oculus Rift SDK v1.3 and the Radeon Pro Duo, and are fully optimized for the Rift.
You can grab the new Crimson 16.3.2 drivers right here.
NVIDIA's new 364.72 drivers are here, and represent a major release. Among the inclusions: support for the Oculus Rift and HTC Vive, optimizations for VR titles (Eve Valkyrie, Elite Dangerous, Chronos, Project Cars, and more), and optimizations for non-VR titles (Dark Souls 3, Killer Instinct, Paragon, and Quantum Break).
As you may know, many reports of stability issues have surfaced with NVIDIA's last couple of driver releases, so hopefully that's over with this release; it's encouraging to see many significant stability improvements in the release notes, and to see mostly positive experience reports across the web.
Grab the drivers here or through GeForce Experience now.
AMD has made a huge deal about their Radeon video cards featuring support for Asynchronous Compute, with one of the standout games with Async Compute support being the recently released Hitman from IO Interactive.
During the Game Developers Conference earlier this month, IO Interactive Lead Render Programmer Jonas Meyer held a session titled Advanced Graphics Techniques Tutorial Day: Rendering 'Hitman' with DirectX 12. During the session, it was revealed that NVIDIA GeForce video cards saw no benefit from Async Compute, with IO Interactive claiming to be working with NVIDIA to fix this.
But, even with the ACEs (Asynchronous Compute Engines) on the GCN architecture, Radeon cards only see a 5-10% performance increase. Async Compute is used for SSAA (Super-Sampling Anti-Aliasing), SSAO (Screen Space Ambient Occlusion) and the calculation of light tiles in Hitman. Ashes of the Singularity makes use of Async Compute, too.
AMD laid out its GPU architecture roadmap through to 2019 at its huge Capsaicin event during the Game Developers Conference, but now we're hearing rumbles on its exciting new Vega GPU - due out in 2017.
Vega will reportedly rock a huge 4096 stream processors based on the Greenland GPU, with improvements in the way of the GCN 4.0 architecture, which is included in the IP v9.0 generation of graphics chips under development at AMD.
We should expect Vega 10 to be AMD's flagship product from the Greenland GPU era - rocking somewhere between 15-18 billion transistors, and the exciting new HBM2 technology which offers up to 1TB/sec of memory bandwidth. Vega looks like it'll be fighting against NVIDIA's compute-powerful GP100 (the Pascal-based successor to the GTX Titan X and GTX 980 Ti) - as Vega is the only HBM2-powered card on AMD's roadmap for 2017.
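That 1TB/sec figure checks out under the commonly cited HBM2 configuration - four stacks, each with a 1024-bit interface running at 2Gbps per pin. These are assumed values (AMD hadn't confirmed Vega's exact memory setup), but the arithmetic is a useful sanity check:

```python
# Back-of-the-envelope HBM2 bandwidth check.
# Assumed values: 4 stacks, 1024-bit interface per stack, 2Gbps per pin.
PIN_RATE_GBPS = 2.0     # per-pin data rate of one HBM2 stack
STACK_BUS_BITS = 1024   # interface width per stack
STACKS = 4              # stacks in a flagship configuration

per_stack_gbs = PIN_RATE_GBPS * STACK_BUS_BITS / 8  # bits -> bytes
total_gbs = per_stack_gbs * STACKS

print(per_stack_gbs)  # 256.0 GB/s per stack
print(total_gbs)      # 1024.0 GB/s, i.e. ~1TB/sec
```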
As we get closer to NVIDIA's GPU Technology Conference, it should come as no surprise that more leaks are arriving on AMD's next-gen Polaris architecture. This time, it's in the form of the Radeon R9 490 and R9 480 video cards.
According to the new leaks, AMD will slap 8GB of the new GDDR5 (and possibly GDDR5X) memory onto the R9 400 series cards, with a 256-bit memory bus. There'll also be 2304 fourth-generation GCN cores, which should be able to easily compete against the likes of NVIDIA's Pascal-based GeForce GTX 1070 and GTX 1080 cards (placeholder names - I really don't see NVIDIA calling them the GTX 1000 series).
AMD's Polaris cards will be made on the 14nm FinFET process, but remember - these cards are the R9 490 and R9 480, not the R9 490X, which should be a little beefier. The R9 490X could be different, featuring the faster GDDR5X, while the R9 490 might retain the GDDR5 standard.
AMD's Capsaicin event at GDC was quite the blast: not only did they reveal the Radeon Pro Duo, their dual-GPU, VR-focused monster card, they also took time to show off just how potent Polaris 10 actually is. And someone was lucky enough to get a close-up of its behind - which looks like any other GPU's derrière.
Looking closely, which is easy here, you can see that the connectors include three DisplayPort (presumably 1.3), one HDMI (likely 2.0), and a DVI-D port. We also get our first peek at the prototype PCB - though it's an engineering sample, so things could change in the future. The best part is the small form factor it happens to be in: AMD is definitely committed to bringing more power to smaller form factors.
This is "Big Polaris", or Polaris 10, that's running the newest Hitman using DX12. It's stuffed in a Cooler Master Elite 110 case, meaning the board is minuscule to be able to fit into that case. The PCB is probably around the size of the R9 Nano and should consume less power than any Fiji based board to date.
The dream of higher bandwidth at a slightly lower cost compared to HBM is coming closer to reality now that Micron has begun shipping samples of its GDDR5X chips to customers for inclusion in prototypes. This means that AMD and NVIDIA can now properly test the increase in bandwidth compared to normal GDDR5 and even HBM(2).
It looks like at the moment Micron is able to ship two densities, 8Gb and 16Gb, which allow for up to 16GB of VRAM over a 256-bit-wide memory bus, with each chip relegated to a single 32-bit channel. Don't fret, however: even though it's a comparatively small memory bus, internal changes to the structure still allow far more bandwidth to travel over it. It's akin to increasing the speed limit while the lane stays the same size. The result is that we could see up to 448GB/s of bandwidth, which is similar to first-generation HBM, though without the restrictions on memory die size. Power consumption, too, has been reduced slightly to offset any increase from higher clock speeds and more memory chips on the board.
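As a quick check of that peak number (and note the units: 448 gigabytes per second, not gigabits), the math works out if you assume GDDR5X's top-end 14Gbps per-pin data rate - an assumption on my part, since shipping speeds weren't confirmed yet:

```python
# Reproducing the ~448GB/s GDDR5X figure.
# Assumption: 14Gbps per-pin data rate on a 256-bit bus.
PER_PIN_GBPS = 14.0  # assumed per-pin data rate
BUS_BITS = 256       # total memory bus width

bandwidth_gbs = PER_PIN_GBPS * BUS_BITS / 8  # bits -> bytes
print(bandwidth_gbs)  # 448.0 GB/s
```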
As of right now, it looks like both AMD and NVIDIA are interested in using GDDR5X in their next-generation products. From the Capsaicin event, we learned that HBM2 will not be making an appearance until Vega, even though first-generation HBM has been confirmed to be part of Polaris alongside traditional GDDR5 and GDDR5X memory. NVIDIA, on the other hand, will be making great use of Micron's faster tech by likely including 8GB of it in their upcoming GTX 1080, which should be revealed at GTC in April.
NVIDIA has just doubled down on its latest Quadro M6000 professional video card, with the new Quadro M6000 featuring a huge 24GB of GDDR5. The previous model had 12GB of VRAM.
As for what makes the Quadro M6000 tick, it is technically similar to the GTX Titan X, with a full GM200 core and 6 Graphics Processing Clusters. Each of the Graphics Processing Clusters features four SMMs (Maxwell Streaming Multiprocessors), with 128 CUDA cores in each SMM block. There's 3072 CUDA cores, 192 TMUs, 96 ROPs and 3MB of L2 cache, with six 64-bit memory controllers - for a 384-bit memory bus - and the GPU clocked at 988MHz.
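Those unit counts can be cross-checked with simple arithmetic: 3072 CUDA cores spread over 6 GPCs of 4 SMMs each works out to 128 cores per SMM, and six 64-bit controllers make up the 384-bit bus:

```python
# Cross-checking the quoted GM200 unit counts.
GPCS = 6                 # Graphics Processing Clusters
SMMS_PER_GPC = 4         # Streaming Multiprocessors per GPC
TOTAL_CUDA_CORES = 3072
MEM_CONTROLLERS = 6
CONTROLLER_BITS = 64     # width of each memory controller

total_smms = GPCS * SMMS_PER_GPC
cores_per_smm = TOTAL_CUDA_CORES // total_smms
bus_width = MEM_CONTROLLERS * CONTROLLER_BITS

print(total_smms)     # 24 SMMs
print(cores_per_smm)  # 128 CUDA cores per SMM
print(bus_width)      # 384-bit memory bus
```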
The only difference between the older Quadro M6000 and the new model is that the new model features 24GB of GDDR5, up from the 12GB on the previous M6000. The card has a 225W TDP, with a single 8-pin PCIe power connector, DVI, HDMI and 3 x DisplayPort outputs.