GDDR5X debuted on NVIDIA's GeForce GTX 1080 earlier this year, and then we saw 12GB of GDDR5X powering the super-powerful Pascal-based Titan X. Now we have GDDR6 being prepped for a debut sometime in 2018.
GDDR6 will push per-pin data rates to over 14Gbps, up from the already generous 10Gbps offered by GDDR5X, and up greatly from the 8Gbps of the newest GDDR5-based cards - before then, GDDR5 ran at 7Gbps. GDDR6 is also more power efficient, at around 20% more efficient than GDDR5.
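Those per-pin rates only become real-world card bandwidth once you multiply by the memory bus width. A quick sketch of the math - the 256-bit bus below is an assumption for illustration, as actual cards ship with various bus widths:

```python
# Effective card bandwidth = per-pin data rate x bus width / 8 (bits -> bytes).
# The 256-bit bus is an assumed example config, not any specific card.
def card_bandwidth_gbs(pin_rate_gbps: float, bus_width_bits: int = 256) -> float:
    """Return total memory bandwidth in GB/sec."""
    return pin_rate_gbps * bus_width_bits / 8

for name, rate in [("GDDR5 (7Gbps)", 7), ("GDDR5 (8Gbps)", 8),
                   ("GDDR5X (10Gbps)", 10), ("GDDR6 (14Gbps)", 14)]:
    print(f"{name}: {card_bandwidth_gbs(rate):.0f} GB/sec on a 256-bit bus")
```

On that assumed 256-bit bus, the jump from 8Gbps GDDR5 to 14Gbps GDDR6 takes you from 256GB/sec to 448GB/sec - a 75% uplift without touching the bus width.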
I'd expect to see GDDR6 in the cards for AMD and NVIDIA in 2018, with NVIDIA set to use GDDR5X on its flagship graphics cards through 2017, alongside HBM2 on the upcoming Volta architecture. AMD has Vega planned for the first half of 2017, which will utilize HBM2 memory - assuming HBM2 isn't prohibitively expensive at the time and is available in high volume.
GDDR6 will most likely be used in the refreshes of Volta, and then I'd wager we'll see Navi using it, too - though AMD has only teased "next-gen" memory on its GPU roadmap, without elaborating on what that "next-gen" memory will be.
You might be too busy enjoying the Battlefield 1 open beta right now to realize AMD has released new Radeon Software Crimson Edition drivers that are ready for both Deus Ex: Mankind Divided and Battlefield 1. You can grab the new drivers right here.
AMD says it has also added new DX11 CrossFire profiles for both games, with the new 16.8.3 hotfix solving some of the issues with random blank or colored screens when gaming on Radeon RX 400 series cards.
NVIDIA's new 372.70 driver is out now, and it arrives with some major additions.
First up is optimizations for World of Warcraft: Legion, the newly launched Battlefield 1 beta (check your e-mail), Deus Ex: Mankind Divided, and Quantum Break (Steam version, due September 14 with DirectX 11 support).
Beyond that, you get Fast Sync for Maxwell GPUs in Extended, Clone, and Surround multimonitor configurations, and various bug fixes - most notably one for the high deferred procedure call (DPC) latency experienced after upgrading to the GTX 1080.
AMD has confirmed it will be launching its next generation Vega architecture in the first half of 2017, saying it will launch Vega-based graphics cards for the "enthusiast market" in 1H 2017. The last we heard, Vega-based graphics cards were launching in March 2017.
For reference, Polaris was announced in December 2015 and reached retail with the Radeon RX 480 graphics card in the last days of June. If Vega follows a similar timeline, we should expect an unveiling in December and, hopefully, a faster turnaround to launch than Polaris managed - perhaps sometime in March-April - in order to make a bigger impact on the market, especially against NVIDIA's formidable Pascal-powered graphics cards.
AMD's next-gen Vega architecture will be an interesting upgrade over Polaris - industry sources have been telling me Vega will be a superior architecture in many ways. Vega will use HBM2 technology, so we can expect much more VRAM than the HBM1-based Radeon R9 Fury X offered with its 4GB. We will probably see 8GB and 12GB models, but I'd like to see a higher-end Vega graphics card with 16GB of HBM2 - ok, AMD?
I reported on HBM3 a few days ago, but all of the details weren't clear - until the Hot Chips conference in Cupertino this week, where Samsung and SK Hynix shared some more details on the next leap in HBM technology.
HBM3 will offer improvements over HBM1 and HBM2 in nearly all areas, starting with taller stacks: HBM3 will feature 8 or more DRAM dies per stack connected via through-silicon vias (TSVs), up from the 2/4/8-die stacks of HBM2. Individual memory dies also grow to up to 16Gb, double the 8Gb of HBM2, meaning 64GB of VRAM on next-gen graphics cards will become a reality.
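The 64GB figure falls straight out of the die and stack counts. Here's the arithmetic as a minimal sketch - the four-stack card layout is an assumption (matching typical HBM designs), not a confirmed product configuration:

```python
# Capacity math behind the 64GB claim. Stack and die counts are assumptions
# drawn from the figures above, not a confirmed HBM3 product spec.
die_capacity_gbit = 16   # HBM3 die, up from 8Gbit on HBM2
dies_per_stack = 8       # "8 or more" dies per stack
stacks_per_card = 4      # assumed card layout, typical for HBM designs

stack_capacity_gbyte = die_capacity_gbit * dies_per_stack / 8  # Gbit -> GByte
card_capacity_gbyte = stack_capacity_gbyte * stacks_per_card
print(f"{stack_capacity_gbyte:.0f} GB per stack, {card_capacity_gbyte:.0f} GB per card")
```

That's 16GB per stack and 64GB across four stacks - quadruple what the same layout yields with HBM2's 8Gb dies.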
HBM3 will also bring lower core voltage and twice the peak bandwidth - another great thing to see - but it won't be arriving until sometime in 2019-2020.
NVIDIA revealed its Tesla P100 graphics card at its GPU Technology Conference earlier this year - the first Pascal-based graphics card, and the first HBM2-powered card from NVIDIA. It was a compute monster, and it was only today, during the annual Hot Chips symposium, that NVIDIA revealed the first die shot of the 610mm² GPU.
The company released the GP100 die shot as part of its presentation on Pascal and NVLink 1.0. Die shots have not been frequent from either NVIDIA or AMD, so it's nice to see the GP100 die out in the wild. GP100 is NVIDIA's first part to feature HBM and NVLink - an important milestone in the company's history - though this exciting technology isn't available on consumer GeForce graphics cards... yet.
NVIDIA's new GP100 die shot shows the HBM2 interfaces at the top and bottom of the picture, with the 4096-bit memory bus capable of transferring data at up to 1TB/sec.
AMD is continuing its push against NVIDIA, securing itself more discrete GPU market share from NVIDIA according to the latest data from Mercury Research, which has AMD gaining throughout 2016.
Mercury Research's data shows that AMD has gained GPU market share for the fourth consecutive quarter, driven by strong GPU sales in late 2015 and throughout 2016. AMD has pulled itself up to 29.9% market share - a big deal considering this time last year AMD was sitting at around 18%, and Mercury Research notes in its press release that this is the first time AMD has seen gains like this since Q1 2012.
Where did AMD's gains come from? According to the report: "The decline in low-end units shipped by NVIDIA resulted in substantial unit share gains for AMD in the desktop standalone segment, though by our estimates revenue share was unaffected due to NVIDIA's strong gaming mix improvement". So AMD is winning in the low- and mid-range markets, while NVIDIA doesn't just dominate the high-end market right now - it owns it.
The new data has AMD sitting pretty with 29.9% market share, while NVIDIA has 70.1% discrete GPU market share.
PCIe 3.0 has been a staple of motherboards and graphics cards for close to 6 years now, but the PCI Special Interest Group (PCI-SIG) has PCIe 4.0 nearly ready, and man is it going to be a huge launch.
The upgraded PCIe 4.0 specification will double the bandwidth, from 8GT/s to 16GT/s per lane, but there are a bunch of other changes we should be more excited about. As it stands, a PCIe 3.0 slot is capable of delivering 75W of power, with most graphics cards requiring additional PCIe power connectors to get up and running. Well, PCIe 4.0 could be the end of that.
PCIe 4.0 will reportedly provide a minimum of 300W through the slot, and possibly up to 500W - more than enough power for any graphics card on the market. Imagine a new NVIDIA GeForce Titan X, or a new Radeon RX 480, without the need for PCIe power connectors. It would make for a mess-free, clean-looking gaming PC - something that's simply impossible today, as there's no way to deliver enough power to high-end graphics cards without auxiliary PCIe power connectors.
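The bandwidth side of the doubling is simple math: lane throughput is the transfer rate times the encoding efficiency. A minimal sketch, assuming PCIe 4.0 keeps the 128b/130b line encoding that PCIe 3.0 uses:

```python
# Per-lane throughput = transfer rate (GT/s) x encoding efficiency / 8 bits-per-byte.
# PCIe 3.0 uses 128b/130b encoding; this sketch assumes 4.0 keeps it.
def lane_bandwidth_gbs(gt_per_s: float) -> float:
    """Return usable bandwidth in GB/sec for a single PCIe lane."""
    return gt_per_s * (128 / 130) / 8

for gen, rate in [("PCIe 3.0", 8), ("PCIe 4.0", 16)]:
    per_lane = lane_bandwidth_gbs(rate)
    print(f"{gen}: {per_lane:.2f} GB/sec per lane, {per_lane * 16:.1f} GB/sec for x16")
```

That works out to roughly 2GB/sec per lane on PCIe 4.0, or about 31.5GB/sec across a full x16 graphics slot, versus just under 16GB/sec today.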
HBM3 is being worked on by SK Hynix and Samsung and will offer up to 64GB of VRAM at higher speeds than HBM2, but a low-cost version of HBM is also in the works, featuring less bandwidth but a lower price point than HBM1 and HBM2.
The new low-cost HBM will feature increased pin speeds, from 2Gbps on HBM2 to around 3Gbps, while memory bandwidth shifts from 256GB/sec per DRAM stack down to around 200GB/sec per stack. This means the upcoming low-cost HBM could reach the mass market, so we could be looking at HBM-powered notebooks and mainstream consumer graphics cards - rather than just the three HBM-based cards AMD offers now: the Radeon R9 Fury X, R9 Fury, and R9 Nano.
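Faster pins but lower bandwidth sounds contradictory until you remember the interface width. A hedged sketch of one way the numbers could reconcile - the 1024-bit and 512-bit per-stack widths below are assumptions used purely to illustrate how a narrower (and cheaper) interface lands near the figures above:

```python
# Per-stack bandwidth = pin rate x interface width / 8 (bits -> bytes).
# The 1024-bit (HBM2) width matches the known spec; the 512-bit width for
# low-cost HBM is an assumption to show how ~200GB/sec could be reached.
def stack_bandwidth_gbs(pin_rate_gbps: float, width_bits: int) -> float:
    return pin_rate_gbps * width_bits / 8

print(f"HBM2 (2Gbps x 1024-bit):         {stack_bandwidth_gbs(2, 1024):.0f} GB/sec per stack")
print(f"Low-cost HBM (3Gbps x 512-bit):  {stack_bandwidth_gbs(3, 512):.0f} GB/sec per stack")
```

Under those assumptions, halving the interface width more than offsets the 50% faster pins - fewer TSVs and a simpler interposer are where the cost savings would come from.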
When the first wave of HBM arrived, we were blown away by its bandwidth (512GB/sec), but it was the form factor that really made me take a step back, allowing for compact, super-fast graphics cards like the Radeon R9 Nano from AMD. Well, HBM2 is already here, used by NVIDIA on its Pascal-based Tesla P100 - but not in the consumer space... yet.
SK Hynix and Samsung are working on new HBM technologies, with HBM3 sitting at the top of the hill. HBM3 will offer twice the bandwidth of HBM2, and at a lower cost. Right now, the technology goes by multiple names - SK Hynix refers to it as HBM3 or HBMx, while Samsung calls it xHBM or Extreme HBM. Either way, the next-generation HBM technology is an improvement over both of its predecessors, HBM1 and HBM2.
HBM2 offers 256GB/sec of bandwidth per DRAM stack (1024GB/sec total with four stacks), while HBM3 doubles that to 512GB/sec per stack (over 2TB/sec total). Better yet, HBM3 should usher in higher-end graphics cards with 64GB of HBM3, which will just be incredible. I don't think we'll see HBM3 on consumer graphics cards anytime soon, but the low-cost HBM technology on the way could be used instead - that, or GDDR5 and GDDR5X, which still offer great performance.
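To see how the per-stack numbers scale to whole-card bandwidth, here's the arithmetic - the four-stack layout is an assumption, matching current HBM2 designs like the Tesla P100:

```python
# Total card bandwidth scales linearly with stack count.
# Four stacks assumed, as on current HBM2 parts like the Tesla P100.
per_stack_gbs = {"HBM2": 256, "HBM3": 512}  # GB/sec per stack, from the figures above
stacks = 4
for gen, bw in per_stack_gbs.items():
    total = bw * stacks
    print(f"{gen}: {total} GB/sec total ({total / 1024:.0f} TB/sec)")
```

Four stacks of HBM2 hit 1TB/sec; the same four stacks of HBM3 would hit 2TB/sec, with no change to the card layout.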