The hot rumor to start the week kicks off with NVIDIA's upcoming next-gen Volta GPU architecture: a post on the Beyond3D forums claims NVIDIA will be using a custom 12nm process from TSMC. Remember, this is all speculation and rumor, but we should have better clarification directly from NVIDIA at GTC 2017 in May.
NVIDIA shifting to 12nm is an interesting move, as 10nm is an SoC-focused node - which is why Qualcomm is making its new Snapdragon 835 processor on 10nm - while 7nm isn't far away. The shift from 16nm to 12nm should provide NVIDIA with more performance than the current 16nm process, which is always welcomed.
GeForce GTX 30 Series - What To Expect
What can we expect from NVIDIA's next-gen GTX 30 series cards? Well, we should see three different Volta GPUs released in 2018 - the GV104, GV102, and GV110. NVIDIA will use the GV104 in its slightly higher than mainstream segment at $400 or so, where we should see the GeForce GTX 3070 and GTX 3080 powered by the GV104 core, with up to 16GB of GDDR6 RAM on a 256-bit memory bus.
As for the higher-end GV102-based product, we could expect up to 32GB of GDDR6 or HBM2 on these cards, with GDDR6 at the full 16Gbps providing 512GB/sec of memory bandwidth. This could scale much higher: if HBM2 is thrown into the mix, we could see up to 1TB/sec (1024GB/sec) of memory bandwidth.
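Those bandwidth figures are easy to sanity-check with the standard formula (per-pin data rate times bus width, divided by eight to get bytes). A quick sketch, assuming the rumored 256-bit bus for GDDR6 and a hypothetical 4096-bit HBM2 interface for the 1TB/sec scenario:

```python
# Peak memory bandwidth (GB/s) = per-pin rate (Gbps) * bus width (bits) / 8
def bandwidth_gb_s(rate_gbps, bus_bits):
    """Peak memory bandwidth in GB/s from per-pin data rate and bus width."""
    return rate_gbps * bus_bits / 8

# 16Gbps GDDR6 on the rumored 256-bit bus
print(bandwidth_gb_s(16, 256))   # 512.0 GB/s

# Hypothetical HBM2 setup: 2Gbps effective per pin across a
# 4096-bit interface (four stacks) would reach the quoted 1TB/sec
print(bandwidth_gb_s(2, 4096))   # 1024.0 GB/s
```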
The GV102 core will be the Titan X successor, which is why I have predicted we will see up to 32GB of RAM being offered. Sure, it sounds crazy now - but Volta is a 2018 product, and GPUs are going to be getting much more complicated, and much more technologically advanced in the next 12-24 months.
I still remember when AMD launched the Radeon Pro Duo nearly a year ago now; it was aimed at the prosumer market, and priced accordingly. Until now.
AMD and its various AIB partners have dropped the price on the Radeon Pro Duo from nearly $1500, to just $799 - not bad considering it's the fastest graphics card from Radeon Technologies Group. The Radeon Pro Duo rocks dual Fiji GPUs with 4GB of HBM1 each, for a total of 8GB of HBM1.
Each of the Fiji GPUs has 4096 stream processors (8192 total) - giving the Radeon Pro Duo the highest GPU core count of any graphics card, ever. We have 64 ROPs and 256 TMUs per GPU, with each Fiji GPU running at 1000MHz.
AMD created the Radeon Pro Duo for 4K gaming and professional content creation, with 16 TFLOPs of 32-bit single precision performance, the highest of any graphics card on the market right now. Better yet, the Radeon Pro Duo is the first workstation graphics card to feature HBM1 technology, with the 4GB of HBM1 clocked at 500MHz, providing 512GB/sec bandwidth thanks to its 4096-bit memory bus.
If applications are designed to support the dual GPUs, you can utilize the 512GB/sec from each set of HBM1 for a total of 1TB/sec for maximum performance.
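The Pro Duo's headline numbers check out with some quick back-of-the-envelope math - a sketch assuming the usual 2 ops per clock (fused multiply-add) for the FLOPs figure:

```python
# Single precision throughput: 2 GPUs x 4096 stream processors
# x 2 ops per clock (FMA) x 1.0GHz core clock
tflops = 2 * 4096 * 2 * 1.0e9 / 1e12
print(tflops)  # 16.384 -> the ~16 TFLOPs quoted above

# HBM1 bandwidth per GPU: 500MHz clock, double data rate, 4096-bit bus
hbm1_gb_s = 500e6 * 2 * 4096 / 8 / 1e9
print(hbm1_gb_s)  # 512.0 GB/s per GPU, 1TB/sec across both
```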
GIGABYTE kicked some serious graphics card ass in 2016 with their solid Xtreme Gaming series, but the company is shifting gears into the Aorus brand.
The first Aorus-based graphics card will be the GeForce GTX 1080 Xtreme Edition, slightly modified from the Xtreme Gaming series card. The Aorus GeForce GTX 1080 Xtreme Edition features clocks of 1784/1936MHz for base and boost, respectively.
The usual 8GB of GDDR5X is here at the standard 10GHz, with 2 x 8-pin PCIe power connectors that will ensure the Aorus GTX 1080 Xtreme Edition will smash past 2GHz on a GPU overclock. But now we'll talk about how the Aorus version differs from the Xtreme Gaming version.
Now that I'm back home and at my desktop, I'm sinking my teeth into everything GPU related that has happened over the last couple of weeks. First, we've had the official unveiling of the Vega GPU architecture that will be launching in May. Second, leaked decks of AMD's upcoming GPUs have appeared online, teasing Vega 10 and its dual-GPU brother, Vega 20 - and even Navi 10, oh and the dual-GPU version of Navi 10.
If you thought Vega 10 was going to be it, you'll have to wait for the dual-GPU version, expected in late 2017 with 1.5x the performance. Right now, Vega 10 will consume around 225W according to the leaked slides and my industry sources. We might see this reduced if AMD keeps its reference card under 200W, leaving AIB partner cards room to clock the hell out of the Vega 10 GPU and leap up to 250-300W.
The dual-Vega 10 graphics card will most likely arrive as a reference-only card from AMD, featuring a 300W TDP and dual Vega 10 GPUs. The clocks will be reduced, as with all dual-GPU cards, so we should see them drop from the estimated 1465MHz of a single Vega 10 to 1000-1200MHz. This will let the card scale well, to around 1.5x the performance of a single Vega 10 graphics card, while hitting the 300W ceiling on power consumption. I'm expecting a heftier cooler, and 16GB of HBM2 on the dual Vega 10 graphics card.
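That ~1.5x figure falls straight out of the clock math - a rough sketch, assuming a 1100MHz mid-point of the 1000-1200MHz range and perfect dual-GPU scaling (real-world scaling would be somewhat lower):

```python
# Rough scaling estimate for the dual Vega 10 card: two GPUs at
# reduced clocks vs. one GPU at the full estimated reference clock.
single_clock = 1465  # MHz, estimated Vega 10 reference clock
dual_clock = 1100    # MHz, assumed mid-point of the 1000-1200MHz range

scaling = 2 * dual_clock / single_clock  # assumes perfect dual-GPU scaling
print(round(scaling, 2))  # 1.5 -> ~1.5x single-card performance
```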
AMD is set to launch its next-gen Vega graphics cards in May, with an event just before Computex - just as the Polaris-based Radeon RX 400 series cards were unveiled in Macau, just before Computex 2016. If you haven't seen my '5 reasons why AMD's next-gen Vega is going to kick ass' article, you should - and then come back here to enjoy the rest.
We're hearing that AMD will launch its enthusiast SKU of the Vega 10-based GPU, powered by 8GB of HBM2 memory. 8GB might not sound like much, but it will be more impressive than it seems - I have already teased a little about High Bandwidth Cache (HBC), which is a large part of how AMD is going to dominate the GPU game in 2017, and beyond.
As for pricing, I think we're going to be looking at around $799-$899 for the Vega 10 graphics card with 8GB of HBM2, which should slot right next to my expected $899 pricing on the GTX 1080 Ti from NVIDIA. We should expect reference cards from AMD that are deliciously sexy, and very powerful - offering 4K 60FPS performance from a single GPU.
Vega 10 Specifications
This is where AMD should fight against NVIDIA in the GTX 1070, GTX 1080 and upcoming GTX 1080 Ti.
GPU: 14nm Vega 10 (64 NCUs)
Performance: 12 TFLOPs of single precision performance (750 GFLOPS of double precision)
GPU clock speeds: 1465MHz on reference, 1600MHz+ on AIB partner cards
RAM: 8-16GB of HBM2 (512GB/sec bandwidth)
PCIe: PCIe 3.0 x16
Release: May/June 2017
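The rumored specs above hang together nicely - a quick cross-check, assuming 64 stream processors per NCU (as in previous GCN compute units) and 2 ops per clock via FMA:

```python
# Cross-checking the rumored Vega 10 numbers.
ncus = 64
sp_per_ncu = 64      # assumption: 64 SPs per NCU, as in prior GCN CUs
ref_clock_hz = 1465e6

sp_total = ncus * sp_per_ncu
tflops = sp_total * 2 * ref_clock_hz / 1e12  # 2 ops/clock via FMA
print(sp_total, round(tflops, 1))  # 4096 SPs, ~12.0 TFLOPs

# The quoted 750 GFLOPS of double precision implies a 1/16 FP64 rate
print(tflops / 16 * 1000)  # ~750 GFLOPS
```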
CES 2017 - Last month when AMD unveiled their ambitious new Vega plan, I had to decompress half of it over weeks - because there's just so much. During CES, I had some one-on-one time with various RTG members, and began thinking of the future of Radeon.
I plan a series of articles on Vega, but we need to split them into a few different parts - where I want to talk more about High Bandwidth Cache, and what it could mean not just for Vega, but for the future of gaming.
More Isn't Necessarily Better
We have been stuck in this world of 'more is better' and that the larger the amount of VRAM on a graphics card, the better it is. I know that's not how it works, but after 10 years in retail IT sales selling graphics cards, custom gaming PCs and everything in between - most consumers think 'higher numbers = better/faster'.
It has been ingrained into us from a very young age: the more MHz a CPU had, the faster it was. The more GPU cores a graphics card has, the faster it is. The more VRAM a graphics card has, the more it can handle - but VRAM is a tricky thing.
CES 2017 - AMD hasn't talked about its upcoming Radeon RX 500 series graphics cards just yet, but that doesn't mean its partners aren't putting the new GPUs in their products yet - like Samsung, during CES.
Samsung was teasing its Odyssey series of notebooks at CES, with options of the NVIDIA GeForce GTX 1070 and AMD Radeon RX 570. There is an option for the RX 570 with 4GB or 8GB, which follows Lenovo's recent Y520 featuring Radeon RX 500 series mobility graphics.
It looks like the Radeon RX 500 series is a rebranding of the Radeon RX 400 series cards, with someone visiting AMD's booth at CES and writing on Reddit: "The card IS an OEM version of the 470. I asked an AMD rep at their booth about it. The 500 series is an OEM rebadge of the 400 series, like how the 8000 series was to the 7000 series".
Some expected it during CES, some expected it before - but while AMD stole the GPU show at CES with its big Vega unveiling event, the GeForce GTX 1080 Ti was nowhere to be seen.
The current rumor has NVIDIA revealing the GeForce GTX 1080 Ti on March 10, at PAX East. I've had my own sources say something similar: "it's coming very soon". The source of the PAX East rumor is an NVIDIA AIB partner employee, and a release of the GTX 1080 Ti at PAX East could be a big deal - an event for gamers, about gaming.
You should be able to buy GTX 1080 Ti graphics cards from all major AIB partners at launch, so don't worry about it being a limited launch with only Founders Edition cards. NVIDIA knows the GTX 1080 Ti is going to be a popular card no matter the cost, and gamers will be throwing down their hard earned cash instantly.
Are you pumped? Have you been waiting it out for the GTX 1080 Ti?
CES 2017 - EVGA went through some issues late last year with their GeForce GTX 1080 FTW graphics cards, which suffered thermal problems directly after launch. They were quick to release BIOS updates and new thermal pads to fix the issue, and now that's a thing of the past.
EVGA has unveiled their new GeForce GTX 1080 FTW2 graphics card at CES 2017, which should rock a new custom PCB.
It doesn't look much different to the original FTW cards, but with much-improved QA, I'm guessing.
CES 2017 - Vega 10 isn't even here yet, but we've seen engineering samples with 8GB of HBM2 on the 14nm process - but now we're hearing about Vega 20, which is due out in 'second half 2018'.
Vega 20 will reportedly rock 16-32GB of HBM2 with up to 1TB/sec of memory bandwidth; it'll be built on the not-even-here-yet 7nm process, and has 'xGMI support for peer-to-peer GPU communication'.
Not only that, but Vega 20 will feature PCIe 4.0 x16 (up from the PCIe 3.0 we've come to know and love) and it'll consume between 150W and 300W of power. We don't know what to expect performance wise, but I'm expecting Vega 20 to kick some serious ass.
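For what it's worth, that 1TB/sec figure lines up with a four-stack HBM2 configuration - a sketch under assumed numbers (AMD hasn't confirmed the stack count or per-pin data rate):

```python
# How Vega 20 could hit 1TB/sec: four HBM2 stacks, each with a
# 1024-bit interface at an effective 2Gbps per pin (assumptions --
# neither the stack count nor the data rate is confirmed).
stacks = 4
bus_bits_per_stack = 1024
rate_gbps = 2

total_gb_s = stacks * bus_bits_per_stack * rate_gbps / 8
print(total_gb_s)  # 1024.0 GB/s, i.e. 1TB/sec
```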