Samsung was the first to unveil its new 16Gb GDDR6 chips, which offer blistering per-pin speeds of up to 18Gbps, with SK Hynix joining the fray with its announcement of 8Gb GDDR6.
SK Hynix's GDDR6 will top out at 14Gbps, compared to the maximum 18Gbps on offer from Samsung. Still, 14Gbps works out to a huge 672GB/sec of memory bandwidth on a 384-bit memory bus, which sounds suspiciously like what we'd find on a next-gen TITAN X from NVIDIA.
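The arithmetic behind those numbers is straightforward: peak bandwidth is the per-pin rate multiplied by the bus width in bits, divided by eight to convert bits to bytes. A quick sketch (the function name is ours, purely illustrative):

```python
# Back-of-the-envelope GDDR6 bandwidth math (illustrative only):
# peak bandwidth (GB/s) = per-pin rate (Gbps) * bus width (bits) / 8
def gddr_bandwidth_gbs(pin_rate_gbps: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/sec for a given per-pin rate and bus width."""
    return pin_rate_gbps * bus_width_bits / 8

# SK Hynix's 14Gbps chips on a 384-bit bus:
print(gddr_bandwidth_gbs(14, 384))   # 672.0 GB/sec
# Samsung's 18Gbps chips on the same 384-bit bus:
print(gddr_bandwidth_gbs(18, 384))   # 864.0 GB/sec
```

The same formula scales to any bus width, which is why a hypothetical 384-bit card with Samsung's 18Gbps chips would land at 864GB/sec.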
Samsung has announced it is officially producing 16-gigabit GDDR6 modules, something that ramps bandwidth up to an incredible 18Gbps, up from the 11Gbps on the GTX 1080 Ti and its 11GB of GDDR5X.
Samsung's new GDDR6 modules are made on their industry-leading 10nm process, with Jinman Han, senior vice president, Memory Product Planning & Application Engineering at Samsung Electronics explaining: "Beginning with this early production of the industry's first 16Gb GDDR6, we will offer a comprehensive graphics DRAM line-up, with the highest performance and densities, in a very timely manner".
He continued: "By introducing next-generation GDDR6 products, we will strengthen our presence in the gaming and graphics card markets and accommodate the growing need for advanced graphics memory in automotive and network systems".
Samsung will be making the new GDDR6 modules on its 10nm-class process with up to 16Gb density, up from the 20nm-based 8Gb GDDR5 chips, with GDDR6 cranking pin speeds up to 18Gbps, which works out to 72GB/sec of bandwidth per chip across its 32-bit interface.
Samsung says that thanks to an "innovative, low-power circuit design, the new GDDR6 operates at 1.35V to lower energy consumption approximately 35 percent over the widely used GDDR5 at 1.55V. The 10nm-class 16Gb GDDR6 also brings about a 30 percent manufacturing productivity gain compared to the 20nm 8Gb GDDR5".
We should expect one of the first graphics cards in the world to use GDDR6 to be NVIDIA's upcoming Ampere-based cards, which could be unveiled as soon as GTC 2018 in May.
Another massive crypto mining boom has rolled through, courtesy of the huge spotlight being blasted onto cryptocurrency (with all the alts bleeding right now) and, even more so, onto crypto mining.
Most crypto miners will look at the Radeon RX 570/RX 580 or GeForce GTX 1060/1070/1070 Ti, but with most of those being scooped up by miners, things aren't looking good. The prices of all GTX 10 series cards have risen, with the GTX 1070 Ti going for around $1000 on Amazon while the higher-end GTX 1080 Ti is $1050 at its cheapest, and up to $1600 for some of the custom GTX 1080 Ti cards from EVGA. Unbelievable.
This means you can get yourself NVIDIA's swanky TITAN Xp Star Wars Collector's Edition graphics card for $1500, $100 cheaper than a custom GTX 1080 Ti that it quite easily beats. Even for miners shelling out for cards at this price, you're better off with the TITAN Xp, which pushes 42MH/s mining Ethereum, compared to 35-38MH/s from the GTX 1080 Ti.
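For a miner, the deciding figure is price per unit of hashrate. A rough comparison using the street prices and Ethereum hashrates quoted above (purely illustrative, since these prices fluctuate daily):

```python
# Rough price-per-hashrate comparison using the street prices and
# Ethereum hashrates mentioned above (illustrative only, prices move daily).
cards = {
    "TITAN Xp (Star Wars CE)": (1500, 42),  # (price in USD, MH/s)
    "GTX 1080 Ti (custom)":    (1600, 38),
}

for name, (price, mhs) in cards.items():
    print(f"{name}: ${price / mhs:.2f} per MH/s")
```

At those prices the TITAN Xp comes in around $35.71 per MH/s versus roughly $42.11 for the $1600 custom GTX 1080 Ti, which is the point being made above.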
AMD isn't left unscathed from the massive GPU price hike, with prices of the Radeon RX Vega 64 skyrocketing to $1600 from XFX and SAPPHIRE's stock RX Vega 64 costs $2100. The lower-end RX 580s are going for above $600, while the GTX 1060 has reached the heights of $500 and above. This is crazy.
It also doesn't help that AMD can't compete in the high-end GPU game, leaving NVIDIA as the only source of high-end GPUs while stock depletes quickly. AMD could've helped by having the Radeon RX Vega succeed against the GTX 1080 Ti, but instead we're seeing the GTX 1080 Ti become ridiculously expensive as retailers cash in on the lack of GPUs.
After going on a Twitter blast a few days ago with VideoCardz where I said I heard early rumbles of Intel working on its own next-gen discrete GPU, rumors are now floating out that Intel is indeed working on their own GPU.
No GPU architect = delays. No money = who knows what will happen. Intel will make a new GPU with Radeon technology on their fab and they have the $$$ oh and Raja.— anthony256 [not] @ CES :D (@anthony256) January 12, 2018
Ashraf Eassa from The Motley Fool is reporting that Intel is working on two new GPU solutions codenamed Arctic Sound and Jupiter Sound, with the former being a 12th-generation discrete GPU and the latter a 13th-generation discrete GPU. Intel's Gen 9 graphics are found on Kaby Lake and Gen 9.5 on Coffee Lake, while Gen 10 will arrive with Cannon Lake soon.
We should expect them sometime after 2020, and with RTG boss Raja Koduri now on Intel's team, it makes sense. NVIDIA is fighting the good fight with its GeForce lineup and virtually no competitor in the GTX 1080-and-above space, but Intel could soon enter as a big competitor with its own Radeon-powered discrete GPU, which does have me wondering: what's the future of Radeon under AMD if Intel can do better with Radeon GPU tech than AMD can?
NVIDIA is reportedly preparing some new GeForce GTX 1050/GTX 1050 Ti Max-Q products, which would really hit the mainstream gaming laptop market and crush everything AMD is hoping to do in the space with their recently-revealed Radeon RX Vega M GL product.
If you remember, Intel unveiled its new CPUs with Radeon RX Vega M technology inside, so NVIDIA shifting quickly with a GTX 1050 Ti Max-Q makes total sense. NotebookCheck has predicted clock speeds for the purported GTX 1050 Ti Max-Q designs, expecting a GPU boost clock of 1417MHz, compared to the 1392MHz of the desktop GTX 1050 Ti, the 1620MHz of the full mobile GTX 1050 Ti, and the 1455MHz of the desktop GTX 1050.
NVIDIA has one of the best single graphics cards on the market with the Tesla V100, a card that costs a whopping $8000 and isn't for gamers or even most people on the market. It's a card destined for workstations and servers, for AI and deep learning workloads - and strictly not for mining.
That however doesn't stop people from testing out these graphics cards for mining, with BuriedONE Cryptomining putting NVIDIA's Tesla V100 to work on various crypto mining adventures, with Ethereum mining hitting 94MH/s. Considering that an overclocked TITAN Xp can achieve somewhere around 40-42MH/s and an overclocked Radeon RX Vega 64 can do anywhere between 38-42MH/s, this is a huge achievement for the Tesla V100.
NVIDIA's super-fast Tesla V100 rocks 16GB of HBM2 that has memory bandwidth of a truly next level 900GB/sec, up from the 547GB/sec available on the TITAN Xp, which costs $1200 in comparison. AMD trails behind with 483GB/sec of bandwidth with its 8GB of HBM2 on the Radeon RX Vega 64.
If you watch the entire video, they go through a bunch of different cryptocurrencies and show you how good the $8000 card is at mining various crypto.
As for BuriedONE Cryptomining, I'll be keeping a close eye on their Tesla V100 adventures as they're soon to be testing 8 x Tesla V100s... yeah, 8 of them. Insanity. I love it.
CES 2018 - AMD hasn't said much more about what's going on with the desktop Radeon GPU side of things since the departure of RTG boss Raja Koduri, but Scott Wasson (the owner and ex-journo of The Tech Report) has worked for AMD for over a year now and was at CES 2018 where he had some things to say about Radeon, and Vega.
During a video interview with PCGamesHardware.de, Wasson said that Vega 10 is capable of being used in a gaming notebook, and seemed to tease the news during CES. Wasson said: "I can't pre-announce products for our partners. It is possible to take a Vega 10 GPU and put it into a notebook. So we'll have to see".
If we take into consideration just how damn hot the Radeon RX Vega 56 and 64 get in reference form, I can't see how in the hell laptop manufacturers will be able to use a full Vega 10 + 8GB HBM2 when the desktop cards require 250-400W of power. Even if it's heavily throttled, it would lose to a GTX 1060- or GTX 1070-powered gaming notebook.
Samsung has just announced its latest and greatest advancements in HBM2 technology at CES, something the company is calling "Aquabolt". This new HBM2 is much faster than the first spins of HBM2, where we're looking at transfer speeds of an insane 2.4Gbps per pin, in 8-Hi height (8GB) stacks, which should see up to 32GB on next-gen graphics cards.
The 8-Hi stacks might sound weird, but when there are four of them on a graphics card we're looking at 32GB of HBM2. As for bandwidth, we're looking at around 300MB/sec per pin, which on a 1024-bit memory bus should provide around 307GB/sec per package, times four bringing us to a crazy 1.2TB/sec of memory bandwidth.
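Working those numbers through: 2.4Gbps per pin across HBM2's 1024-bit interface gives roughly 307GB/sec per stack, and four stacks give both the 32GB capacity and the ~1.2TB/sec aggregate figure. A quick sketch of the chain (variable names are ours, purely illustrative):

```python
# HBM2 "Aquabolt" bandwidth, from per-pin rate to per-package and total
# (illustrative arithmetic based on the figures above):
pin_rate_gbps = 2.4      # per-pin transfer rate
bus_width_bits = 1024    # HBM2 interface width per stack
stacks = 4               # four 8-Hi (8GB) stacks on one card

per_package_gbs = pin_rate_gbps * bus_width_bits / 8  # ~307.2 GB/sec per stack
total_gbs = per_package_gbs * stacks                  # ~1228.8 GB/sec (~1.2TB/sec)
capacity_gb = 8 * stacks                              # 32GB of HBM2

print(per_package_gbs, total_gbs, capacity_gb)
```

Note that 2.4Gbps per pin is 300MB/sec per pin once converted to bytes, which is where the per-pin figure above comes from.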
Jaesoo Han, executive vice president, Memory Sales & Marketing team at Samsung Electronics explains: "With our production of the first 2.4Gbps 8GB HBM2, we are further strengthening our technology leadership and market competitiveness. We will continue to reinforce our command of the DRAM market by assuring a stable supply of HBM2 worldwide, in accordance with the timing of anticipated next-generation system launches by our customers".
With all of the hoopla surrounding the Spectre and Meltdown security bugs found in consumer CPUs, setting Intel pretty much on fire, how are the other companies faring? Well, NVIDIA is doing fine.
NVIDIA's revised security bulletin has provided some insight, with NVIDIA CEO and co-founder Jen-Hsun Huang saying: "Our GPUs are immune, they're not affected by these security issues. What we did is we released driver updates to patch the CPU security vulnerability. We are patching the CPU vulnerability the same way that Amazon, the same way that SAP, the same way that Microsoft, etc are patching, because we have software as well".
Huang added: "I am absolutely certain that our GPU is not affected".
NVIDIA's own security bulletin also says: "We believe our GPU hardware is immune to the reported security issue. As for our driver software, we are providing updates to help mitigate the CPU security issue".
GALAX has surprised with a quick tease of its upcoming GeForce GTX 1070 Ti HOF graphics card, which will include its unique one-click button to increase the OC on the card.
GALAX will be deploying its chunky-but-cool triple-fan RGB cooling solution, with 8+8-pin PCIe power connectors (down from the 8+8+8-pin PCIe connector rig on the GTX 1080 Ti HOF variant).
The company even teased the card in SLI with Hyper Boost OC enabled, which is very impressive - and that SLI bridge, wow.