The final specification for GDDR5X, the successor to GDDR5, has been decided. While it doesn't allow for quite as much bandwidth as HBM or HBM2, it's a technology that's far easier to implement than either, requiring fewer modifications to the GPU design.
GDDR5X allows for up to 14Gbps of bandwidth per pin, and because it's based so heavily on its predecessor, it's pin-compatible, though heavily revised internally to deliver real advances in memory speed and bandwidth without creating something entirely new. JEDEC and Micron have done this by doubling the prefetch, mandating the use of phase-locked loops and delay-locked loops, and transmitting data at quadruple the actual clock speed. In other words, it's fast. For comparison, GDDR5X running at the top-end 14Gbps on a 256-bit bus could potentially provide 448GB/s of total bandwidth, which isn't too far off the 512GB/s memory bandwidth of the R9 Fury X.
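The 448GB/s figure above falls out of simple arithmetic. A minimal sketch, assuming the standard formula of per-pin data rate times bus width (the 256-bit bus is an assumption matching the quoted number, not something the spec mandates):

```python
# Back-of-the-envelope check of the 448GB/s figure quoted above.
# Assumes a 256-bit memory bus; GDDR5X's 14Gbps is the effective
# per-pin data rate after the quad-data-rate signaling.

def memory_bandwidth_gbs(per_pin_gbps: float, bus_width_bits: int) -> float:
    """Total bandwidth in GB/s = per-pin rate (Gb/s) * bus width (bits) / 8."""
    return per_pin_gbps * bus_width_bits / 8

print(memory_bandwidth_gbs(14, 256))  # 448.0 GB/s, matching the article
print(memory_bandwidth_gbs(7, 256))   # 224.0 GB/s for 7Gbps GDDR5 on the same bus
```

The same formula shows why GDDR5X doubles what plain GDDR5 manages on an identical bus: only the per-pin rate changes.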
Micron, one of the leading manufacturers working on GDDR5X, estimates around a 10% decrease in power consumption at the same VRAM size. The specification supports per-chip densities from 4Gb up to 16Gb. The reason for this new specification is to address every segment of the market, especially those where HBM2 might not be economical, despite AMD's efforts to implement HBM across its entire GPU lineup. Now all GPUs can enjoy a healthy bandwidth increase for very little, if any, cost increase.
AMD might be looking to lower the price of the vanilla R9 Fury, or so a rumor seems to suggest, coming shortly after the company lowered the price of the Nano in response to customer demand and the market.
KitGuru says one of its sources in the retail chain is privy to an upcoming price cut on the R9 Fury. The source doesn't know how large the cut will be, only that it's coming in the next few weeks, and even suspects the price cut might include the Fury X as well.
This is great news because the Fury is a great card that can provide a good experience at 1440p, and even at 4K if the visual quality is turned down some. A price reduction would make it more competitive against the GTX 980, the card it actually competes against. If this holds true, it'll be a good move by AMD.
Now that HBM2 is beginning to flow into the market, thanks to Samsung making 4GB HBM2-based DRAM, NVIDIA is getting confident with Pascal - with the latest rumor stating that the company will unveil its next-gen GPUs in the first half of this year, with availability to follow in 2H 2016.
My sources tease that both AMD and NVIDIA will have next-gen GPUs prepared for June/July, but I've got a feeling NVIDIA will introduce a next-gen enthusiast GPU at its GPU Technology Conference in early April. NVIDIA is reportedly already playing around with the 16nm-based Pascal GPUs internally, but we should expect GDDR5X- and HBM2-powered offerings, with a GeForce GTX Titan X successor to be unveiled at GTC 2016. We might see the new Titan X with 16GB of HBM2, and possibly a professional-grade Tesla/Quadro GPU with 32GB of HBM2 teased, too.
As for the GP100, I don't think NVIDIA will unveil the GeForce GTX 980 Ti successor just yet, as the GTX 980 Ti is still one of the best video cards you can buy. We should see a Titan X successor unveiled first, powered by HBM2, followed by a successor to the GTX 980, powered by GDDR5X. The HBM2-powered offerings will be able to pack 32GB of HBM2 and offer up to 1TB/sec of memory bandwidth, up from the 336GB/sec of the GeForce GTX 980 Ti and its GDDR5.
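The 1TB/sec figure checks out against the HBM2 spec. A quick sanity check, assuming the JEDEC numbers of a 1024-bit interface per stack at up to 2Gbps per pin, and a four-stack configuration (the stack count is my assumption for a 32GB card, not something stated in the article):

```python
# Rough sanity check of the HBM2 bandwidth figure above.
# Each HBM2 stack: 1024-bit interface, up to 2Gbps per pin (JEDEC spec).
# Four stacks assumed for a 32GB configuration.

def stack_bandwidth_gbs(bus_width_bits: int, per_pin_gbps: float) -> float:
    """Bandwidth of one HBM2 stack in GB/s."""
    return bus_width_bits * per_pin_gbps / 8

per_stack = stack_bandwidth_gbs(1024, 2)  # 256.0 GB/s per stack
total = 4 * per_stack                     # 1024.0 GB/s, i.e. ~1TB/s with four stacks
print(per_stack, total)
```

That's roughly triple what the 980 Ti's 384-bit GDDR5 setup delivers today.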
GPUs are fantastic tools for scientific computational work. They're effective at highly parallel math, doing it far faster than any CPU could alone. And now the GPU is going open-source with Nyami, a new architecture developed by researchers at Binghamton University.
Timothy Miller and his colleagues have finally been able to test their open-source GPU design, called Nyami. It's essentially a GPGPU-focused design that borrows a lot from Intel's Larrabee (Xeon Phi) while still being, at its heart, a GPU.
This is the first open-source GPU design that can be both modified and synthesized. The architecture has a measure of modularity, so any aspiring researcher or scientist with the expertise can modify it to their heart's content. This is revolutionary because hardware and software can now reach a nexus and be developed with the help of the open-hardware community, which is a well-supported community. The problem has always been getting an architecture started, a highly technical engineering challenge. But now that first part is solved, and we might see some great scientific progress that could even spill over to consumer GPUs one day. Just keep in mind this isn't something you'll be playing Assassin's Creed Redundancy on.
There's a rumor floating around the Internet that seems to suggest NVIDIA is very close to releasing its top-end mobile GPU, the GeForce GTX 980MX, as well as its little brother, the GTX 970MX. Keep in mind that these are strictly mobile parts, not related to the full-fat GTX 980 that's being stuffed into laptops.
There doesn't appear to be any actual source to confirm the imminent release, though the rumor is adamant that NVIDIA intends to release these high-end mobile parts soon. These chips should be plenty fast while providing the power efficiency that'll be necessary in thinner laptops.
The GTX 980MX is rumored to have 1664 CUDA cores, 104 texture units, 64 ROPs, and a clock speed of up to 1048MHz on a 256-bit memory bus. That's slightly more CUDA cores than the smaller part, the 980M. Oddly, the TDP is only 25W less than the full-blown 980 laptop variant, at 125W. That's still a lot of power, and you definitely wouldn't be gaming on a notebook powered by this monster without being tethered to the wall.
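If the rumored core count and clock hold up, the 980MX's theoretical compute is easy to estimate with the standard formula for NVIDIA parts: cores times clock times two FLOPs per core per cycle. A hedged sketch using only the rumored figures above, which are unconfirmed:

```python
# Theoretical FP32 throughput for the rumored GTX 980MX.
# Standard formula for Maxwell-class parts: cores * clock * 2 FLOPs/cycle.
# The 1664-core / 1048MHz figures come from the rumor, not confirmed specs.

def fp32_tflops(cuda_cores: int, clock_mhz: float) -> float:
    """Peak single-precision throughput in TFLOPS."""
    return cuda_cores * clock_mhz * 1e6 * 2 / 1e12

print(round(fp32_tflops(1664, 1048), 2))  # ~3.49 TFLOPS
```

That would put it within striking distance of a desktop GTX 970 on paper, which explains the high 125W TDP.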
AMD's new Polaris chip might have just been caught in the Zauba import/export database, if we can truly believe what other sources have decoded while reading these manifests.
The per-unit price of this particular "printed circuit board assembly for personal computer (video/video card)" lines up with other AMD shipments in the past. It also indicates that the bigger Polaris die, which we were able to see at CES, might be well into production. That's good news for us enthusiasts.
But then again, decoding the serial numbers and the entire manifest is very difficult, and even though some might claim that they know that these particular shipments are indeed for AMD and are a chip with a particular architecture, we don't actually know. But it is exciting that at least production appears to be marching on as you read this. Just don't forget the tablespoon of salt. And yeah, it's okay to be excited too. I know I am.
Whether you need to upgrade your outdated Radeon card or want to ensure your PC is ready for the incoming VR boom, we've found a duo of GPU deals that will fit the bill nicely. Today we have two different flavors of AMD's Radeon R9 390 discounted over at Newegg, both of which offer impressive performance with 4K resolution support.
These sales are complemented by mail-in rebates, which is par for the course for Newegg sales. First up is a PowerColor Radeon R9 390 for just $268 after a $20 mail-in rebate (regular price $288), and the eggmen are tossing in a free $10 gift card to boot, though I don't think you can use the card for this purchase, only future ones. The PowerColor R9 390 sports 1x HDMI, 1x DisplayPort, and 2x DVI, and requires 6-pin and 8-pin connectors with a 750W PSU.
Next up is an XFX Radeon R9 390 dropped down to $274 after a $30 MIR (original price $304). This card is similar to the PowerColor model, featuring a single HDMI and DisplayPort accompanied by two DVI ports. The power requirements are the same, with a minimum 750W PSU and the 6-pin and 8-pin PCIe power connectors.
During NVIDIA's GPU Technology Conference last year, NVIDIA unveiled its new NVLink interconnect that would find its BFF in their upcoming Pascal architecture.
At the time, we wrote that NVLink had 5x the bandwidth of PCIe 3.0, with NVLink opening up the possibilities for 8-way GPU setups, compared to the limit of 4-way SLI that we have now. Well, AMD is now talking about its upcoming next-gen coherent fabric, which will offer an insane 100GB/sec of throughput across multi-GPU setups. AMD has said that its new APUs will also be supported, with compute machines set to benefit greatly, too.
The big question for AMD is still in the air - but RTG boss Raja Koduri has said that he can't reveal if memory coherency and sharing between the GPUs and APUs will happen with the new interconnect. It would make sense to see it happen, but I'm sure AMD is rolling towards a big reveal in the near future with its Polaris architecture.
During the chat, Koduri said: "We have two versions of these FinFET GPUs. Both are extremely power efficient. This is Polaris 10 and that's Polaris 11. In terms of what we've done at the high level, it's our most revolutionary jump in performance so far. We've redesigned many blocks in our cores. We've redesigned the main processor, a new geometry processor, a completely new fourth-generation Graphics Core Next with a very high increase in performance. We have new multimedia cores, a new display engine".
He added: "In summary, it's fourth generation Graphics Core Next. HDMI 2.0. It supports all the new 4K displays and TVs coming out with just plug and play. It supports DisplayPort 1.3, the latest specification. It's very exciting 4K support. We can do HEVC encode and decode at 4K on this chip. It'll be great for game streaming at high resolution, which gamers absolutely love. It takes no cycles away from games. You can record gameplay and still have an awesome frame rate. It'll be available in mid-2016".
So what have we taken away from this? We now know AMD is calling its new GPUs 'Polaris 10' and 'Polaris 11'. We also know HDMI 2.0 is arriving with the fourth-gen GCN core, which brings AMD's enthusiast cards up to where NVIDIA has been for around 18 months now. 2016 is going to be the most exciting year for GPUs, especially with HBM2 arriving with up to 32GB and 1TB/sec of memory bandwidth.
NVIDIA has dominated the Steam hardware survey for December 2015, with its GeForce GTX 970 topping the charts with 4.89% of gamers on Steam using the GM204-based video card.
Coming in second was Intel HD Graphics 4000 with 4.82%, and then three more NVIDIA cards in third, fourth and fifth position. The GeForce GTX 760, GTX 750 Ti and GTX 960 were third, fourth, and fifth place, respectively. AMD's Radeon HD 7900 series came in 9th position, with 2.05% of gamers using an HD 7900 series GPU.
It's an interesting position for NVIDIA to be in, considering the hoopla the company went through with the 3.5GB memory debacle last year. The GTX 970's memory is partitioned so that only 3.5GB of its 4GB runs at full speed, with performance-related issues once games use more than 3.5GB of framebuffer.