Video Cards & GPUs News - Page 303
GDC 2016 - AMD was all systems go at its Capsaicin event during the Game Developers Conference, unveiling its new dual-GPU video card, the Radeon Pro Duo. The company also talked about its massive commitment to VR, DirectX 12, its next-gen Polaris architecture, and more.
AMD was super confident during this event, where it had plenty of cards to play in its battle with NVIDIA. The laser-focused commitment to VR has me excited, as I believe that being a VR-focused company this early on will only benefit Radeon Technologies Group and AMD. The company has made partnerships with both Oculus and HTC, for the Rift and Vive, respectively. AMD has gone all-in with VR to the point of having its own APU inside a headset, partnering with Sulon on the Sulon Q.
The company also had working 14nm Polaris silicon at the show, driving HDR-enabled TVs with its next-gen cards. The Radeon Pro Duo was used on stage during the demonstration, requiring 3 x 8-pin PCIe power connectors to power the dual-GPU video card, which rocks 8GB of HBM (4GB per Fiji GPU).
AMD has released its new Radeon Software Crimson Edition 16.3.1 hotfix drivers, which add Crossfire support to Need for Speed, and an updated CF profile for Hitman.
Not only that, but the 16.3.1 release also fixes issues with games running on Unreal Engine 4. V-Sync is no longer automatically enabled when running DX12 applications, and frame rates are no longer tied to the display's refresh rate with the 16.3.1 drivers.
Various other issues have also been resolved, including flickering for Crossfire users in The Division, and graphical corruption on character death animations in Crossfire when running League of Legends.
There are only weeks left until NVIDIA's GPU Technology Conference, where we'll be formally introduced to the Pascal architecture and everything that makes it tick. We've already been teased about the purported GeForce GTX 1080, GTX 1080 Ti and Titan X successor, but now we have some early performance numbers on the purported Pascal GPUs.
The performance numbers come from WCCFTech, who have spotted an unidentified NVIDIA video card with 7680MB (7.5GB) of RAM - 512MB short of 8GB - which should find itself on the GP100-based GeForce GTX 1080 Ti (that's what we're going to call it for now, though I don't think NVIDIA will use the GTX 1080 moniker for the Pascal range).
Now, the card hit 9038 in 3DMark 11 - though with an Intel Core i3-2100 processor holding it back, it lost out to various other video cards - but there were more details in the 3DMark results worth looking at. Firstly, the unidentified NVIDIA video card had 8GB of GDDR5 clocked at 8GHz - this is noteworthy, as there are no 8GB cards on the market with performance close to the GTX 980 Ti.
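That 8GHz effective memory clock tells us something about bandwidth. As a rough sketch (the bus width is my assumption - a 256-bit bus is typical for an 8GB GDDR5 card, but the leak doesn't state it):

```python
# Peak GDDR5 bandwidth: bytes transferred per clock x effective transfer rate.
# The 256-bit bus width is an assumption; the leak only mentions 8GB at 8GHz.
def memory_bandwidth_gbs(bus_width_bits, effective_clock_ghz):
    """Peak memory bandwidth in GB/s."""
    return (bus_width_bits / 8) * effective_clock_ghz

print(memory_bandwidth_gbs(256, 8.0))  # 256.0 GB/s
```

If those assumptions hold, that would put the card around 256GB/s - below the GTX 980 Ti's 336GB/s, which is one reason GDDR5X or a wider bus would make sense for a true 980 Ti successor.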
As we get closer to NVIDIA's GPU Technology Conference in early April, we're finding out more details on the next-gen Pascal architecture, and what cards will purportedly arrive under the new 16nm process. Now remember, these are just leaked specs on the purported cards - the specs could change, and so could the naming system NVIDIA uses on the next-gen cards.
According to the latest rumors, NVIDIA will launch the new cards under names we'll use for now (I seriously doubt they'll stick): GeForce GTX 1080, GTX 1080 Ti and a new Titan X successor. Starting with the GTX 1080, which will feature the GP104 core, we'll see 4096 CUDA cores (a 100% increase over the 2048 CUDA cores on the GM204-based GTX 980).
We can expect a near doubling of texture units, ROPs and memory bandwidth, along with 6GB of GDDR5 (up from 4GB on the GTX 980). The GeForce GTX 1080 Ti is even more powerful, with 5120 CUDA cores, 320 texture units and 160 ROPs - good for another 28% in TFlops performance. The GTX 1080 Ti will also reportedly rock 8GB of GDDR5 (I think we'll see GDDR5X) on a 512-bit memory bus.
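We can sanity-check those TFlops claims from the core counts alone. Theoretical FP32 throughput is 2 FLOPs (one fused multiply-add) per CUDA core per clock; the 1.0GHz clock below is a placeholder assumption, since the leak doesn't include clock speeds:

```python
def fp32_tflops(cuda_cores, clock_ghz):
    """Theoretical FP32 throughput: 2 FLOPs (one FMA) per core per cycle."""
    return 2 * cuda_cores * clock_ghz / 1000

# Core counts from the rumored specs; 1.0GHz is a placeholder clock.
gtx_1080 = fp32_tflops(4096, 1.0)      # 8.192 TFlops at 1GHz
gtx_1080_ti = fp32_tflops(5120, 1.0)   # 10.24 TFlops at 1GHz
print(f"{(gtx_1080_ti / gtx_1080 - 1) * 100:.0f}% more")  # 25% more at equal clocks
```

Note that at equal clocks, the extra cores alone only buy 25%; the quoted 28% would imply the GTX 1080 Ti is also clocked slightly higher than the GTX 1080.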
MSI has announced its new GeForce GTX 950 models won't require any additional PCIe power connectors, as they'll be powered by the PCIe port itself, consuming a maximum of 75W.
There are two new models from MSI: the MSI GeForce GTX 950 2GD5T OCV3 and the MSI GeForce GTX 950 2GD5T OCV2. Both feature the GM206 GPU, which rocks 768 stream processors, 48 texture units, 32 ROPs and a 128-bit memory interface with 2GB of GDDR5 RAM. Both cards have their GPUs clocked at 1076MHz, with boost clocks of 1253MHz.
The difference between the cards comes down to cooling and length. The 2GD5T OCV3 model sports a dual-fan design and a longer PCB that MSI wants to see installed in a traditional desktop PC, while the shorter 2GD5T OCV2 model uses a single fan and is destined for mini-ITX machines.
AMD was once a very prolific force in the mobile, handheld GPU world. And it seems that Raja Koduri, the head honcho in charge of the new Radeon Technologies Group, isn't necessarily ruling out the idea of returning to that field either.
AMD is already well positioned to create custom chips through the deals it has brokered with Sony and Microsoft, not to mention Nintendo. And Raja himself is open to the idea of licensing AMD's IP for use in mobile products. The company doesn't, however, want to actively build its own mobile devices, but if someone else approached it with the idea of integrating an APU outright or Polaris into their own design, it wouldn't at all be out of the question.
The idea is a natural one, given the potential power savings the new Polaris architecture could introduce on all fronts. Polaris 11 (the smallest chip thus far) can already play Star Wars Battlefront at 1080p with a steady 60FPS while consuming only around 35 watts for the GPU itself. So it isn't a stretch to have the new architecture appear in lower-power APUs for, say, tablets, micro-consoles or even phones.
GDC 2016 - AMD unveiled its dual-GPU video card yesterday at its Capsaicin event, but now the company has revealed 3DMark Fire Strike benchmark numbers for the Radeon Pro Duo.
The Radeon Pro Duo destroys the Radeon R9 295X2, AMD's previous dual-GPU champion based on the Hawaii architecture, as well as NVIDIA's current dual-GPU card, the GeForce GTX Titan Z, based on the older Kepler architecture. Keep in mind that NVIDIA never released a dual-GPU card based on Maxwell, but we could see one based on its next-gen Pascal architecture in early April at GTC.
Back to the benchmarks: the Radeon Pro Duo beats the Titan Z and R9 295X2 at all resolutions - 1080p, 1440p and 4K. On the Standard preset, the Radeon Pro Duo beats the Titan Z by 134%; on the Extreme preset it's 148% faster, and on the Ultra preset it beats the Titan Z by 152%.
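To be clear on what "beats by 134%" means here: it's the relative gap between the two scores, so a 134% lead means the Pro Duo scores 2.34x the Titan Z's number. A quick sketch with hypothetical scores (the actual Fire Strike numbers weren't broken out per preset in raw form):

```python
def percent_faster(score_a, score_b):
    """How much faster card A is than card B, as a percentage of B's score."""
    return (score_a / score_b - 1) * 100

# Hypothetical illustrative scores, not the actual Fire Strike results:
print(percent_faster(11700, 5000))  # 134.0 -> A scores 2.34x B
```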
GDC 2016 - AMD has been hinting at its dual-GPU card for the better part of a year, but today is the day we've finally been introduced to it: the Radeon Pro Duo. Until now we expected it to be called the Radeon R9 Fury X2 - it was also teased under the codename Gemini - but AMD landed on the name Radeon Pro Duo.
AMD's new Radeon Pro Duo has 16TFlops of compute performance, compared to the 8.6TFlops on the Radeon R9 Fury X. The Radeon Pro Duo in AMD's words is "guaranteeing the highest level VR experience for developers who love to game", and is "the platform of choice for Crytek's VR First initiative - enabling today's and tomorrow's generation of VR content developers with the most powerful hardware".
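Those TFlops figures line up with the known Fiji silicon. Each Fiji GPU has 4096 stream processors doing 2 FLOPs (one fused multiply-add) per cycle, so we can invert the formula to see what clock speeds the quoted numbers imply:

```python
def implied_clock_mhz(tflops, stream_processors):
    """Invert FP32 throughput (2 FLOPs per SP per cycle) to get the clock."""
    return tflops * 1e12 / (2 * stream_processors) / 1e6

# Fury X: 8.6 TFlops over 4096 SPs -> matches its known 1050MHz clock
print(round(implied_clock_mhz(8.6, 4096)))      # 1050
# Pro Duo: 16 TFlops over two Fiji GPUs (2 x 4096 SPs)
print(round(implied_clock_mhz(16, 2 * 4096)))   # 977
```

So the 16 TFlops figure implies each Fiji GPU on the Pro Duo runs slightly below the Fury X's 1050MHz, which is plausible for keeping a dual-GPU card within its power and thermal budget.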
The Radeon Pro Duo is watercooled, just like its single-GPU cousin, and with the price set at $1499 it won't be for casual gamers or even most enthusiasts - this is a card for the die-hard enthusiasts who need 120FPS+ at 2560x1440 or 60FPS+ at 4K. Beyond that, it's a card built for VR and its 90FPS+ requirement, and developers are going to snap it up for game development thanks to its beefy horsepower.
GDC 2016 - Stardock has some exciting stuff on its hands with Ashes of the Singularity, a DirectX 12-powered game that has an awesome benchmark for mixing AMD and NVIDIA video cards - but it's limited to just one game, for now.
Stardock has teased that it's working on a software solution, something that's part of DirectX 12, that will let gamers use multi-GPU support on DX12 games. Outside of AotS, multi-GPU support through DX12 isn't really a thing, but it could very well be in the near future.
Brad Wardell, Stardock CEO, explained to VentureBeat: "One of the biggest problems with games is that a new video card comes out from AMD and NVIDIA, and they're like [expensive], and you have to make a call. I like my video card. I can play most games on it, and I don't want to spend $800 on some new video card. But imagine, instead, hey, they're having a sale [using my GTX 760 as an example]. Hey, they're having a sale on an AMD 290 for $75. Wouldn't it be cool to put this into your computer and double your performance. You keep this in there [the 760]. You put this in there [the 290], and your games are twice as fast without doing anything else".
GDC 2016 - NVIDIA has just announced its new GameWorks SDK 3.1, with three new technologies included in the release. This includes new techniques for shadows and lighting, as well as two new physical simulation algorithms, released in beta form.
Senior VP of Content and Technology with NVIDIA, Tony Tamasi, explains: "It's our passion for gaming that drives us to tackle the technical problems presented by real-time rendering and simulation. Our GameWorks technologies push the boundaries of what's possible in real-time, enabling developers to ship their games with state of the art special effects and simulations".
As for the three new technologies with GameWorks SDK 3.1, this is what we can expect: