Video Cards & GPUs News - Page 304
AMD was once a very prolific force in the mobile and handheld GPU world. And it seems that Raja Koduri, the head honcho in charge of the new Radeon Technologies Group, isn't necessarily ruling out the idea of returning to that field, either.
AMD is already well positioned to create custom chips through the deals they've brokered with Sony and Microsoft, not to mention Nintendo. And Raja himself is open to the idea of licensing their IP for use in mobile products. They don't, however, want to actively build their own mobile devices, but if someone else approached them about integrating an APU outright, or Polaris, into their own design, it wouldn't be out of the question at all.
The idea is a natural one, given the potential power savings that the new Polaris architecture could introduce on all fronts. Already, Polaris 11 (the smallest chip thus far) can play Star Wars Battlefront at 1080p with a steady 60FPS while consuming only around 35 watts for the GPU itself. So it isn't a stretch to imagine the new architecture appearing in lower-power APUs for, say, tablets, micro-consoles or even phones.
GDC 2016 - AMD unveiled its dual-GPU video card yesterday at its Capsaicin event, but now the company has revealed 3DMark Fire Strike benchmark numbers for the Radeon Pro Duo.
The Radeon Pro Duo destroys the Radeon R9 295X2, AMD's previous dual-GPU champion based on the Hawaii architecture, as well as NVIDIA's only current dual-GPU card, the GeForce GTX Titan Z, based on its older Kepler architecture. Keep in mind that NVIDIA never released a dual-GPU card based on its Maxwell architecture, but we could see one based on its next-gen Pascal architecture in early April at GTC.
Back to the benchmarks: the Radeon Pro Duo beats the Titan Z and R9 295X2 at all resolutions - 1080p, 1440p and 4K. On the Standard preset, the Radeon Pro Duo outperforms the Titan Z by 134%; on the Extreme preset it's 148% faster, and on the Ultra preset it beats the Titan Z by 152%.
GDC 2016 - AMD has been hinting at its dual-GPU card for the better part of a year, but today is the day we've finally been introduced to it: the Radeon Pro Duo. Up until now we thought it would be called the Radeon R9 Fury X2 - it was also teased under the Gemini codename - but AMD landed on the name Radeon Pro Duo.
AMD's new Radeon Pro Duo offers 16TFlops of compute performance, compared to 8.6TFlops for the Radeon R9 Fury X. The Radeon Pro Duo, in AMD's words, is "guaranteeing the highest level VR experience for developers who love to game", and is "the platform of choice for Crytek's VR First initiative - enabling today's and tomorrow's generation of VR content developers with the most powerful hardware".
The Radeon Pro Duo is watercooled, just like its single-GPU cousin, and with the price set at $1499 it won't be for casual gamers or even most enthusiasts - this is a card for the die-hard enthusiasts who need 120FPS+ at 2560x1440 or 60FPS+ at 4K. Beyond that, it's a card built for VR and its 90FPS+ requirement, and developers are going to snap it up for game development thanks to its beefy horsepower.
GDC 2016 - Stardock has some exciting stuff on its hands with Ashes of the Singularity, a DirectX 12-powered game that has an awesome benchmark for mixing AMD and NVIDIA video cards - but it's limited to just one game, for now.
Stardock has teased that it's working on a software solution, built on DirectX 12, that will let gamers use multi-GPU setups in DX12 games. Outside of AotS, multi-GPU support through DX12 isn't really a thing yet, but it could very well be in the near future.
Brad Wardell, Stardock CEO, explained to VentureBeat: "One of the biggest problems with games is that a new video card comes out from AMD and NVIDIA, and they're like [expensive], and you have to make a call. I like my video card. I can play most games on it, and I don't want to spend $800 on some new video card. But imagine, instead, hey, they're having a sale [using my GTX 760 as an example]. Hey, they're having a sale on an AMD 290 for $75. Wouldn't it be cool to put this into your computer and double your performance. You keep this in there [the 760]. You put this in there [the 290], and your games are twice as fast without doing anything else".
GDC 2016 - NVIDIA has just announced its new GameWorks SDK 3.1, with three new technologies included in the release. This includes new techniques for shadows and lighting, as well as two new physical simulation algorithms, released in beta form.
Senior VP of Content and Technology with NVIDIA, Tony Tamasi, explains: "It's our passion for gaming that drives us to tackle the technical problems presented by real-time rendering and simulation. Our GameWorks technologies push the boundaries of what's possible in real-time, enabling developers to ship their games with state of the art special effects and simulations".
As for the three new technologies with GameWorks SDK 3.1, this is what we can expect:
GDC 2016 - Just as I'm getting ready to leave my Airbnb accommodation, pick up my GDC pass, and then head to AMD's Capsaicin event later in the afternoon, word arrives that the company will reportedly tease its Polaris 10 GPU at the event, running a SteamVR benchmark.
The company will be unveiling its new Radeon Pro Duo video card during the event, based on two Fiji GPUs offering 12TFlops of performance - making it the perfect card for VR and 4K gaming. AMD will also demo the next-gen Polaris 10 GPU running Valve's Aperture Science Robot Repair on the HTC Vive Pre headset.
Polaris 10 looks like the same GPU shown off at CES 2016 and the RTG event in Sonoma in December, and it will compete against the likes of the GeForce GTX 950. Polaris 10 will be an entry-level/mainstream part built on the 14nm FinFET process with GCN 4.0 enhancements. As for availability, it should launch in mid-2016, right around the time of Computex - so expect plenty of new entry-level/mainstream laptops powered by Polaris 10.
GALAX has just announced the latest member of its Hall of Fame lineup: the GeForce GTX 980 Ti HOF GOC. The new GTX 980 Ti HOF GOC competes against the likes of EVGA's GTX 980 Ti Kingpin, which has a 14+3 phase power design, and MSI's GTX 980 Ti Lightning, with its 12+3+1 power design.
GALAX has provided 3 x 8-pin PCIe power connectors, allowing the card to draw a total of 525W - for the serious enthusiasts and overclockers out there. The card features a dual-slot design with two fans to keep it cool, and a beautiful white theme. It's longer than usual cards, so it can accommodate the two large 10cm fans that keep the GM200 core and VRMs nice and cool.
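For the curious, here's a quick sketch of where that 525W figure comes from, assuming the standard PCIe spec limits of 150W per 8-pin connector plus 75W from the PCIe x16 slot:

```python
# Rough power-budget sketch for the GTX 980 Ti HOF GOC's claimed 525W ceiling.
# Assumes standard PCIe spec limits: 150W per 8-pin connector, 75W from the slot.
PIN8_WATTS = 150   # max per 8-pin PCIe power connector
SLOT_WATTS = 75    # max delivered through the PCIe x16 slot
connectors = 3     # the card's 3 x 8-pin connectors

total_watts = connectors * PIN8_WATTS + SLOT_WATTS
print(total_watts)  # 525
```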
The heatsink uses five heat pipes to keep the card cool, hidden under a new HOF-branded cooler shroud. The GALAX GeForce GTX 980 Ti HOF GOC has a GPU clock of 1203MHz and a Boost Clock of 1304MHz, while the 6GB of GDDR5 RAM is left at stock speeds.
When the rumors first started flying about NVIDIA's next-gen video cards, I was one of the first to say that the mid-range cards would not use the super-fast HBM2 VRAM, but GDDR5 instead (and, as was later revealed, GDDR5X was on its way). Well, now we're here again, with rumors on NVIDIA's purported GeForce GTX 1080.
The GeForce GTX 1080 will be built around the GP104 GPU that NVIDIA should unveil at its GPU Technology Conference in April, where it should rock 8GB of GDDR5X. The new GDDR5X standard is capable of data rates of up to 14Gbps per pin, well above current GDDR5 technology. We should expect the GeForce GTX 1080 to be unveiled next month, with a shipping date of somewhere in May/June.
ASUS has just unveiled their new GeForce GTX 980 Ti STRIX Gaming Ice video card, something that is water cooled courtesy of a huge water block from Bitspower.
The new GeForce GTX 980 Ti STRIX Gaming Ice from ASUS features the usual GM200 GPU with 2816 CUDA cores, 176 TMUs and 96 ROPs - the GPU is clocked at 1216MHz with a Boost Clock of 1317MHz. There are two profiles on the card, with the gaming profile clocking the GTX 980 Ti STRIX Gaming Ice at 1190MHz, with Boost hitting 1291MHz.
When the ASUS GeForce GTX 980 Ti STRIX Gaming Ice is in its gaming mode, it will use slightly less power, but in OC mode it will suck down everything it can from the huge 14-phase PWM design. Power is delivered through 2 x 8-pin PCIe power connectors, which will let the card draw as much power as it requires in OC mode. The usual 6GB of GDDR5 RAM is clocked at 7.2GHz, compared to the stock 7GHz on most other brands, providing the card with 345.6GB/sec of memory bandwidth.
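That 345.6GB/sec figure follows directly from the memory clock and bus width - a quick sketch, assuming the GM200's standard 384-bit memory interface:

```python
# How the 345.6GB/sec memory bandwidth figure is derived.
# Assumes GM200's standard 384-bit memory bus.
effective_rate_gbps = 7.2   # effective GDDR5 data rate per pin (7.2GHz effective)
bus_width_bits = 384        # GM200 memory interface width

bandwidth_gb_s = effective_rate_gbps * bus_width_bits / 8  # divide by 8: bits -> bytes
print(round(bandwidth_gb_s, 1))  # 345.6
```

At the stock 7GHz used by most other brands, the same math gives 336GB/sec, so the factory memory overclock buys roughly 10GB/sec of extra bandwidth.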
We all want to be able to game on our laptops, even if we don't readily admit it. Wouldn't it be nice to fire up one of our favorite games for a quick spin when we're bored in a hotel room? It's possible, and gaming laptops exist that are both powerful and not that massive, but they're still limited in their abilities. Mobile GPUs aren't exactly the most powerful chips, even if they can provide a good framerate. You can upgrade them, but an MXM module is far more expensive than a typical GPU. It's a problem few were asking to have solved, but a solution has arrived anyway.
AMD is introducing their XConnect technology, which allows any laptop with Thunderbolt 3 to have a discrete GPU connected to it. And there's a huge market for thin and light laptops out there, because they're far more convenient to lug around. External graphics is a sound idea, too: when traveling, the external enclosure can be packed separately and safely in another piece of luggage, set up only when you're at your destination or when you really absolutely must have that extra GPU power - and AMD is the first to bring you this power. Plug-and-play GPUs are finally here, and they don't require a reboot anymore.
And they've done this by partnering with Razer and the Thunderbolt team. Their innovations in allowing graphics data to be passed over this interface aren't a closed-source method, either. They're staying true to their GPUOpen initiative and pushing their innovations out for everyone to use. That means that yes, even NVIDIA can make use of this plug-and-play GPU technology.