TweakTown News
Now that Rise of the Tomb Raider is out, people are finding out just how useless their second GPUs are - except now, someone has found a fix for SLI users with multiple NVIDIA GeForce video cards.
3D Center's "Blaire" found some settings to play around with in the NVIDIA Inspector tool, which enable proper SLI scaling. To apply the fix, search for Rise of the Tomb Raider's profile, then change the SLI bits (DX11) to 0x080002F5. After that, click the magnifier icon to reveal NVIDIA's undefined options, search for 0x00A0694B, and change its value to 0x00000001.
Once you've done this, you'll have enabled full SLI support for Rise of the Tomb Raider, with DSO Gaming reporting 95% scaling on its GeForce GTX 690 - a damn good result for SLI scaling.
AMD was all systems go at VRLA last week, and during the VRLA Winter Expo keynote, the company teased its dual-GPU card... the Radeon R9 Fury X2.
AMD's Roy Taylor said that the Radeon R9 Fury X2 has around 12 TFLOPS of single-precision (SP) compute performance, compared to the Radeon R9 295X2's 11.5 TFLOPS. The big difference between the Fiji-based R9 Fury X2 and the Hawaii-based R9 295X2 is power: the Fury X2 uses only 375W, compared to the 500W the R9 295X2 would chew. This means the Fury X2 is around 40% more power efficient than the R9 295X2.
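That "around 40%" claim checks out if you compare TFLOPS-per-watt directly, using only the figures quoted above:

```python
# Perf-per-watt comparison from AMD's quoted figures:
# 12 TFLOPS / 375W (Fury X2) vs 11.5 TFLOPS / 500W (R9 295X2).
fury_x2_tflops, fury_x2_watts = 12.0, 375.0
r9_295x2_tflops, r9_295x2_watts = 11.5, 500.0

fury_eff = fury_x2_tflops / fury_x2_watts    # TFLOPS per watt
old_eff = r9_295x2_tflops / r9_295x2_watts

improvement = (fury_eff / old_eff - 1) * 100
print(f"Fury X2:  {fury_eff * 1000:.1f} GFLOPS/W")
print(f"R9 295X2: {old_eff * 1000:.1f} GFLOPS/W")
print(f"Improvement: {improvement:.0f}%")    # ~39%, i.e. "around 40%"
```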
During his speech, Taylor said: "Last time I was here I also promised you that we would make the world's most powerful small computer for developers. We promised you we would take two of our highest end GPUs and put it inside that tiny box and if you go downstairs we actually have a demonstration of a dual GPU, 12 TeraFlops, fastest GPU solution in the world, inside of Tiki. It's a feat of engineering we are delighted with".
PC port specialist Nixxes Software has posted a tech support guide for Rise of the Tomb Raider on the Steam forums, just a few hours ahead of the game's launch. In the process, it revealed AMD will be releasing a 16.1.1 driver "shortly", which Nixxes recommends installing for the game.
It's possible 16.1.1 will include optimizations specifically for the game, as is usually the case for big launches; NVIDIA did just that yesterday.
Asked for comment, AMD said, "We're working towards a hotfix that will have improvements for TR, but it's a work in progress. We can't comment on the timing or the details, but we'll keep you posted!"
NVIDIA has released its 361.75 WHQL certified driver today. You'll want it if you're planning on playing Rise of the Tomb Raider or getting into Ubisoft's The Division beta, as it contains optimizations and SLI profiles for both. In the case of the former, both 2-way and 3-way SLI profiles are included.
Tomb Raider launches tomorrow, January 28; the beta starts this Friday, January 29 at 12 PM GMT.
Grab the driver at the source or via GeForce Experience.
NVIDIA has just launched its new GeForce GT 710 video card, a cheap $40 GPU that the company claims can deliver "up to 10x the performance of integrated graphics", and gaming up to "80% faster" than traditional iGPUs.
While the GeForce GT 710 gets beaten by Intel's HD 530 integrated graphics found in Skylake chips, the card is an excellent option for budget gamers with older rigs who want a cheap DirectX 12 card. As far as specs go, NVIDIA's inexpensive card uses DDR3 memory on a 64-bit bus with 14.4GB/s of bandwidth, has a base clock of 954 MHz with 192 CUDA cores, and an effective memory data rate of 1.8 Gbps.
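Those numbers are internally consistent: bandwidth on a card like this is just bus width times effective data rate. A quick check of the quoted 14.4GB/s:

```python
# Sanity-check the GT 710's quoted memory bandwidth from its own specs.
bus_width_bits = 64    # DDR3 memory bus width
data_rate_gbps = 1.8   # effective data rate per pin

bandwidth_gb_s = (bus_width_bits / 8) * data_rate_gbps
print(f"{bandwidth_gb_s:.1f} GB/s")  # 14.4 GB/s, matching NVIDIA's figure
```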
The GeForce 710 supports a host of features including G-Sync adaptive sync, multi-monitor support for up to 3 displays, 3D Vision, NVIDIA's PhysX tech, and is OpenGL 4.5 and DirectX 12-ready out of the box. The card has a maximum resolution of 2560x1600 via HDMI and 2048x1536 on VGA, and features 1x Dual-Link DVI-D port, 1x HDMI and 1x VGA. It requires a minimum 300W power supply to function.
If you have been reading our GPU-related content, you should know that we are set for the biggest year in GPU history, from both sides: AMD and NVIDIA.
Well, at NVIDIA's GPU Technology Conference in April, we should see NVIDIA unveil the biggest GPU they've ever made - the successor to the GeForce GTX Titan X. The next-gen card could be called the GTX Titan X2, which would pull some of the wind out of AMD's sails with the dual-GPU Radeon R9 Fury X2, and we should see it featuring HBM2 - scaling up to 16-32GB with 1TB/sec of memory bandwidth. Insanity.
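The 1TB/sec figure follows from HBM2's per-stack numbers - each stack has a 1024-bit interface running at up to 2 Gbps per pin, per the JEDEC HBM2 spec - in a four-stack configuration (the stack count is my assumption for a flagship card):

```python
# How HBM2 reaches 1TB/sec of memory bandwidth.
bits_per_stack = 1024  # HBM2 interface width per stack (JEDEC spec)
gbps_per_pin = 2.0     # top-end HBM2 data rate per pin
stacks = 4             # assumed four-stack flagship configuration

per_stack_gb_s = (bits_per_stack / 8) * gbps_per_pin  # 256 GB/s per stack
total_gb_s = per_stack_gb_s * stacks
print(f"{per_stack_gb_s:.0f} GB/s per stack, {total_gb_s:.0f} GB/s total")
```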
Back in September, we exclusively reported that NVIDIA would release both an HBM2 and a GDDR5X range of cards - something that will kick off with the HBM2-based Titan X successor. Towards June, we should see NVIDIA unveil a new GP104-based GeForce GTX 980 successor, based on GDDR5X - which offers 448GB/sec of memory bandwidth.
Then we have the elusive GeForce GTX 980 Ti successor, which will also be powered by HBM2. I think this card will most likely arrive sometime later in the year, depending on AMD's movement in the enthusiast GPU sector. The Titan X successor will be incredibly fast (I would say 1.5-1.8x the Titan X) and so will the GDDR5X-powered GTX 980 successor (again, most likely 1.5x the GTX 980). These two cards will fill the most important parts of the market, while NVIDIA keeps its secret weapon (the GTX 980 Ti successor) waiting in the darkness.
AMD has launched a new website to promote GPUOpen, its initiative intended to assist graphics developers with open source tools, effects, and more.
As the site explains, the point of the tools is to foster more efficient game development. To that end, they are readily downloadable from GitHub and can be shared at will among fellow developers.
A couple of the technologies featured are HIP, a C++ interface that lets you create portable applications that run on any GPU, and AMD's LiquidVR, a Direct3D 11 interface that can empower applications with various GPU features, including multi-GPU support. Effects on display include AMD's TressFX for fancy hair and fur, among many others.
The last time we physically saw the dual-GPU version of the Fury X was at the launch event itself in Sydney, Australia - where we got our hands on that beautiful PCB. But the Radeon R9 Fury X2 has shown up again, this time at VRLA.
The VRLA expo was an event for all things virtual reality, held in LA last week. During the event, some of the HTC Vive demos were powered by the Radeon R9 Fury X2. We noticed via Facebook that Antal Tungler, PR Manager for AMD and all-round cool guy, had posted on his Twitter account: "Prototype Tiki from @FalconNW powering #htcvive with dual Fiji @AMDRadeon at the #vrla".
Someone asked Tungler: "When you say "Dual Fiji" do you mean 2x Fiji cards, or 2x Fiji GPUs on 1 card? ;)", to which he replied: "One card". So we know it wasn't two R9 Fury X cards in the machine, but a single, dual-GPU beast. But with Polaris around the corner, I have to ask: where does the R9 Fury X2 fit in? It would only have 4GB of HBM1 per GPU, which really isn't enough VRAM considering it will cost $1000+. VR headsets demand 90FPS, and at high resolutions to boot. I guess we'll see in the coming months; maybe AMD will launch the Fury X2 between now and the release of Polaris in June/July.
But that's with DX11 games. When DX12 rolls around, we should begin to see the two 4GB pools of HBM1 behave as a combined 8GB of HBM. This will be a huge deal for AMD, as they'll be able to market it as "8GB for DX12", right in time for DX12 games and VR.
The final specification for GDDR5X, the successor to GDDR5, has been settled. While it doesn't allow for quite as much bandwidth as HBM or HBM2, it's a technology that's a lot easier to implement, requiring fewer modifications to the GPU design.
GDDR5X allows for data rates of up to 14Gbps per pin, and because it's based so heavily on its predecessor, it remains pin-compatible, though it's heavily revised internally to deliver real advances in memory speed and bandwidth without creating something entirely new. JEDEC and Micron achieved this by doubling the prefetch (from 8n to 16n), mandating the use of phase-locked loops and delay-locked loops, and transmitting data at quadruple the actual clock speed. In other words, it's fast. For comparison, GDDR5X running at the top-end 14Gbps could potentially provide 448GB/s of total bandwidth, which isn't too far off the memory bandwidth of the R9 Fury X.
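That 448GBps figure implies a 256-bit bus - the bus width is my assumption, since only the per-pin rate and the total are quoted:

```python
# GDDR5X total bandwidth at the spec's top-end per-pin rate,
# assuming a typical 256-bit memory bus (assumption, not quoted).
data_rate_gbps = 14    # per-pin data rate from the GDDR5X spec
bus_width_bits = 256   # assumed bus width

bandwidth_gb_s = data_rate_gbps * bus_width_bits / 8
print(f"{bandwidth_gb_s:.0f} GB/s")  # 448 GB/s
```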
Micron, one of the leading manufacturers working on GDDR5X, estimates around a 10% decrease in power consumption at the same VRAM capacity. The specification covers per-chip densities from 4Gb up to 16Gb. The reason for the new specification is to further address every segment of the market, especially those where HBM2 might not be economical, despite AMD's efforts to implement HBM across its GPU lineup. Now all GPUs can enjoy a healthy bandwidth increase for very little, if any, cost increase.
AMD might be looking to lower the price of the vanilla R9 Fury, or so a rumor suggests. This would come shortly after it lowered the price of the Nano in response to customer demand and the market.
KitGuru says one of its sources in the retail chain is privy to the knowledge that a price cut on the R9 Fury is indeed coming. The source doesn't know the size of the cut, only that it will happen in the coming weeks, and even suspects the price cut might include the Fury X as well.
This is good news, as the R9 Fury is a great card that provides a good experience at 1440p, and even at 4K if the visual quality is turned down some. A price reduction would make it more competitive against the GTX 980, the card it actually competes with. If this holds true, it'll be a good move by AMD.