According to "AMD_Chris" on various forums, AMD is working on an impressive new feature dubbed "Dynamic Frame Rate Control". DFRC would allow gamers to cap the frame rate their video card renders, which can result in huge power savings.
The feature would most likely see AMD variably adjusting the clock speeds of the card in order to hit the desired frame rate, such as 60FPS. It might sound like V-Sync, but it's nothing like it: DFRC stops your GPU from cranking things up internally to render 100FPS when your monitor can only put out 60Hz, or 60FPS, most of the time.
DFRC will underclock your GPU once you hit 60FPS (or whatever frame rate you choose), so the card doesn't pull as much power from the wall. AMD_Chris says that "the power savings were mind blowing", and we'd agree: if your card is rendering 120FPS+ in a less demanding game and you've got DFRC set to 60FPS, the savings would be substantial. We can't wait to test this new feature, that's for sure - what about you?
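For a feel of the concept, here's a minimal sketch of a software frame limiter, assuming a simple render loop. AMD's implementation lives in the driver and reportedly adjusts clock speeds rather than sleeping, so the render_frame function and timing logic here are purely illustrative:

```python
import time

TARGET_FPS = 60
FRAME_BUDGET = 1.0 / TARGET_FPS  # seconds allowed per frame

def render_frame():
    """Stand-in for the real rendering work done by the game and GPU."""
    pass

for _ in range(600):  # render ten seconds' worth of frames
    start = time.perf_counter()
    render_frame()
    elapsed = time.perf_counter() - start
    # If the frame finished early, sleep off the remainder instead of
    # immediately starting the next one; the GPU sits idle (or, in AMD's
    # driver-level version, downclocks) and pulls less power.
    if elapsed < FRAME_BUDGET:
        time.sleep(FRAME_BUDGET - elapsed)
```

The key difference is that a sleep-based limiter only stops the GPU from being asked for more frames, while DFRC would supposedly lower the clocks themselves, which is where the bigger power savings come from.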
According to the latest rumors, the upcoming NVIDIA GeForce GTX Titan II has been benchmarked against the Radeon R9 380X and R9 390X. These are just rumors right now, as none of these three cards has been announced yet. Even the Titan II name could change - to Titan Extreme, for example. AMD, on the other hand, could rename its cards to 399X, 400X, or anything outside the norm of what we'd expect.
The details on the cards from the poster on Chiphell state that the Fiji XT and Bermuda XT (380X and 390X respectively) will be manufactured on GlobalFoundries' 20nm process. The Titan II, on the other hand, will arrive as the GM200, which will supposedly come in two flavors: a cut down version with 21 SMM units and a total of 2688 CUDA cores, and a "full fat" version arriving at a later date with more SMMs and CUDA cores.
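That core count checks out against Maxwell's layout of 128 CUDA cores per SMM - a quick sanity check on the rumored numbers:

```python
CORES_PER_SMM = 128  # second-generation Maxwell packs 128 CUDA cores per SMM

cut_down_smms = 21
print(cut_down_smms * CORES_PER_SMM)  # 2688 - matching the rumored cut-down GM200
```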
When it comes to performance, the rumors have the GM200-powered GPU from NVIDIA at 34% faster than the GeForce GTX 980. On the AMD side of things, the R9 390X will supposedly be a huge 65% faster than the R9 290X, and will feature the liquid cooling that was rumored all those months ago. Performance wise, the next-gen GPUs are packing a massive punch.
GIGABYTE surprised the world with its super impressive GeForce GTX 980 WaterForce 3-Way SLI Kit, but now it is finally beginning to sell the best Christmas present ever. The kit is listed on Newegg, with a price tag of $2999.
For $3000, you get three NVIDIA GeForce GTX 980 GPUs and an external liquid cooling box (with three 120mm radiators and fans). Breaking it down, the GPUs are around $550 each, or $1650 for three. This means that the external liquid cooling box, and all of the associated work GIGABYTE has pumped into this product, carries a roughly $1350 premium. This is a massive ask, but for some people who want the ultimate in 3-way SLI, it's a nice option.
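The math is simple enough - keeping in mind the $550 per-card figure is just an approximation of street pricing:

```python
kit_price = 2999
gtx_980_street_price = 550  # approximate per-card price at the time

cooling_premium = kit_price - 3 * gtx_980_street_price
print(cooling_premium)  # 1349 - the roughly $1350 you pay for the WaterForce box
```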
The latest rumors for the mid-range GPU market have NVIDIA announcing their GeForce GTX 960 video card at CES 2015, which is just a couple of weeks away.
NVIDIA's GeForce GTX 960 would reportedly feature the GM206 core, 4GB of GDDR5 RAM on a 256-bit memory bus, a 944MHz core clock, and a 6GHz effective memory clock. Not much else is known about this mid-range GPU, but if NVIDIA do launch it, and launch it at a super competitive price of $199 or $249, this could really be the GPU to buy early next year for most gamers.
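Those rumored memory specs would give it a healthy amount of bandwidth; here's the standard back-of-the-envelope calculation, treating the 6GHz figure as the effective GDDR5 data rate per pin:

```python
bus_width_bits = 256
effective_data_rate_gbps = 6.0  # effective GDDR5 data rate per pin

# bandwidth (GB/s) = bus width in bytes x data rate per pin
bandwidth_gb_s = (bus_width_bits / 8) * effective_data_rate_gbps
print(bandwidth_gb_s)  # 192.0 GB/s for the rumored GTX 960
```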
We've just reported that AMD has launched its new Catalyst Omega driver suite, where we've written up an overview of the slew of new features and technologies it unleashes, plus a look at performance at 1080p - but what about Eyefinity?
AMD has increased the abilities of Eyefinity with the new drivers, which now allow up to 24 monitors to be supported on Windows. You don't need any third-party software or hardware, with the company providing a new GUI for setting up your new 24-monitor Eyefinity rig. You'll need four Radeon GPUs with six DisplayPort outputs per card (4 x 6 = 24), but it can be done.
TSMC is holding its annual supply chain forum sometime later today, where it's expected to unveil its new work on 16nm and 10nm node technologies. DigiTimes reports: "TSMC's 16nm FinFET process has passed full reliability qualification, and nearly 60 customer designs are currently scheduled to tape out by the end of 2015, the company announced previously. TSMC expects to move the node to volume production around July 2015".
NVIDIA is part of this meeting, but its competitor AMD is not. This is an interesting part of the story, as AMD looks like it could come out swinging with its Radeon R9 390X - but what process will it be made on, and by whom? With NVIDIA looking to shift its already very efficient second-generation Maxwell architecture from the 28nm node, skipping over the 20nm node directly to the even smaller 16nm process, we could see quite a large jump in what NVIDIA could offer with what could eventually be known as the GeForce GTX 1080.
I personally think NVIDIA could rename, rebrand, or change things up with the next GPU launch. Moving its powerful but energy efficient Maxwell architecture to a smaller process could result in a massive change for NVIDIA. If we're already seeing such an improvement from the GeForce GTX 780 Ti to the GeForce GTX 980 on the same process, what can NVIDIA do when it's working with 16nm? Would the result simply arrive as the GeForce GTX 980 Ti? Or would NVIDIA have enough headroom to really ramp things up, delivering an entire generational jump in performance that eclipses anything AMD can do or retaliate with, and mark the occasion with a rebrand or a new GeForce numbering scheme?
ASUS has been doing some good things with its smaller video cards, but the latest one could be one of the hot-sellers this holiday season. The new ASUS GeForce GTX 970 DirectCU Mini is based on NVIDIA's second-generation Maxwell architecture, with the GTX 970 at the center of it.
On top of that, we have a 17cm-long video card with the single-fan DirectCU II cooling from ASUS, which uses hot plate technology to improve thermal efficiency, providing 20% more cooling than traditional reference coolers. The core clock sits at 1088MHz, the boost clock at 1228MHz, and the 4GB of RAM runs at its stock frequency of 7010MHz.
Connectivity wise, we have one dual-link DVI port, a DVI-I port, HDMI 2.0, and DisplayPort 1.2. The ASUS GeForce GTX 970 DirectCU Mini requires just a single 8-pin PCIe power connector, making it a perfect fit for Mini-ITX gaming systems. When it comes to pricing, we should expect something similar to GIGABYTE's GeForce GTX 970 ITX, which is on Amazon right now for $339.
AMD's Radeon R9 295X2 is still a champion of a GPU, even up against NVIDIA's single-GPU flagship, the GeForce GTX 980. Better yet, the R9 295X2 can now be found on Amazon for as low as $679, down from its introductory price of $1499.
The XFX Radeon R9 295X2 is currently $679 on Amazon, down from $1039 - a saving of $360. This is not bad at all, especially when you consider you're getting two GPUs here. It's still a great card for high-res gaming, especially 4K and beyond, and perfect for smaller systems where you only have room for a single card.
Some unofficial benchmarks have surfaced of an alleged AMD Radeon R9 300 series GPU, reportedly arriving as "Captain Jack" from the Pirate Islands family of products.
The GPU was benchmarked against a slew of other cards across a bunch of different games, with the results averaged into a single FPS figure pitting this mystery card against the rest of the market. You can see in the image above that the "Captain Jack" sample kicks some serious ass with a 65.6FPS average, against the 50.1FPS average of the R9 290X and the 56.6FPS average of NVIDIA's GeForce GTX 980.
But these days it's not all about brute performance - performance-per-watt matters just as much. The "Captain Jack" sample GPU does quite well here too, with average load power consumption of 197W. This compares to the GeForce GTX 980 pulling an average of 185W, while the R9 290X pulls much more, at 279W. Whatever this card is, if the numbers turn out to be true, AMD could really stomp back into the market with a card that goes toe-to-toe with the GeForce GTX 980, and then some.
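Running the quoted (and, remember, unofficial) numbers through a quick performance-per-watt comparison shows just how big the swing would be:

```python
# (average FPS, average load watts) from the leaked chart
cards = {
    "Captain Jack": (65.6, 197),
    "GTX 980": (56.6, 185),
    "R9 290X": (50.1, 279),
}

for name, (fps, watts) in cards.items():
    print(f"{name}: {fps / watts:.3f} FPS per watt")
# Captain Jack ~0.333, GTX 980 ~0.306, R9 290X ~0.180
```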
We already know that the Radeon R9 390X is going to be a special card: according to rumors, we can expect it to arrive with a 4096-bit memory bus, made possible with High Bandwidth Memory, or HBM. This card should have a truly next-gen 640GB/sec of memory bandwidth. But what comes after that is going to be even better.
The first generation of HBM, which SK Hynix is now shipping, provides 2Gb per DRAM die, 1Gbps speed per pin, and 128GB/sec of bandwidth per stack, and is stackable in groups of four. This results in 4 x 128GB/sec = 512GB/sec, with the rumored 640GB/sec figure requiring another 128GB/sec from somewhere else on the card. But the second-generation HBM is going to be lightning quick, even compared to the already-damn-impressive first-gen tech.
The second-gen HBM technology will allow for 8Gb per DRAM die, up from 2Gb on the first-gen HBM, for starters. This means we should see GPUs with 8-16GB of VRAM on-board. Second, the per-stack bandwidth doubles to 256GB/sec (up from 128GB/sec), which on the same layout should work out to around 1.28TB/sec of available memory bandwidth. Considering NVIDIA's second-generation Maxwell-based GeForce GTX 980 only has 224GB/sec of memory bandwidth on a 256-bit bus, a 4096-bit wide memory bus with 1.28TB/sec of bandwidth will surely do some insane things, especially at Ultra HD and beyond.
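For those keeping score, here's how the per-stack and total figures fall out of the quoted specs. The 1024-bit interface per stack is SK Hynix's published HBM design; reading the rumored 640GB/sec as five stacks' worth of first-gen bandwidth is our assumption:

```python
# First-gen HBM: 1024-bit interface per stack at 1Gbps per pin
pins_per_stack = 1024
gen1_pin_speed_gbps = 1.0
gen1_stack_bw = pins_per_stack * gen1_pin_speed_gbps / 8  # 128.0 GB/s per stack

print(4 * gen1_stack_bw)  # 512 GB/s from four stacks
print(5 * gen1_stack_bw)  # 640 GB/s - the R9 390X rumor

# Second-gen HBM doubles per-stack bandwidth to 256 GB/s
gen2_stack_bw = 2 * gen1_stack_bw
print(5 * gen2_stack_bw)  # 1280 GB/s, i.e. the ~1.28TB/sec quoted above
```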
This is where it will get interesting: over 1TB/sec of memory bandwidth on a 4096-bit bus is going to be an amazing sight to behold, especially at 4K or 8K. Even more so on something like VR with the Oculus Rift, where it could render one eye separately from the other, at 1440p each, and still have memory bandwidth to spare. Things are going to get exciting with the Radeon R9 390X and beyond, but I think that's just the beginning. The R9 490X is going to be when things really kick into second gear, not only spitting on 'next-gen' consoles, but hopefully waking game developers up to the fact that they should be coding for GPUs with these insane amounts of memory bandwidth, as they're truly next, next-gen technologies.