Prepare for sadness: AMD and NVIDIA's upcoming next-gen GPUs could be delayed, at least the 16nm- and 20nm-based versions of them, according to a new report from WCCFTech.
The site is reporting that TSMC may be rolling out its 20nm technology, but the foundry is busy filling orders for Qualcomm and Apple. Flagship GPU dies are much, much larger than the system-on-chips (SoCs) that Qualcomm and Apple require, so that leaves AMD and NVIDIA with a very limited supply of 20nm dies.
What does this mean for AMD and NVIDIA's future GPUs? NVIDIA is already sailing along quite well with its efficient Maxwell architecture, where even on the now ageing 28nm process it is pulling some seriously good numbers in both camps: performance and power efficiency. AMD is most likely next up for a GPU refresh, but it looks like the company is going to have to stay on the 28nm side of the fence, which means its upcoming next-gen architecture will need to be quite impressive.
It looks like we're seeing the beginnings of SAPPHIRE's new GPU, which is going under the guise of 'Project NFC.' What is Project NFC? Well, NFC stands for Not From Concentrate, a term used in the food industry for juice that hasn't had its water extracted. In SAPPHIRE's tease, the idea is a more pure and unaltered version of a card, which could mean a liquid-cooled video card - exciting.
The company hasn't come out and said that Project NFC is an actual liquid-cooled AMD card, but SAPPHIRE did hint at it in its video on YouTube. The teaser shows what appear to be drawings of a GPU water block, along with water vapor, as you can see below.
The next question is: is this the Radeon R9 380X? Or would SAPPHIRE liquid cool one of its existing cards? Whatever happens, it's SAPPHIRE, which means we know it's going to be good, very good.
GIGABYTE and EVGA have been on the market with external VRM solutions for serious GPU overclocking, but it looks like ASUS is stepping into the game with its own external voltage regulator module, or VRM.
ASUS has already deployed GPUs with advanced 10-phase and 14-phase VRMs in the form of the ASUS Strix GeForce GTX 980 and the ASUS ROG Matrix Platinum GTX 980, but this is an entirely new ballgame. Those advanced VRMs still have limitations as they sit on consumer GPUs, with restrictions that stop too much power from flowing through the card. The external VRM card will allow far higher voltages to be pushed through the GPU and memory, which should unlock some massive potential for extreme overclockers.
The VRM card in question features a "single 8-phase output with output voltage of up to 2.5V (with output voltage offset switches [+0.4V, +0.3V, +0.2V, +0.1V]) and current up to 500A. The card has on-board voltage control/monitoring, output current monitoring, VRM temperature monitoring, load-line calibration (0%, 60%, 80%, 100%), hotwire setting/monitoring and other features required by extreme overclockers. The board sports four six-pin PCIe (4*75W) input power connectors, which means that it can deliver up to 300W of power to the graphics board, enough power to break world's records" reports Anton Shilov from KitGuru.
According to "AMD_Chris" on various forums, AMD is working on an impressive new feature dubbed "Dynamic Frame Rate Control". DFRC would allow gamers to put a lock on the total frame rate their video card can render, which can result in a huge amount of power savings.
The feature would most likely see AMD variably adjusting the clock speeds of its cards in order to hit the desired frame rate, such as 60FPS. It might sound like V-Sync, but it works differently: DFRC stops your GPU from cranking things up internally to render 100FPS when you're only seeing as many frames as your monitor can display, which for a typical 60Hz panel is 60FPS.
DFRC will underclock your GPU once you hit 60FPS (or whatever frame rate you choose), so the card pulls less power from the wall. AMD_Chris says that "the power savings were mind blowing", and we would agree: if your card is rendering 120FPS+ in a less demanding game and you've got DFRC set to 60FPS, the power savings would be fairly large. We can't wait to test this new feature, that's for sure - what about you?
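AMD hasn't detailed how DFRC works under the hood, but the basic idea behind any frame rate cap can be sketched in a few lines. The snippet below is a hypothetical illustration of a software frame limiter, not AMD's driver code: the render loop sleeps away whatever time is left in each frame's budget, so the hardware sits idle (and can downclock) instead of racing ahead to render frames the monitor will never show.

```python
import time

TARGET_FPS = 60                   # the cap the user chooses, e.g. 60FPS
FRAME_BUDGET = 1.0 / TARGET_FPS   # seconds available per frame

def render_frame():
    """Stand-in for real rendering work; here it just burns ~2ms."""
    time.sleep(0.002)

def run_capped(num_frames):
    """Render num_frames, sleeping off leftover time so we never exceed the cap."""
    start = time.perf_counter()
    for _ in range(num_frames):
        frame_start = time.perf_counter()
        render_frame()
        elapsed = time.perf_counter() - frame_start
        if elapsed < FRAME_BUDGET:
            # The hardware idles here instead of rendering extra frames --
            # this idle time is where the power savings come from.
            time.sleep(FRAME_BUDGET - elapsed)
    return time.perf_counter() - start

total = run_capped(30)
print(f"30 frames took {total:.2f}s (~{30 / total:.0f}FPS)")
```

The difference with DFRC, if the rumor is accurate, is that AMD would enforce the cap inside the driver by lowering clock speeds, rather than just sleeping in the game loop.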
According to the latest rumors, the upcoming NVIDIA GeForce GTX Titan II has been benchmarked against the Radeon R9 380X and R9 390X. These are just rumors right now, as none of these three cards has been announced yet. Even the name Titan II could change, perhaps to something like Titan Extreme. AMD, on the other hand, could rename its cards to the 399X, the 400X, or anything outside the norm of what we'd expect.
The details on the cards from the poster on Chiphell state that the Fiji XT and Bermuda XT (the 380X and 390X, respectively) will be manufactured on Globalfoundries' 20nm process. The Titan II, on the other hand, will be based on the GM200 core, which will supposedly come in two flavors: a cut-down version with 21 SMM units for a total of 2688 CUDA cores, and a "full fat" version arriving at a later date with more SMMs and CUDA cores.
When it comes to performance, the GM200-powered GPU from NVIDIA will reportedly be 34% faster than the GeForce GTX 980. On the AMD side of things, the R9 390X will be a huge 65% faster than the R9 290X, and will feature the liquid cooling that was rumored all those months ago. Performance wise, the next-gen GPUs are packing a massive punch.
GIGABYTE surprised the world with its super impressive GeForce GTX 980 WaterForce 3-Way SLI Kit, but now it is finally beginning to sell the best Christmas present ever. The kit is listed on Newegg, with a price tag of $2999.
For $3000, you get three NVIDIA GeForce GTX 980 GPUs and an external liquid cooling box (with three 120mm radiators and fans). Breaking it down, the GPUs are around $550 each, or $1650 for three. This means the external liquid cooling box, and all of the associated work GIGABYTE has pumped into this product, carries a roughly $1350 premium. This is a massive ask, but for some people who want the ultimate in 3-way SLI, it's a nice option.
The latest rumors for the mid-range GPU market have NVIDIA announcing their GeForce GTX 960 video card at CES 2015, which is just a couple of weeks away.
NVIDIA's GeForce GTX 960 would reportedly feature the GM206 core, 4GB of GDDR5 RAM on a 256-bit memory bus, a 944MHz core clock, and a 6GHz memory clock. Not much else is known about this mid-range GPU, but if NVIDIA does launch it, and launches it at a super competitive price of $199 or $249, this could really be the GPU for most gamers to buy early next year.
We've just reported that AMD has launched its new Catalyst Omega driver suite, where we've written up an overview on the slew of new features and technologies it has unleashed, and a look at performance at 1080p, but what about Eyefinity?
AMD increased the abilities of Eyefinity with the new drivers, which now allow up to 24 monitors to be supported on Windows. You don't need any third-party software or hardware, with the company providing a new GUI for setting up your 24-monitor Eyefinity rig. You'll need four Radeon GPUs with six DisplayPort outputs per card, but it can be done.
TSMC is holding its annual supply chain forum sometime later today, where it's expected to unveil its new work on 16nm and 10nm node technologies. DigiTimes reports: "TSMC's 16nm FinFET process has passed full reliability qualification, and nearly 60 customer designs are currently scheduled to tape out by the end of 2015, the company announced previously. TSMC expects to move the node to volume production around July 2015".
NVIDIA is part of this meeting, but its competitor AMD is not. This is an interesting part of the story, as AMD looks like it could come out swinging with its Radeon R9 390X, but what process will it be made on, and by whom? With NVIDIA looking to shift its already very efficient second-generation Maxwell architecture off the 28nm node, past 20nm and directly onto the even smaller 16nm process, we could see quite a large jump in what NVIDIA could offer with what could eventually be known as the GeForce GTX 1080.
I personally think NVIDIA could rename, rebrand, or change things up with its next GPU launch. Moving its powerful but energy-efficient Maxwell architecture to a smaller process could result in a massive change for NVIDIA. If we're already seeing such an improvement from the GeForce GTX 780 Ti to the GeForce GTX 980 on the same process, what can NVIDIA do when it's working with 16nm? Would the result simply arrive as the GeForce GTX 980 Ti? Or would NVIDIA have enough headroom to really ramp things up, providing perhaps an entire generational jump in performance that eclipses anything AMD can do or retaliate with, and change things up with a rebrand or a new numbering scheme for its GeForce GPUs?
ASUS has been doing some good things with its smaller video cards, but the latest one could be one of the hot sellers of the holiday season. The new ASUS GeForce GTX 970 DirectCU Mini is based on NVIDIA's second-generation Maxwell architecture, with the GTX 970 at the center of it.
On top of that, we have a 17cm-long video card with DirectCU II cooling from ASUS, using a single fan. It uses hot plate technology to improve thermal efficiency, which provides 20% more cooling than traditional reference coolers. The core clock sits at 1088MHz, with the boost clock set at 1228MHz, and the 4GB of RAM runs at its stock frequency of 7010MHz.
Connectivity wise, we have one dual-link DVI port, a DVI-I port, HDMI 2.0, and DisplayPort 1.2. The ASUS GeForce GTX 970 DirectCU Mini requires just a single 8-pin PCIe power connector, making it a perfect fit for Mini-ITX gaming systems. When it comes to pricing, we should expect something similar to GIGABYTE's GeForce GTX 970 ITX, which is on Amazon right now for $339.