Video Cards & GPUs News - Page 316
TSMC is holding its annual supply chain forum sometime later today, where it's expected to unveil its new work on 16nm and 10nm node technologies. DigiTimes reports: "TSMC's 16nm FinFET process has passed full reliability qualification, and nearly 60 customer designs are currently scheduled to tape out by the end of 2015, the company announced previously. TSMC expects to move the node to volume production around July 2015".
NVIDIA is part of this meeting, but its competitor AMD is not. This is an interesting part of the story, as AMD looks like it could come out swinging with its Radeon R9 390X - but what process will it be made on, and by whom? With NVIDIA looking to shift its already very efficient second-generation Maxwell architecture from the 28nm node, skipping 20nm entirely, directly to the even smaller 16nm process, we could see quite a large jump in what NVIDIA can offer with what could eventually be known as the GeForce GTX 1080.
I personally think NVIDIA could rename, rebrand, or change things up with its next GPU launch. Moving its powerful, energy-efficient Maxwell architecture to a smaller process could result in a massive change for NVIDIA. If we're already seeing such an improvement from the GeForce GTX 780 Ti to the GeForce GTX 980 on the same process, what can NVIDIA do when it's working with 16nm? Would it simply arrive as the GeForce GTX 980 Ti? Or would NVIDIA have enough headroom to really ramp things up, providing maybe an entire generational jump in performance that eclipses anything AMD can do or retaliate with, and change things up with a rebrand or a new numbering scheme for its GeForce GPUs?
ASUS has been doing some good things with its smaller video cards, and the latest one could be one of the hot sellers of the holiday season. The new ASUS GeForce GTX 970 DirectCU Mini is based on NVIDIA's second-generation Maxwell architecture, with the GTX 970 at the center of it.
On top of that, we have a 17cm-long video card with ASUS' DirectCU II cooling, using a single fan. It uses hot-plate technology to improve thermal efficiency, which ASUS says provides 20% better cooling than reference coolers. The core clock is set at 1088MHz, with the boost clock at 1228MHz, while the 4GB of GDDR5 runs at its stock frequency of 7010MHz.
Connectivity-wise, we have a dual-link DVI port, a DVI-I port, HDMI 2.0, and DisplayPort 1.2. The ASUS GeForce GTX 970 DirectCU Mini requires just a single 8-pin PCIe power connector, which makes it a perfect fit for Mini-ITX gaming systems. When it comes to pricing, we should expect something similar to GIGABYTE's GeForce GTX 970 ITX, which is on Amazon right now for $339.
AMD's Radeon R9 295X2 is still a champion of a GPU, even up against NVIDIA's single-GPU flagship, the GeForce GTX 980. Better yet, the R9 295X2 can now be found on Amazon for as low as $679, down from its introductory price of $1499.
The XFX Radeon R9 295X2 is currently $679 on Amazon, down from $1039 - a saving of $360. This is not too bad at all, especially when you consider you're getting two GPUs here. This is still a great card for high-res gaming, especially at 4K and beyond, and perfect for smaller systems where you only have room for a single card.
Some unofficial benchmarks have surfaced for the alleged AMD Radeon R9 300 series GPU, which is reportedly arriving as "Captain Jack" from the Pirate Islands family of products.
The GPU was benchmarked against a slew of other cards across a bunch of different games, with the results stacked into an overall average FPS pitting this mystery card against the rest of the market. As the leaked chart shows, the "Captain Jack" sample kicks some serious ass with a 65.6FPS average, against the 50.1FPS average of the R9 290X and the 56.6FPS average of NVIDIA's GeForce GTX 980.
But these days it's not all about brute performance - performance-per-watt matters, too. The "Captain Jack" sample GPU is doing quite well here as well, with an average load power consumption of 197W. Compare that to the GeForce GTX 980 pulling 185W, while the R9 290X pulls much more at 279W. Whatever this card is, if the numbers turn out to be true, AMD could really stomp back into the market with a card that goes toe-to-toe with the GeForce GTX 980, and then some.
We already know that the Radeon R9 390X is going to be a special card: according to rumors, we can expect it to arrive with a 4096-bit memory bus, made possible by High Bandwidth Memory, or HBM. This card should have a truly next-gen 640GB/sec of memory bandwidth. But what comes after that is going to be even better.
The first generation of HBM, which SK Hynix is now shipping, provides 2Gb per DRAM die, 1Gbps speed per pin, and 128GB/sec of bandwidth per stack, and is stackable in groups of four. That works out to 4 x 128GB/sec = 512GB/sec at the base 1Gbps pin speed; the higher 640GB/sec figure quoted for the R9 390X assumes the pins run at around 1.25Gbps. But the second-generation HBM is going to be lightning quick, even compared to the already-damn-impressive first-gen tech.
The second-gen HBM technology will allow for 8Gb per DRAM die, up from 2Gb on the first-gen HBM, for starters. This means we should see GPUs with 8-16GB of VRAM on board. Second, per-stack bandwidth increases to 256GB/sec (up from 128GB/sec), which should arrive as around 1.28TB/sec of total memory bandwidth. Considering NVIDIA's second-generation Maxwell-based GeForce GTX 980 only has 224GB/sec of memory bandwidth on a 256-bit bus, a 4096-bit wide memory bus with 1.28TB/sec of bandwidth will surely do some insane things, especially at Ultra HD resolutions and beyond.
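All of these bandwidth figures fall out of one simple formula: interface width in bits, times per-pin data rate, divided by 8 bits per byte. A quick sketch in Python - note that the pin rates beyond first-gen HBM's base 1Gbps are my assumptions chosen to make the quoted totals line up, not confirmed specs:

```python
# Peak memory bandwidth (GB/sec) = interface width (bits) x per-pin rate (Gbps) / 8.

def bandwidth_gb_s(width_bits: int, pin_rate_gbps: float) -> float:
    """Peak theoretical memory bandwidth in GB/sec."""
    return width_bits * pin_rate_gbps / 8

# First-gen HBM: each 1024-bit stack at 1Gbps per pin.
per_stack_gen1 = bandwidth_gb_s(1024, 1.0)   # 128.0 GB/sec
total_gen1 = 4 * per_stack_gen1              # 512.0 GB/sec across four stacks

# Second-gen HBM doubles the per-stack figure.
per_stack_gen2 = bandwidth_gb_s(1024, 2.0)   # 256.0 GB/sec

# Four gen2 stacks at 256GB/sec each give 1.02TB/sec; the quoted
# ~1.28TB/sec implies slightly faster pins, around 2.5Gbps (assumption).
print(4 * per_stack_gen2)                    # 1024.0
print(4 * bandwidth_gb_s(1024, 2.5))         # 1280.0
```

The handy part of the formula is that it scales linearly in both directions, which is why a 16x wider bus more than makes up for HBM's much lower per-pin clocks versus GDDR5.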
The last AMD rumor we reported on was the exciting new HBM-based Radeon R9 390X with its 4096-bit wide memory bus, but now we're going to tell you a little about the lower-end cards that will most likely find their way to consumers in the first half of 2015.
These new cards will reportedly be AMD's new MXM-based GPUs, arriving as the Litho XT and Strato PRO. The two mystery cards have just appeared on Zauba.com, with Litho and Strato most likely short for Lithosphere and Stratosphere. The Litho XT will be a Type A MXM GPU with 2GB of GDDR5 memory, while the bigger Strato PRO will be a Type B MXM GPU with 4GB of RAM.
The XT will be the more powerful card, while the PRO will use a cut-down die. We could also see a Strato XT in the future, but we won't know anything more until new rumors start floating around online.
NVIDIA has launched its new Tesla K80 compute card at the Supercomputing 2014 (SC14) conference in New Orleans - a dual-GPU design cramming an insane amount of compute power into a single card.
One of our friends, Anshel Sag, over at Bright Side of News reported the news, explaining that "Logically, you would think that the K80 would naturally be two K40's smacked together into a single card, but that's not accurate. In order to build the K80, NVIDIA actually went with GPUs with similar shader core counts as the Tesla K20, but what's most important is that they actually did double the onboard memory of the K80 from the K40 to 24 GB of GDDR5". 24GB. OF. RAM.
NVIDIA is using 12GB of GDDR5 per GPU, for a total of 24GB of RAM. We have 4992 shader cores, with the company using two GK210 GPUs instead of the GK110B found on the Tesla K40. NVIDIA is claiming that the new Tesla K80 is capable of 8.74 teraflops single-precision and 2.91 teraflops double-precision. These numbers are over double those of the K40.
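Those teraflops claims check out with some back-of-the-envelope math: cores, times two floating-point ops per clock (a fused multiply-add), times clock speed. A rough sketch, where the ~875MHz boost clock and GK210's 1:3 double-precision rate are my assumptions rather than figures from the announcement:

```python
# Peak FLOPS = cores x 2 ops/clock (fused multiply-add) x clock speed.
cores = 4992              # total CUDA cores across both GK210 GPUs
boost_clock_ghz = 0.875   # assumed GPU Boost clock (not quoted in the announcement)

sp_tflops = cores * 2 * boost_clock_ghz / 1000   # single precision, in teraflops
dp_tflops = sp_tflops / 3                        # assumed 1:3 DP rate on GK210

print(round(sp_tflops, 2))   # 8.74
print(round(dp_tflops, 2))   # 2.91
```

The fact that the math only works at the boost clock suggests NVIDIA is quoting peak figures with GPU Boost engaged, rather than at the base clock.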
Today is the day for next-gen GPU news: we just reported that AMD's Radeon R9 390X will feature an insane 4096-bit memory bus, and now we're hearing about NVIDIA's new GeForce GTX Titan II, a beast of a GPU that would feature 12GB of VRAM.
This new beast would be baked on GM200 silicon on the 28nm process, and feature 3072 CUDA cores, a 384-bit memory bus, and the aforementioned 12GB of VRAM. When it comes to clock speeds, we should see a core clock of 1100MHz, a boost clock of 1390MHz, and memory at 6GHz. With the GTX 980 having 2048 CUDA cores, the GTX Titan II would, based on the current rumors, have over 1000 additional CUDA cores.
We should see a David vs Goliath battle early next year for GPU supremacy between AMD and NVIDIA, it's just too bad that today's games run like absolute crap, even on great hardware.
Here we are again with another post about the AMD Radeon R9 390X, but this time we have some even more exciting news. More details are leaking out on the board codenamed Fiji XT, thanks to a listing in the SiSoft Sandra benchmark database.
If the details are true - and right now we're classing them as rumors - we could expect the Radeon R9 390X to feature a whopping 4096 stream processors, with 64 compute units and 256 texture units. But the best bit we've saved until last: a massive, game-changing 4096-bit memory bus. This insane jump in memory bus width would be a result of the HBM technology we reported on a little while ago, with its 1024-bit input/output interface per stack.
We should see the first R9 390X cards arrive with 4GB of RAM, but 8GB of VRAM should follow on these GPUs very quickly. If we do see four first-generation HBM DRAM chips on a 4096-bit memory bus operating at around 1.25GHz, we can expect a huge memory bandwidth of 640GB/sec. Considering the still-fresh, kick-ass NVIDIA GeForce GTX 980 has a 256-bit memory bus with 224GB/sec of bandwidth, AMD would be killing it at high-res. 4K and beyond would see AMD leaping ahead, which is something it needs right now.
NVIDIA already has two great GPUs in its GeForce GTX 970 and GTX 980 cards, but the mid-range market needs some lovin', too. This is where the GeForce GTX 960 comes into play, and according to the latest rumors, it won't be as cut down as previous generations have been.
The GeForce GTX 960 will reportedly keep the GM204 core, as well as the 256-bit memory bus and 4GB of GDDR5 memory that both of its bigger brothers have. The GTX 960 will reportedly be clocked at 933MHz on the core, most likely with 1408 stream processors and 88 texture units behind it.
When it comes to pricing, we should expect some cut-throat pricing from NVIDIA, with a price of under $249 in the US. When will it be unveiled? We should see it come into the world in Q1 2015, so we don't have much longer to wait.