GDC 2015 - Earlier today we were introduced to NVIDIA's GeForce GTX Titan X, a Maxwell-based card that features a gigantic 12GB of VRAM. Until now, we didn't know much else about it beyond that 12GB frame buffer and its 8 billion transistors.
Legit Reviews spotted NVIDIA's soon-to-be-dominant card at the NVIDIA booth at GDC, housed inside the beautiful In Win Tou tempered glass chassis, and got a closer look at the GTX Titan X. Thanks to that closer look, we can see that the GTX Titan X is a dual-slot card with 6-pin and 8-pin PCIe power connectors, plus two SLI connectors, which means you could run four of these bad boys in SLI. Yes, four GTX Titan X cards in SLI, for a combined 48GB of VRAM on board.
With the GeForce GTX Titan Z on the market as a dual-GPU solution, the Titan X is 100% confirmed as a single-GPU card, which means that one GPU has the entire 12GB frame buffer to itself. Even with four of these in SLI, where frame buffers are mirrored rather than combined, each GPU still works with a full 12GB of VRAM, which is a big step up from the Titan Z, or even the Titan Black Edition cards, which have 6GB per GPU.
GDC 2015 - AMD has just jumped into the VR game with the announcement of Liquid VR, a new software development kit aimed at improving the sense of presence in VR.
One of the biggest problems facing VR gaming is latency: the time between moving your head and seeing that movement reflected in the virtual world. This needs to be as small as possible, with the ultimate goal of making it imperceptible. To achieve this, both the software and the GPU need to be tweaked to the max.
AMD has teased multi-GPU, hardware-accelerated time warp, and direct-to-display technologies within Liquid VR. Starting with hardware-accelerated time warp: after a frame has been rendered, it takes updated information about the position of the head of the user wearing the VR headset, then warps the image to reflect that new viewpoint just before sending it to the headset.
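To make the idea concrete, here is a minimal conceptual sketch of the time warp step. This is not AMD's LiquidVR API; the quaternion type and the commented present-time flow are hypothetical illustrations of the technique.

```cpp
// Conceptual sketch of the time warp step (not AMD's LiquidVR API).
struct Quat { float w, x, y, z; };

// Rotation that carries the render-time head orientation to the latest sampled one.
// For unit quaternions the inverse is simply the conjugate.
Quat deltaRotation(const Quat& renderPose, const Quat& latestPose) {
    Quat inv{renderPose.w, -renderPose.x, -renderPose.y, -renderPose.z};
    return Quat{
        latestPose.w*inv.w - latestPose.x*inv.x - latestPose.y*inv.y - latestPose.z*inv.z,
        latestPose.w*inv.x + latestPose.x*inv.w + latestPose.y*inv.z - latestPose.z*inv.y,
        latestPose.w*inv.y - latestPose.x*inv.z + latestPose.y*inv.w + latestPose.z*inv.x,
        latestPose.w*inv.z + latestPose.x*inv.y - latestPose.y*inv.x + latestPose.z*inv.w};
}

// Per-frame flow (hypothetical names): render with the pose sampled at frame start,
// then just before scan-out re-sample the head pose and reproject the finished
// image by the delta, so the headset shows the newest viewpoint.
//
//   Quat latest = hmd.samplePose();                  // freshest head orientation
//   Quat delta  = deltaRotation(renderPose, latest); // how far the head has moved
//   warpAndPresent(renderedImage, delta);            // GPU reprojection pass
```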
According to the latest report from Jon Peddie Research (JPR), NVIDIA is dominating the GPU market share game against AMD. JPR's data for Q3 2014 has NVIDIA securing a huge 76% of the GPU market share, leaving AMD with just 24%. Matrox and S3 are now out of the game, with Matrox losing its small 0.10% market share to NVIDIA.
JPR's estimated graphics add-in-board (AIB) shipment and supplier market share figures for the quarter track add-in graphics boards, which feature discrete GPUs. These AIBs are used in desktop PCs, workstations, servers, and other devices "such as scientific instruments". JPR's report found that AIB shipments decreased by 0.68% from the previous quarter, bringing total shipments down to 12.4 million units.
AMD's quarter-to-quarter total desktop AIB unit shipments decreased 16%, while NVIDIA's quarter-to-quarter unit shipments increased by 5.5%.
NVIDIA's CEO and founder Jen-Hsun Huang has written on the company's official blog addressing the issue of the GeForce GTX 970 and its 4GB of VRAM. Huang says early on in the blog post: "We invented a new memory architecture in Maxwell. This new capability was created so that reduced-configurations of Maxwell can have a larger framebuffer - i.e., so that GTX 970 is not limited to 3GB, and can have an additional 1GB".
He adds that the GTX 970 is a 4GB card, and that the upper 512MB of its 4GB of frame buffer is "segmented and has reduced bandwidth". Huang elaborates, saying "This is a good design because we were able to add an additional 1GB for GTX 970 and our software engineers can keep less frequently used data in the 512MB segment". But, he acknowledges that this wasn't all good news, as the company "failed to communicate this internally to our marketing team, and externally to reviewers at launch".
"Instead of being excited that we invented a way to increase memory of the GTX 970 from 3GB to 4GB, some were disappointed that we didn't better describe the segmented nature of the architecture for that last 1GB of memory", explaining the 4GB of VRAM issue on the GTX 970 in more detail, "This is understandable. But, let me be clear: Our only intention was to create the best GPU for you. We wanted GTX 970 to have 4GB of memory, as games are using more memory than ever". Huang added: "The 4GB of memory on GTX 970 is used and useful to achieve the performance you are enjoying. And as ever, our engineers will continue to enhance game performance that you can regularly download using GeForce Experience".
With the release of AMD's Radeon 300 series right around the corner, and the tease of its upcoming Fiji-based Radeon R9 390X flagship video card, it's time to start speculating about what we can expect in terms of VRAM on the new GPUs.
We have heard that AMD will be using High Bandwidth Memory, or HBM, on the flagship R9 390X. A report over at Fudzilla points out that AMD is using something called a 2.5D-IC silicon interposer, which will see "two separate chips on the same silicon interposer and package substrate". AMD is reportedly building this on the 28nm process, and there will be two products on offer: one without HBM, and the other with HBM.
HBM 1.0 is currently limited to 1GB per stack, configured as four 2Gb layers, which with four stacks works out to a total of 4GB of VRAM, and that should raise some very serious questions. Setting aside memory bandwidth and the node AMD chooses to use (with all signs pointing to 28nm), a 4GB VRAM limit could hurt the company on the first new GPU it has released in over 18 months. Considering the issues NVIDIA has been going through with the GTX 970 and its "4GB" of VRAM, AMD has the opportunity to really drive the VRAM advantage home.
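Spelled out, the first-generation HBM capacity works out like this (the four-stack layout is the commonly reported configuration, not something AMD has confirmed):

\[
\underbrace{4\ \text{layers} \times 2\,\text{Gb}}_{\text{per stack}} = 8\,\text{Gb} = 1\,\text{GB},
\qquad 4\ \text{stacks} \times 1\,\text{GB} = 4\,\text{GB of VRAM}
\]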
Tom's Hardware has quite the exclusive report, citing a "source with knowledge" on the matter of DirectX 12 who says that the new API will be able to combine the power of competing GPUs. In other words, an NVIDIA GeForce GPU will be able to work together with an AMD Radeon card in a multi-GPU setup.
This is something DirectX 12 has on its side with its Explicit Asynchronous Multi-GPU capabilities, which throw all of the various graphics resources in a system into a single "bucket". From there, game developers will have to work out how the workload will be split, which could see different hardware being used for specific tasks.
One of the major points of this new multi-GPU technology is that multi-GPU configurations will no longer have to mirror their frame buffers, or VRAM. In previous APIs, right up to DX11, you needed two cards with identical amounts of VRAM to work in tandem, and that VRAM was mirrored rather than combined, so only one card's worth was effectively usable. This is a limitation of alternate frame rendering (AFR), but DX12 removes the 4GB + 4GB = 4GB limitation of AFR, replacing it with a new rendering method called SFR, or Split Frame Rendering.
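The starting point for this kind of explicit multi-GPU work is that the application, not the driver, enumerates every adapter and creates a device for each one, regardless of vendor. Here is a minimal sketch of that first step using the public DXGI/D3D12 APIs; how each frame is then split across the devices (SFR or otherwise) is left to the developer, and error handling is trimmed for brevity.

```cpp
// Sketch: enumerate every GPU and create one D3D12 device per adapter, vendor-agnostic.
// Link against d3d12.lib and dxgi.lib.
#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <vector>

using Microsoft::WRL::ComPtr;

std::vector<ComPtr<ID3D12Device>> createDevicePerAdapter() {
    ComPtr<IDXGIFactory4> factory;
    CreateDXGIFactory1(IID_PPV_ARGS(&factory));

    std::vector<ComPtr<ID3D12Device>> devices;
    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i) {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE) continue;  // skip software adapters

        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device)))) {
            devices.push_back(device);  // a GeForce and a Radeon can both land here
        }
    }
    return devices;  // the engine decides how to split each frame (e.g. SFR) across these
}
```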
The VRAM controversy over NVIDIA's GeForce GTX 970 continues, with class action lawsuits filed late last week. You might remember this all only happened a few weeks ago, with NVIDIA quick to admit it had falsely advertised the GTX 970 and to refresh the official specifications of the Maxwell-powered GPU.
There are two lawsuits at the moment, one targeting NVIDIA and the other GIGABYTE. The plaintiff is suing NVIDIA and GIGABYTE on behalf of all GeForce GTX 970 owners, making this a class action lawsuit, which means that any GeForce GTX 970 owner can join it. The plaintiff is asking for damages over four major complaints:
- #1 Unfair business practices.
- #2 Deceptive business practices.
- #3 Unlawful business practices.
- #4 Misleading advertising.
It seems GIGABYTE has been pulled into this lawsuit because Andrew Ostrowski, the plaintiff, purchased two of GIGABYTE's GeForce GTX 970 video cards. We could eventually see most of NVIDIA's other add-in-board (AIB) partners dragged into the class action as more and more people jump on board.
We know to expect some big things from DirectX 12, but the more we hear about it, the more we want it now, now, now. AnandTech has completed a deep-dive into the upcoming API from Microsoft, noticing some huge improvements across a range of hardware.
We've written about Brad Wardell, the CEO of Stardock, and his impressions of DX12 before, but he has now said that using an "unreleased GPU" he was able to see a huge 100FPS+ difference between the two APIs. He tweeted that he "did a test of DirectX 11 vs. DirectX 12 on an unreleased GPU with an 8core CPU. DX11: 13fps, DX12: 120fps. Lighting and lens effects".
When pressed, Wardell said he was using a CrossFire system with an Intel Core i7 CPU. Since he's using an "unreleased GPU", we can gather he might be testing the new Radeon R9 390X, which is another nice nugget of information: it means the cards are already out in the wild. Better yet, Wardell said that "one thing it does make it easy to treat multiple GPUs as a single identity". This lines up with something we reported on not too long ago, where the VRAM on multi-GPU systems would be presented as a single pool.
AMD is on the verge of announcing and releasing its new Radeon 300 series of cards, but according to a new report from Sweclockers, the Fiji-codenamed GPU will be the only new chip in the Radeon 300 series family. The rest of the cards will use current GCN cores, with the GCN 1.1 and GCN 1.2 architectures powering them.
The Radeon R9 390 and R9 390X should feature the new Fiji architecture, with the R9 390 arriving with the Fiji Pro GPU, while the R9 390X will rock the Fiji XT core. When it comes to the Radeon R9 395X2, we don't yet know whether it will carry two Fiji XT or two Fiji Pro GPUs. We do know that we should expect the Radeon R9 390X to feature 4096 cores, 4GB of HBM on a 4096-bit memory interface (1024-bit per stack), and hopefully much more. These new cards will be the first video cards in the world to feature SK Hynix's HBM, as well as our first look at the latest GCN 1.3 architecture.
The biggest beast of the new cards will be 'Bermuda', the R9 395X2 dual-GPU offering, which should feature the new GCN 1.3 architecture and the super-fast new HBM memory. We don't know what else to expect, but I would like to see AMD make two versions of the R9 390X available: one with 4GB of HBM and the other with 8GB of VRAM. Another nice touch would be two versions of the R9 395X2: one with 8GB of VRAM (4GB per GPU) and another with 16GB of VRAM (8GB per GPU).
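As a rough back-of-the-envelope figure for what that 4096-bit interface could deliver, assuming the 1Gb/s-per-pin rate SK Hynix has published for first-generation HBM (our assumption, not anything AMD has confirmed):

\[
4096\ \text{pins} \times 1\,\tfrac{\text{Gb}}{\text{s}} = 4096\,\tfrac{\text{Gb}}{\text{s}} = 512\,\tfrac{\text{GB}}{\text{s}}
\]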
Set for use in mini-ITX applications, the GTX960-MOC-2GD is built to look identical in design to its older siblings, the GTX 670 Mini and GTX 760 Mini.
Designed with a full-height PCB and measuring just 6.7 inches long, this card features a "CoolTech" fan said to be a hybrid between top-flow and lateral-flow designs. The rest of the cooling solution is a dense, heat pipe-fed, toroidal aluminum fin-stack heat sink ventilated by that fan.
Factory-overclocked with a 1190 MHz core clock and a 1253 MHz GPU Boost clock, this card is powered by a single 6-pin PCIe power plug. As for display outputs, you can expect dual-link DVI, HDMI 2.0, and three DisplayPort 1.2 ports.
No pricing or availability has been listed as of yet.