When NVIDIA's founder and CEO Jen-Hsun Huang teased the world with the first GeForce GTX Titan X during the Epic Games GDC 2015 event, the world went crazy. We kind of knew it was coming, but were expecting it to be unveiled at NVIDIA's own GTC 2015 event, which kicks off on March 17.
Well, media samples are now spreading across the world, with TechGage receiving its sample and taking some up close and personal shots. There's nothing new to tell you here, but we do have a much better look at the card itself. Sitting next to a GeForce GTX 980, we can see in the above shot that the Titan X looks virtually identical from the front (or top, in the case of this photo), with the only differences being the "GTX 980" and "TITAN" branding and the color scheme, which has changed from silver to a blackish color.
The Titan X ships without a backplate, which I'm sure is because the GM200 GPU powering it and its 12GB of VRAM run much hotter than the GTX 980. The GTX 980's backplate gets ridiculously hot as it is, so I think this is a great move by NVIDIA and something that needed to be done to keep the card from getting too hot.
It looks like AMD was using its next-generation GPU at the Game Developers Conference, without much fanfare. PC Perspective was there, with Ryan Shrout reporting that AMD was using its next-gen video card to power an Oculus Rift demo.
Shrout met with AMD shortly after, where he was told that the not-so-great looking system was powered by the "upcoming flagship Radeon R9 video card", but nothing else was said. Shrout asked if he could take the side panel off, look at the driver, or run GPU-Z, but he was shut down on every single option.
Then we have KitGuru reporting that AMD will launch its new Radeon 300 series, including the Radeon R9 390X at Computex in Taipei, Taiwan in June. We know that AMD has confirmed new GPUs for Q2 2015, so we should expect an announcement in the next 4-6 weeks, with a full retail launch in June? Surely AMD wouldn't wait until June to launch its new GPU when NVIDIA has just unveiled its GeForce GTX Titan X.
GDC 2015 - Earlier today we were introduced to the GeForce GTX Titan X from NVIDIA, its Maxwell-based card that features a gigantic 12GB of VRAM. Until now, we didn't know much else about it other than that 12GB framebuffer and its 8 billion transistors.
Legit Reviews spotted NVIDIA's soon-to-be-dominant card at the NVIDIA booth at GDC, housed inside of the beautiful In Win Tou Tempered Glass chassis, where they got a closer look at the GTX Titan X. Thanks to the closer look, we can see that the GTX Titan X is a dual-slot card with 6-pin and 8-pin PCIe power connectors, and two SLI connectors, which means you could have four of these bad boys in SLI. Yes, four GTX Titan X cards in SLI, for a total of 48GB of VRAM.
With the GeForce GTX Titan Z on the market being a dual-GPU solution, the Titan X is 100% confirmed as a single-GPU card, which means one GPU has all of that 12GB framebuffer to itself. Even with four of these in SLI, games will see 12GB of usable VRAM, since each card mirrors the same data, which is still much better than the Titan Z or the Titan Black Edition cards, which have 6GB per GPU.
According to the latest report from Jon Peddie Research (JPR), NVIDIA is dominating the GPU market share game against AMD. JPR's data for Q3 2014 has NVIDIA securing a huge 76% of the GPU market share, leaving AMD with just 24%. Matrox and S3 are now out of the game, with Matrox losing its small 0.10% market share to NVIDIA.
JPR's estimated graphics add-in-board (AIB) shipment and supplier market share report for the quarter tracks add-in graphics boards, which feature discrete GPUs. These AIBs are used in various devices, such as desktop PCs, workstations, servers, and other devices "such as scientific instruments". JPR's report found that AIB shipments decreased by 0.68% from the previous quarter, to a total of 12.4 million units.
AMD's desktop AIB unit shipments decreased 16% quarter-over-quarter, while NVIDIA's unit shipments increased by 5.5%.
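As a quick sanity check, JPR's reported shares and the 12.4 million-unit total imply roughly the following board counts (a back-of-the-envelope sketch; JPR's own figures may round differently):

```python
total_aib_units = 12_400_000  # JPR: total AIB shipments for the quarter
shares = {"NVIDIA": 0.76, "AMD": 0.24}  # reported market share

for vendor, share in shares.items():
    units_m = total_aib_units * share / 1e6
    print(f"{vendor}: ~{units_m:.1f} million boards")
# NVIDIA: ~9.4 million boards
# AMD: ~3.0 million boards
```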
Tom's Hardware has quite the exclusive report, saying they have a "source with knowledge" on the matter of DirectX 12, and that the new API will combine the powers of competing GPUs. In other words, an NVIDIA GeForce GPU will be able to work together in a multi-GPU setup with an AMD Radeon card.
This is something DirectX 12 has on its side with its Explicit Asynchronous Multi-GPU capabilities, which will throw all of the various graphics resources in a system into a single "bucket". From there, game developers will have to work out how the workload will be split, which could see different hardware handling specific tasks.
One of the major points of this new multi-GPU technology is that multi-GPU configurations will no longer have to mirror their framebuffers, or VRAM. In previous APIs, right up to DX11, you needed two cards with identical VRAM amounts to work in tandem, but only one lot of VRAM is utilized; it's not combined. This is a limitation of alternate frame rendering (AFR), but DX12 is removing the 4 + 4 = 4 limitation of AFR, replacing it with a new method called SFR, or Split Frame Rendering.
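The 4 + 4 = 4 point can be sketched in a couple of lines; this is just an illustrative model of the arithmetic above, not how any driver actually allocates memory:

```python
def usable_vram_afr(vram_per_card_gb, num_cards):
    """AFR (DX11 and earlier): every card mirrors the same resources,
    so usable VRAM stays at a single card's worth."""
    return vram_per_card_gb

def usable_vram_sfr(vram_per_card_gb, num_cards):
    """SFR under DX12: resources can be split across cards,
    so in the ideal case the pools add up."""
    return vram_per_card_gb * num_cards

print(usable_vram_afr(4, 2))  # 4 -- two 4GB cards under AFR: 4 + 4 = 4
print(usable_vram_sfr(4, 2))  # 8 -- the same pair under SFR: 4 + 4 = 8
```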
We know to expect some big things from DirectX 12, but the more we hear about it, the more we want it now, now, now. AnandTech has completed a deep-dive into the upcoming API from Microsoft, noticing some huge improvements across a range of hardware.
We've written about Brad Wardell, the CEO of Stardock, and his impressions of DX12, but he has now said that using an "unreleased GPU" he was able to see a difference of over 100FPS between the two APIs. He tweeted that he "did a test of DirectX 11 vs. DirectX 12 on an unreleased GPU with an 8core CPU. DX11: 13fps, DX12: 120fps. Lighting and lens effects".
When pressed, Wardell said he was using a Crossfire system with an Intel Core i7 CPU. Since he's using an "unreleased GPU", we can gather he might be using the new Radeon R9 390X, which is another nice nugget of information: it means they're out in the wild. Better yet, Wardell said that "one thing it does make it easy to treat multiple GPUs as a single identity". This is something we reported on not too long ago, where we said that the VRAM on multi-GPU systems would be seen as one.
AMD is on the verge of announcing and releasing its new Radeon 300 series of cards, but according to a new report from Sweclockers, the codenamed Fiji GPU will be the only new chip in the Radeon 300 series family. The rest of the cards will use the current GCN cores, with the GCN 1.1 and GCN 1.2 architectures powering them.
The Radeon R9 390 and R9 390X should feature the new Fiji architecture, with the R9 390 arriving with the Fiji Pro GPU, while the R9 390X will rock the Fiji XT core. When it comes to the Radeon R9 395X2, we don't know if we'll see two Fiji XT or two Fiji PRO GPUs on it just yet. We do know that we should expect the Radeon R9 390X to feature 4096 cores, 4GB of HBM memory on a 4096-bit bus (1024 bits per stack) and hopefully, much more. These new cards will be the first video cards in the world to feature SK Hynix's HBM memory, as well as our first look at the latest GCN 1.3 architecture.
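Assuming first-generation HBM runs at an effective 1Gbps per pin (the figure from SK Hynix's HBM1 spec sheet, not anything AMD has confirmed for this card), a 4096-bit bus gives a theoretical peak bandwidth of:

```python
bus_width_bits = 4096   # four HBM stacks, 1024 bits each
data_rate_gbps = 1.0    # assumed HBM1 effective per-pin data rate (500MHz DDR)

bandwidth_gb_s = bus_width_bits * data_rate_gbps / 8  # bits -> bytes
print(bandwidth_gb_s)  # 512.0 GB/s
```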
The biggest beast of the new cards will be 'Bermuda', the R9 395X2 dual-GPU offering, which should feature the new GCN 1.3 architecture and the super-fast new HBM memory. We don't know what else to expect, but I would like to see AMD make two versions of its R9 390X available: one with 4GB of HBM memory and the other with 8GB. Another nice touch would be two versions of the R9 395X2: one with 8GB of VRAM (4GB per GPU) and another with 16GB (8GB per GPU).
Set for use in mini-ITX applications, the GTX960-MOC-2GD is built to look identical in design to its older GTX 670 Mini and GTX 760 Mini siblings.
Designed with a full-height PCB and measuring just 6.7 inches long, this card features a "CoolTech" fan said to be a hybrid between top-flow and lateral-flow designs. The rest of the cooling solution consists of a dense, heat pipe-fed toroidal aluminum fin-stack heatsink, ventilated by the aforementioned fan.
Factory-overclocked with a 1190MHz base clock and a 1253MHz GPU Boost clock, this card is powered by a single 6-pin PCIe power plug. As for display options, you can expect the usual inclusions: dual-link DVI, HDMI 2.0 and three DisplayPort 1.2 ports.
No pricing or availability has been listed as of yet.
AMD must be so close to unveiling its next-generation range of GPUs, but the latest information on the Radeon R9 390X has it rocking an all-in-one (AIO) liquid cooler made by Cooler Master.
Cutting to the chase, AMD will reportedly ship its reference Radeon R9 390X with an AIO cooler but AIB partners like SAPPHIRE, XFX and so forth will ship their own coolers. WCCFTech is now reporting that the new Radeon 300 series, and more specifically the flagship Radeon R9 390X will launch in "four to six weeks", which should see it released in late March, or early April.
AMD has said that it's working on something "crazy" for the Game Developers Conference (GDC) 2015, which kicks off in early March. This new information could be true, with AMD showing off its new GPU at GDC 2015, which is incredibly exciting.
Many users are reportedly outraged at NVIDIA's removal of overclocking capabilities from its 900M series through the latest driver release. Although mobile video cards are not generally overclocked, customers who purchased systems containing GTX 980M GPUs were applying mild overclocks to get the most out of their systems.
The driver update in question is GeForce R347 (347.29), which removes overclocking and withdraws support for any third-party overclocking tools you may wish to install. Users have been issuing complaints on NVIDIA's official forums, which drew a response from NVIDIA staff. Manuel Guzman replied: "unfortunately GeForce Notebooks were not designed to support overclocking. Overclocking is by no means a trivial feature, and depends on thoughtful design of thermal, electrical, and other considerations. By overclocking a notebook, a user risks serious damage to the system that could result in non-functional systems, reduced notebook life, or many other effects."
As seen on HotHardware, Guzman went on to express that allowing mobile GPU overclocking in the first place was a mistake made by his team and should have never been implemented.