NVIDIA is not only expected to unveil a Pascal 2.0 refresh in 2017 on Samsung's 14nm FinFET node, but the company is also expected to make a big splash at its GTC 2017 event in May by unveiling their new Volta architecture, and more.
Well, today is the day of NVIDIA leaks, with the company reportedly aiming at a 2018 release for its consumer-focused GeForce graphics cards based on the next-gen Volta GPU architecture. These new cards will reportedly rock a huge 16GB of GDDR6, the new standard from Micron that's a large step up from the already impressive GDDR5X that powers the GeForce GTX 1080 and new Titan X.
Micron's new GDDR6 runs at over 14Gbps per pin, compared to the 10Gbps of GDDR5X and just 7-8Gbps of GDDR5. GDDR6 is also much more efficient than GDDR5X, with lower power consumption allowing for more VRAM on higher-end graphics cards. The new 16GB GDDR6 cards will be based on upcoming Volta GPUs with a 256-bit memory bus, while the higher-end GV102 will use a faster 384-bit memory bus, making 24GB or even 48GB cards possible thanks to GDDR6.
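For context, those per-pin speeds translate into total memory bandwidth with some simple arithmetic: multiply the per-pin data rate by the bus width and divide by 8 bits per byte. A quick Python sketch, using the rumored bus widths above (the formula itself is standard, the card configurations are the leak's):

```python
def memory_bandwidth_gbs(data_rate_gbps: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s: per-pin data rate (Gbps) times
    bus width (bits), divided by 8 bits per byte."""
    return data_rate_gbps * bus_width_bits / 8

# GDDR6 at 14Gbps on the rumored Volta bus widths:
print(memory_bandwidth_gbs(14, 256))  # 448.0 GB/s on a 256-bit bus
print(memory_bandwidth_gbs(14, 384))  # 672.0 GB/s on the GV102's 384-bit bus
```

For comparison, the same math gives the GTX 1080's 320GB/sec from 10Gbps GDDR5X on a 256-bit bus.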
A treasure trove of details on NVIDIA's GPU plans for 2017 and beyond has surfaced, courtesy of Baidu user USG Ishimura, who said that there is a Pascal refresh on the way, which will be followed by Volta, and so much more.
Starting with the Pascal 2.0 refresh: it will supposedly see more GPUs released on the GP102 core, meaning cheaper versions of the incredibly fast Titan X and GTX 1080 graphics cards. This is similar to what NVIDIA did with the Kepler-based GTX 700 series. NVIDIA recently inked a deal with Samsung to have their Pascal architecture made on the 14nm FinFET process, with the current Pascal cards made on TSMC's 16nm FinFET node.
Micron has been providing the super-fast 10Gbps GDDR5X for the GTX 1080 and Titan X, but yields started poorly and should only improve as we get closer to the purported Pascal refresh. GDDR5X should find its way onto the entire Pascal refresh, except for the entry-level GP107-based cards. NVIDIA's 16nm-based Pascal cards have also been hitting 2GHz GPU clocks easily, so we could see GPU Boost 3.0 keeping clock speeds higher than we have now, allowing for proper differentiation between AIB partner cards.
AMD has been spreading their new Polaris architecture throughout the different markets that they serve. That started out with the RX 480, a gaming graphics card, and continued into the professional arena with their newly branded Radeon Pro brand of graphics cards. AMD also updated their data center GPUs with Polaris graphics cards with their MxGPU line of products, so the last place for AMD to update their lineup was in embedded.
AMD's embedded graphics products tend to sit within a few applications where the company has traditionally done well: medical imaging, digital signage, and casino gaming. These are all familiar markets for AMD's embedded graphics chips, but these new chips push those use cases beyond the standard expected applications.
NVIDIA used to supply graphics chips for some of Apple's Mac products, but current-gen Mac systems run AMD Radeon graphics, and that could change in the future.
According to a job ad posted by NVIDIA, the company is looking for a software engineer who would "help produce the next revolutionary Apple products", reports Bloomberg. The job post continues, adding that the role would require "working in partnership with Apple", as well as writing code that will "define and shape the future" of graphics-related software on Macs.
There isn't just a single job posting, either: three listings on NVIDIA's database reference Apple, with the latest appearing last week. One of the jobs specifies working with the NVIDIA Mac graphics driver team, which is interesting to see. Apple has always engineered its products to a very high standard, and with NVIDIA continuing to refine their GPU architecture, could we see Pascal or Volta in future Mac systems? I think so.
NVIDIA is reportedly preparing two more mid-range graphics cards, the GeForce GTX 1050 and GTX 1050 Ti, with the GTX 1050 rocking 2GB of VRAM while the GTX 1050 Ti will run 4GB of VRAM.
The new GeForce GTX 1050 Ti will reportedly rock 768 CUDA cores, 48 TMUs, and 32 ROPs - while the GTX 1050 has fewer of each, with 640 CUDA cores, 40 TMUs, and an as-yet-unknown ROP count. We do know that the GTX 1050 Ti will have 4GB of VRAM while the GTX 1050 has 2GB, with both cards on a 128-bit memory bus and a 75W TDP.
The rumors have NVIDIA announcing and launching the new GeForce GTX 1050 Ti and GTX 1050 next month.
Following the release of NVIDIA's new GeForce 372.90 drivers tuned for Forza Horizon 3, AMD has released their new Radeon Software Crimson Edition 16.9.2 drivers that also have support for Forza Horizon 3.
There's a bunch of fixed issues, as well as known issues that AMD is aware of. The new RSCE 16.9.2 drivers fix a "small amount of corruption" in the lower right-hand corner of the display on Radeon HD 7000 series cards in Deus Ex: Mankind Divided, as well as some stuttering issues with CrossFire mode in DX11 in Mankind Divided.
Here's the full list of fixed issues on RSCE 16.9.2, which you can grab here:
We've just heard some juicy details on AMD's next-gen Vega 10 and Vega 20 graphics cards - with 16GB of HBM2 and 32GB of HBM2, respectively. But it's the news on Navi that has me even more excited, and simultaneously disappointed.
VideoCardz.com's source says that Navi 10 and Navi 11 are "currently planned for 2019", meaning AMD has delayed Navi by 12 months from its original 2018 target into 2019. We haven't confirmed this yet, but I've sent some emails and will update this post when we hear back from someone at AMD.
We just reported on the first details of AMD's next-gen Vega 10 graphics card, but it seems as though that's the mid-range model, while the higher-end Vega 20 is shaping up to be a damn monster.
AMD's upcoming Vega 20 graphics card will feature 32GB of HBM2 memory with 1TB/sec of memory bandwidth, up from the 16GB of HBM2 with 512GB/sec bandwidth offered on Vega 10. Not only that, but the Vega 20 chip will reportedly support the upcoming PCI-Express 4.0 standard.
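Those bandwidth figures line up neatly with HBM2's per-stack numbers: each HBM2 stack exposes a 1024-bit interface, and at 2Gbps per pin that works out to 256GB/sec per stack. The stack counts below are my own inference from the leaked bandwidth figures, not something the source confirms:

```python
# Each HBM2 stack exposes a 1024-bit interface; at 2Gbps per pin that's
# 1024 * 2 / 8 = 256 GB/s per stack. The stack counts are inferred from
# the leaked bandwidth figures, not stated in the leak itself.
STACK_BANDWIDTH_GBS = 1024 * 2 / 8  # 256.0 GB/s per stack

print(2 * STACK_BANDWIDTH_GBS)  # 512.0 GB/s -> matches Vega 10's figure
print(4 * STACK_BANDWIDTH_GBS)  # 1024.0 GB/s (~1TB/sec) -> matches Vega 20
```

In other words, Vega 10's 512GB/sec suggests two stacks of HBM2, while Vega 20's 1TB/sec would need four.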
As Batman said in Batman v Superman: "well... here I am", and now here we are with details on Vega, teasing us by saying "well... here I am", with the same sly smile Batman had when he first talked to Superman.
Vega 10 will arrive in Q1 2017, with a reported 16GB of HBM2 and 512GB/sec of memory bandwidth. Power-wise, the Vega 10-based card should see a 225W TDP, while the dual-GPU card based on the Vega architecture will be released in Q2 2017, with a purported TDP of around 300W.
We all know NVIDIA is preparing a GeForce GTX 1080 Ti, but we don't know when the company will announce or release it. New rumors have emerged from an enthusiast who spotted the specifications of the GTX 1080 Ti on NVIDIA's own website.
NVIDIA's new GeForce GTX 1080 Ti's leaked specifications teased the GP102 GPU, the same chip that powers the new Titan X, with 3328 CUDA cores and 12GB of GDDR5 RAM. The big thing to note here is that the purported GTX 1080 Ti wouldn't be using the faster GDDR5X that the new Titan X and GTX 1080 use, but it'll rock the 384-bit memory bus that the Titan X uses.
Equipped with the 384-bit memory bus, the purported GTX 1080 Ti would have 384GB/sec of memory bandwidth - sitting in between the Titan X and its 480GB/sec, and the GTX 1080 with its 320GB/sec - the GTX 1080 uses a 256-bit memory bus, while the Titan X uses a 384-bit memory bus. The GPU would be clocked at 1503/1623MHz for base and boost clocks, respectively - with a 250W TDP.
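Those three bandwidth figures check out against the usual formula - per-pin data rate times bus width, divided by 8 bits per byte - so the leak is at least internally consistent. A quick sanity check in Python (the data rates are the standard GDDR5/GDDR5X speeds, assumed rather than stated by the leak):

```python
# Sanity-checking the leaked figures: peak bandwidth (GB/s) is the per-pin
# data rate (Gbps) times bus width (bits), divided by 8 bits per byte.
cards = {
    "GTX 1080 Ti (rumored)": (8, 384),   # GDDR5 at 8Gbps, 384-bit bus
    "Titan X":               (10, 384),  # GDDR5X at 10Gbps, 384-bit bus
    "GTX 1080":              (10, 256),  # GDDR5X at 10Gbps, 256-bit bus
}
for name, (rate_gbps, bus_bits) in cards.items():
    print(f"{name}: {rate_gbps * bus_bits / 8:.0f} GB/s")
# -> 384, 480, and 320 GB/s, matching the figures above
```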
Now, for the price - this isn't confirmed and is just an estimate from me, but I think we'll see it priced at $899. At that price, it's $200 more than the GeForce GTX 1080 Founders Edition and $300 cheaper than the Titan X, putting it right in the middle. It should be cheaper to make thanks to its use of GDDR5 over GDDR5X, which saves NVIDIA costs, but the 384-bit memory bus puts it closer to Titan X performance territory, and that's something NVIDIA will factor into the price.