If yesterday's massive array of Volta GPU architecture leaks wasn't enough, we also had teases of a purported Pascal 2.0 refresh coming in 2017, as well as massively fast graphics cards from NVIDIA in 2018 and beyond. Now we're hearing more concrete information about the GeForce GTX 1080 Ti... the card everyone has been waiting for.
The new report has NVIDIA releasing the GeForce GTX 1080 Ti in January 2017 (most likely at CES 2017 in early January), with very similar specs to the blazingly fast Pascal-based Titan X. NVIDIA's new GeForce GTX 1080 Ti will reportedly rock most of the same specifications as the Titan X, so we have 12GB of GDDR5X being used at 10Gbps on a 384-bit memory bus that will provide 480GB/sec of memory bandwidth.
NVIDIA's purported GeForce GTX 1080 Ti will have some CUDA cores shaved away, down to 3328 CUDA cores from the 3584 CUDA cores on Titan X. The GP102 GPU used will be clocked at 1503/1623 for Base/Boost clocks, respectively - providing 10.8 TFLOPs of compute performance, all with a 250W TDP. So we're looking at a very close competitor to Titan X, but cheaper.
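The rumored numbers hang together on back-of-the-envelope arithmetic: bandwidth is bus width times per-pin data rate, and single-precision compute is CUDA cores times boost clock times two FLOPs per fused multiply-add. A quick sanity check in Python (using only the leaked figures above):

```python
# Sanity check of the rumored GTX 1080 Ti specs.
# Memory bandwidth: bus width (bits) * per-pin data rate (Gbps) / 8 bits-per-byte.
bus_width_bits = 384
data_rate_gbps = 10  # GDDR5X effective rate per pin
bandwidth_gbs = bus_width_bits * data_rate_gbps / 8
print(f"Memory bandwidth: {bandwidth_gbs:.0f} GB/s")  # 480 GB/s

# Single-precision compute: CUDA cores * boost clock (GHz) * 2 FLOPs (FMA).
cuda_cores = 3328
boost_clock_ghz = 1.623
tflops = cuda_cores * boost_clock_ghz * 2 / 1000
print(f"Compute: {tflops:.1f} TFLOPs")  # 10.8 TFLOPs
```

Both results match the leak exactly, which at least means the rumor is internally consistent.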
What about price? I'm expecting NVIDIA to launch the new GeForce GTX 1080 Ti at around $899, but I'm expecting them to drop the price of the Titan X by then, as well as the GTX 1080 and GTX 1070 to put the squeeze on AMD for when it launches Vega a few months later.
We've just heard that NVIDIA will unveil a Pascal refresh in 2017, but the next-gen Volta GPU architecture is what everyone wants to know about at the high-end/enthusiast end of the scale.
Volta is expected to be unveiled at NVIDIA's own GPU Technology Conference, which takes place May 8-11 next year. NVIDIA is expected to follow in the steps of its Tesla P100 unveiling at GTC 2016 earlier this year, unveiling Volta on a new HPC part first, with the Volta-based HPC product expected to be powered by the ridiculously fast HBM2 tech.
NVIDIA boss Jen-Hsun Huang is also rumored to unveil an updated GPU roadmap, which will include new codenames and technology details for future GPU technology from NVIDIA. NVIDIA is expected to skip the 10nm node entirely, moving straight to 7nm once Volta is here and established on the 14nm node. Not only that, but future GPUs will have support for both HBM3 and GDDR6.
NVIDIA is not only expected to unveil a Pascal 2.0 refresh in 2017 on Samsung's 14nm FinFET node, but the company is expected to make a big splash at its GTC 2017 event in May by unveiling their new Volta architecture, and more.
Well, today is the day of NVIDIA leaks, with the company reportedly aiming at a 2018 release for its consumer-focused GeForce graphics cards based on the next-gen Volta GPU architecture. These new cards will reportedly rock a huge 16GB of GDDR6, the new standard from Micron that's quite a large step up from the already impressive GDDR5X standard that powers the GeForce GTX 1080 and new Titan X.
Micron's new GDDR6 has over 14Gbps of bandwidth per pin, compared to the 10Gbps on GDDR5X and just 7-8Gbps on GDDR5. GDDR6 is also much more efficient than GDDR5X, with lower power consumption allowing for more VRAM on higher-end graphics cards. The new 16GB GDDR6 cards will be the upcoming Volta-based graphics cards with a 256-bit memory bus, while the higher-end GV102 will use a faster 384-bit memory bus, possibly allowing for 24GB or even 48GB cards thanks to GDDR6.
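To put those bus widths in perspective, here's the peak bandwidth each rumored configuration would imply, assuming GDDR6 runs at the 14Gbps per-pin rate mentioned above (the 10Gbps GDDR5X figure for today's GTX 1080 is included for comparison):

```python
# Hypothetical peak bandwidth for the rumored Volta memory configs,
# assuming GDDR6 at 14Gbps per pin. These are my own illustrative numbers.
def bandwidth_gbs(bus_width_bits, data_rate_gbps):
    """Peak memory bandwidth in GB/s: bits on the bus * rate per pin / 8."""
    return bus_width_bits * data_rate_gbps / 8

print(bandwidth_gbs(256, 14))  # 256-bit Volta card: 448.0 GB/s
print(bandwidth_gbs(384, 14))  # 384-bit GV102: 672.0 GB/s
print(bandwidth_gbs(256, 10))  # GTX 1080 today, 256-bit GDDR5X: 320.0 GB/s
```

Even the 256-bit Volta card would comfortably out-muscle today's GTX 1080 on raw bandwidth.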
A treasure trove of details on NVIDIA's GPU plans for 2017 and beyond has surfaced, courtesy of Baidu user USG Ishimura, who said that there is a Pascal refresh on the way, which will be followed by Volta, and so much more.
Starting with the Pascal 2.0 refresh: it will supposedly see more GPUs released on the GP102 core, meaning cheaper versions of the incredibly fast Titan X and GTX 1080 graphics cards. This is similar to what NVIDIA did with the GTX 700 series based on the Kepler architecture. NVIDIA recently inked a deal with Samsung to have its Pascal architecture made on the 14nm FinFET process, with the current Pascal cards being made on the 16nm FinFET node by TSMC.
Micron has been providing the super-fast GDDR5X, clocked at 10Gbps, for the GTX 1080 and Titan X, but yields didn't start well and should improve as we get closer to the purported Pascal refresh. GDDR5X would find its way onto the entire Pascal refresh, except for the entry-level GP107-based cards. NVIDIA's 16nm-based Pascal cards have also been hitting 2GHz GPU clocks easily, so we could see GPU Boost 3.0 keeping clock speeds higher than we have now, allowing for proper differentiation between AIB partner cards.
AMD has been spreading their new Polaris architecture throughout the different markets that they serve. That started out with the RX 480, a gaming graphics card, and continued into the professional arena with their newly branded Radeon Pro brand of graphics cards. AMD also updated their data center GPUs with Polaris graphics cards with their MxGPU line of products, so the last place for AMD to update their lineup was in embedded.
AMD's embedded graphics products generally sit within a few applications where the company has traditionally done well: medical imaging, digital signage, and casino gaming. These are all familiar applications for AMD embedded graphics chips, but those use cases are expanding beyond the standard expected applications with these new AMD embedded graphics chips.
NVIDIA used to supply graphics hardware for some of Apple's Mac products, but current-gen Mac systems are running AMD Radeon graphics cards, and this could possibly change in the future.
According to a job ad posted by NVIDIA, they are looking for a software engineer who would "help produce the next revolutionary Apple products", reports Bloomberg. The job post continues, adding that the role would require "working in partnership with Apple", as well as writing code that will "define and shape the future" of graphics-related software on Macs.
There isn't just a single job posting, either, but three job listings on NVIDIA's database that reference Apple, with the latest appearing last week. One of the jobs specifies working with the NVIDIA Mac graphics driver team, which is interesting indeed. Apple has always been one to engineer its products to a very high standard, and with NVIDIA continuing to refine its GPU architecture, could we see Pascal or Volta in future Mac systems? I think so.
NVIDIA is reportedly preparing two more mid-range graphics cards, with the GeForce GTX 1050 and GTX 1050 Ti on their way - the GTX 1050 rocking 2GB of VRAM, while the GTX 1050 Ti will run 4GB of VRAM.
The new GeForce GTX 1050 Ti will reportedly rock 768 CUDA cores, 48 TMUs, and 32 ROPs, while the GTX 1050 has less of each with 640 CUDA cores and 40 TMUs; the ROP count we don't know just yet. We do know that the GTX 1050 Ti will have 4GB of VRAM while the GTX 1050 has 2GB, both with a 128-bit memory bus and a 75W TDP.
The rumors have NVIDIA announcing and launching the new GeForce GTX 1050 Ti and GTX 1050 next month.
Following the release of NVIDIA's new GeForce 372.90 drivers tuned for Forza Horizon 3, AMD has released their new Radeon Software Crimson Edition 16.9.2 drivers that also have support for Forza Horizon 3.
There are a bunch of fixed issues, as well as known issues that AMD is aware of. The new RSCE 16.9.2 drivers fix a "small amount of corruption" in the lower right-hand corner of the display on Radeon HD 7000 series cards in Deus Ex: Mankind Divided, as well as some stuttering issues with CrossFire mode in DX11 in Mankind Divided.
Here's the full list of fixed issues on RSCE 16.9.2, which you can grab here:
We've just heard some juicy details on AMD's next-gen Vega 10 and Vega 20 graphics cards - with 16GB of HBM2 and 32GB of HBM2, respectively. But it's the news on Navi that has me even more excited, and simultaneously disappointed.
VideoCardz.com's source says that Navi 10 and Navi 11 are "currently planned for 2019", meaning that AMD has delayed Navi by 12 months from its original 2018 release, and now into 2019. We haven't confirmed this yet, but I've sent some emails and will update this post when we hear back from someone at AMD.
We just reported on the first details of AMD's next-gen Vega 10 graphics card, but it seems as though that's the mid-range model, while the higher-end Vega 20 is shaping up to be a damn monster.
AMD's upcoming Vega 20 graphics card will feature 32GB of HBM2 memory with 1TB/sec of memory bandwidth, up from the 16GB of HBM2 with 512GB/sec bandwidth offered on Vega 10. Not only that, but the Vega 20 chip will reportedly support the upcoming PCI-Express 4.0 standard.
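That bandwidth doubling is consistent with simply doubling the HBM2 stack count - per the JEDEC HBM2 spec, each stack has a 1024-bit interface at up to 2.0 Gbps per pin, or 256GB/sec per stack. The stack counts and per-stack capacity below are my assumption, not confirmed by the leak:

```python
# HBM2 arithmetic behind the rumored Vega numbers.
# JEDEC HBM2: 1024-bit interface per stack at up to 2.0 Gbps per pin.
per_stack_gbs = 1024 * 2.0 / 8  # 256 GB/s per stack

# Assumed configs: 2 stacks of 8GB for Vega 10, 4 stacks for Vega 20.
for name, stacks, gb_per_stack in [("Vega 10", 2, 8), ("Vega 20", 4, 8)]:
    capacity_gb = stacks * gb_per_stack
    bandwidth = stacks * per_stack_gbs
    print(f"{name}: {capacity_gb} GB HBM2, {bandwidth:.0f} GB/s")
# Vega 10: 16 GB HBM2, 512 GB/s
# Vega 20: 32 GB HBM2, 1024 GB/s (~1 TB/sec)
```

Both rumored figures fall straight out of the spec, which makes the 4-stack Vega 20 story plausible on paper.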