Video Cards News
Sony still hasn't given us many details on the inner workings of its next-gen PlayStation 5 console, but a new patent the company filed in 2019 is interesting.
The new patent was filed in 2019 but only published a few days ago on July 23, and discusses image reconstruction technology that uses reference images through machine learning. You know, kinda like how NVIDIA's own Deep Learning Super Sampling -- or DLSS -- works.
This is exciting, as NVIDIA DLSS 2.0 provides double the performance in games that support it while not looking any worse -- and in some cases looking better.
The patent reads: "An information processing device for acquiring a plurality of reference images obtained by imaging an object that is to be reproduced, acquiring a plurality of converted images obtained by enlarging or shrinking each of the plurality of reference images, executing machine learning using a plurality of images to be learned, as teaching data, that include the plurality of converted images, and generating pre-learned data that is used for generating a reproduction image that represents the appearance of the object".
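To make that patent language a little more concrete, here is a minimal sketch of the described pipeline in Python: acquire reference images, generate scaled "converted images", learn from them, and use the learned data to produce a reproduction image. This is purely illustrative -- not Sony's implementation -- with random arrays standing in for images and a least-squares linear map standing in for a real neural network.

```python
import numpy as np

rng = np.random.default_rng(0)

def shrink(img):
    # 2x2 average pooling: the patent's "converted images" (shrunken copies)
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

# "Plurality of reference images" of the object (random stand-ins here)
refs = [rng.random((8, 8)) for _ in range(50)]

# Teaching data: low-res converted image in, original reference image out
X = np.stack([shrink(r).ravel() for r in refs])   # shape (50, 16)
Y = np.stack([r.ravel() for r in refs])           # shape (50, 64)

# "Executing machine learning": least-squares fit of an upscaling map.
# W is the "pre-learned data" the patent talks about.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

def reconstruct(low_res):
    # Generate the "reproduction image" from the pre-learned data
    return (low_res.ravel() @ W).reshape(8, 8)

test_img = rng.random((8, 8))
up = reconstruct(shrink(test_img))
print(up.shape)  # (8, 8)
```

A real DLSS-style system would use a deep network and far richer training data, but the data flow -- train on high/low-resolution pairs, then reconstruct from the learned weights -- is the same shape.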
NVIDIA has just released a new hotfix driver, something that fixes some of the issues that were found in the recent GeForce 451.67 WHQL Game Ready drivers.
The new GeForce 451.85 drivers solve some problems in Hideo Kojima's masterpiece Death Stranding, which I checked out in 8K with DLSS 2.0 enabled -- with some mind-blowing results.
Not just that, but the new hotfix drivers provide stability to Shadow of the Tomb Raider when you're running it in DX12 with hardware-accelerated GPU scheduling enabled. Death Stranding also gets some texture corruption problems fixed on GeForce RTX 20 series and GeForce GTX 16 series graphics cards.
NVIDIA has its new Ampere GPU architecture powering the benchmark-crushing Ampere A100 accelerator, with the Ampere-based GeForce RTX 3000 series soon to follow -- AMD has its RDNA 2 architecture right around the corner, plus the APUs inside the next-gen PlayStation 5 and Xbox Series X... while Intel has, well, lots of promises -- and no GPU.
Intel is expected to finally have something to share on its Xe graphics according to a quickly-deleted tweet. Intel posted the tweet to its official Intel Graphics Twitter account, where it said: "You've waited. You've wondered. We'll deliver. In 20 days, expect more details on Xe graphics".
Keep in mind that Intel has a few events it will be (virtually) attending, with David Blythe, Senior Fellow and Director of Graphics Architecture at Intel, delivering a speech all about the Xe GPU architecture at Hot Chips on August 17.
NVIDIA announced its Ampere GPU architecture with the introduction of the Ampere A100 accelerator earlier this year, the company's first 7nm GPU -- and its first PCIe 4.0 card. But what good is a next-generation GPU without benchmark numbers?
Enter Jules Urbach, the CEO of OTOY -- a cloud graphics company famous for its OctaneRender software -- who has benchmarked NVIDIA's new Ampere-based A100 accelerator on OctaneBench, a benchmark designed to test performance in OctaneRender.
OTOY benchmarked the GA100-based card, which packs 6912 CUDA cores and 40GB of super-fast HBM2 memory.
I thought it was known from previous reports, but yeah -- NVIDIA will be unveiling its new Ampere-based GeForce RTX 3000 range of graphics cards in August, with a September launch into gamers' hands.
The news is coming once again from Overclocking.com and other sources, but it's something that I revealed back in March 2020 in an article titled "NVIDIA GeForce RTX 3080: August 2020 reveal, launch at Computex 2020" which you can read here.
Videocardz points out that other people after me said an August reveal and September launch would happen, and now even more outlets are reporting it -- so it seems like we're ramping right into a September launch for NVIDIA's next-gen GeForce RTX 3000 series graphics cards. Exciting times, people!
I have a feeling this is how it's going to be for the next few months: leak after leak, drip after drip of information on AMD and NVIDIA's next generation graphics cards.
The first one I'm writing today covers some new details on AMD's upcoming Big Navi, aka RDNA 2, aka the "NVIDIA killer". AMD's upcoming flagship Big Navi graphics card is expected to make its big debut with a huge 16GB of VRAM -- but we don't know if it's GDDR6, or HBM2 / HBM2e memory.
AMD's new flagship Big Navi card should have 16GB of VRAM on a 384-bit bus according to leaker 'Wjm47196' suggesting AMD will go with GDDR6 over HBM2 or the newer HBM2e standard.
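For a sense of what a 384-bit GDDR6 bus would mean in practice, here's a quick back-of-the-envelope bandwidth calculation. The 16 Gbps per-pin data rate is my assumption (a common GDDR6 speed grade) -- the leak only mentions the bus width and capacity.

```python
# Rough memory bandwidth for the rumored 384-bit GDDR6 configuration.
bus_width_bits = 384      # from the leak
per_pin_gbps = 16         # assumption: a typical GDDR6 data rate, not from the leak

total_gbps = bus_width_bits * per_pin_gbps   # total gigabits per second
bandwidth_gb_s = total_gbps / 8              # convert to gigabytes per second

print(bandwidth_gb_s)  # 768.0
```

That 768 GB/s figure would comfortably beat the RTX 2080 Ti's 616 GB/s, though HBM2e could push well past it -- which is exactly why the GDDR6-vs-HBM question matters.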
We have another new rumor to share on Ampere GPU performance, this time with the GeForce RTX 3080 and insider KatCorgi on Twitter who tweeted:
This roughly lines up with previous estimates, with the new rumor suggesting the GeForce RTX 3080 is around 20% faster than the GeForce RTX 2080 Ti. There is nothing specific to go on here, but it lines up with the 40-50% upgrade the RTX 3080 Ti / RTX 3090 is meant to bring over the RTX 2080 Ti.
If it's a raw 20% improvement over the RTX 2080 Ti without magic DLSS 3.0 technology or something running, then that will be a nice upgrade -- though I'm sure the RTX 3080 will ship with 10GB of GDDR6 (and a much higher TDP at 300W or over).
A bunch of hot RDNA 2 information dropped today from Tom on Moore's Law is Dead, with his sources telling him that AMD's next-gen Big Navi will offer up to 225% of the performance of RDNA... the GPU that powered the Radeon RX 5700 XT.
According to these sources, the estimate is that RDNA 2 is a "major leap forward for them 195% to 225% of the current available cards. But their internal estimates are still projecting the performance per dollar cards up to the upper end of the range".
Another tease is that the "top RDNA 2" part would feature 72 compute units, which is where the leaks about the flagship RDNA 2 graphics card being 40-50% faster than NVIDIA's current-gen flagship GeForce RTX 2080 Ti graphics card come from.
We heard about NVCache back in May 2020 with some Ampere leaks, with NVIDIA's purported NVCache acting like HBCC -- HBCC being AMD's own High Bandwidth Cache Controller that debuted with Vega.
Sony is using some truly next-gen storage technology inside of its PlayStation 5 console, with a deep dive on that here. Microsoft will be doing the same with its Xbox Series X console, but what about the PC? AMD will surely have something nifty up its sleeve with RDNA 2 -- while NVIDIA has NVCache.
Well, back in the May leaks about Ampere and NVCache we heard that it "leverages both your DDR & SSD for enhanced load times & VRAM". The new leaks from Moore's Law is Dead tease that NVIDIA's upcoming NVCache on Ampere cards in the GeForce RTX 3000 series family will "dynamically utilize bandwidth from SSDs, VRAM, and DDR for multiple tasks at the same time".
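The rumored behavior -- pull data from whichever tier has it and keep hot data in the fastest one -- is a classic tiered-cache pattern. Here's a toy sketch of that idea in Python; it's entirely speculative and reflects nothing about NVIDIA's actual design, just the general SSD / DDR / VRAM hierarchy the leak describes.

```python
# Toy tiered asset cache illustrating the SSD -> DDR -> VRAM idea behind
# the NVCache rumor. Speculative sketch only -- not NVIDIA's design.
class TieredAssetCache:
    def __init__(self):
        self.vram = {}   # fastest, smallest tier
        self.ddr = {}    # system memory tier
        # Backing store: pretend this texture lives on the SSD
        self.ssd = {"rock_tex": b"...compressed texture bytes..."}

    def fetch(self, name):
        # Search tiers fastest-first, then promote the hit into VRAM
        # so subsequent fetches are served from the fastest tier.
        for tier in (self.vram, self.ddr, self.ssd):
            if name in tier:
                data = tier[name]
                self.vram[name] = data
                return data
        raise KeyError(name)

cache = TieredAssetCache()
cache.fetch("rock_tex")          # first hit comes from the SSD tier
assert "rock_tex" in cache.vram  # now resident in the fastest tier
```

The real feature would presumably involve the driver, DMA engines, and compressed transfers rather than Python dictionaries, but the "dynamically utilize bandwidth from SSDs, VRAM, and DDR" pitch maps onto this fastest-first lookup-and-promote pattern.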
NVIDIA has something close to GPU cheat codes when it comes to its DLSS technology (Deep Learning Super Sampling), with DLSS 2.0 providing double the performance in Hideo Kojima's Death Stranding.
But according to Moore's Law is Dead, DLSS 3.0 is going to be an even bigger game changer. In his latest video, Tom talks about some of the other Ampere GPU features (you can read more on the 3-4x ray tracing performance on the GeForce RTX 3080 Ti right here) and one of them is DLSS 3.0 technology.
DLSS 3.0 will reportedly "work in any game with TAA" but it will require a Game Ready driver to do so, meaning developers will have to do some "specific programming per game to get it to work, but it should be easier than before".