AMD was the first to take a real swing at 8K video editing but couldn't quite get there, something we exposed during SIGGRAPH 2017 in the days after the Radeon RX Vega unveiling in LA. But now, NVIDIA is handling real-time 8K video editing on a single Quadro RTX 6000 graphics card, with some truly impressive results.
During a recent event in the US, RED Digital Cinema announced a partnership with NVIDIA, unveiling a CUDA-accelerated REDCODE RAW decode SDK that gives software developers and studios new ways to work with 8K video content. RED and NVIDIA announced the news in front of some of the biggest names in the industry, including Adobe, Colorfront, HP, and others.
RED Digital Cinema president Jarred Land said: "Our mission is to bring cinema-grade images and performance to content creators everywhere. RED, NVIDIA and our industry partners are leveling the playing field, making the technology for high-resolution processing and image quality accessible to everyone".
8K video isn't mainstream right now, but advancements are being made in leaps and bounds with 7680 x 4320 displays and TVs, as well as content being shot in 8K or above. Directors and content creators shooting in 8K doesn't mean you need an 8K monitor or TV to enjoy their super high-res content; the extra resolution provides much more flexibility when it comes to stabilizing, panning, cropping, or zooming into a shot without losing too much image quality. You can shoot in 8K, zoom in close on something, and still maintain 4K resolution with incredible image quality.
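To put rough numbers on that flexibility, here's a minimal sketch (illustrative math only, assuming standard 8K UHD and 4K UHD frame sizes) of how far you can punch in on an 8K frame while still delivering a native 4K image:

```python
# Illustrative: how far can you crop/zoom an 8K (7680 x 4320) frame
# and still deliver a native 4K (3840 x 2160) image?

SRC_W, SRC_H = 7680, 4320   # 8K UHD source
DST_W, DST_H = 3840, 2160   # 4K UHD deliverable

# Maximum zoom factor before the crop window drops below the target resolution
max_zoom = min(SRC_W / DST_W, SRC_H / DST_H)
print(f"Maximum lossless punch-in: {max_zoom:.0f}x")  # 2x

# Short of a full 2x punch-in, the crop window can also be repositioned
# anywhere within the frame - the headroom used for stabilization and pans
slack_x = SRC_W - DST_W
slack_y = SRC_H - DST_H
print(f"Reframing slack at 4K: {slack_x} x {slack_y} pixels")
```

In other words, an 8K source gives you a clean 2x zoom, or a full frame's worth of sideways travel, before your 4K deliverable loses any resolution.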
AMD announced the world's first 7nm GPU just last month during its Next Horizon event in San Francisco, where it also unveiled the next-gen EPYC 'Rome' CPU with 64C/128T which will also be made on 7nm. But now... a new trademark for Radeon Vega II has been filed by AMD, something you can scope out below.
VideoCardz has noticed that AMD registered the Radeon Vega II trademark two weeks ago now, which should see the company using this slick new logo for the upcoming 7nm Vega 20 GPU that will power the next-gen Radeon Instinct graphics cards.
The new Radeon Instinct MI60 is coming soon, built on the 7nm node and rocking a huge 32GB of HBM2 with built-in ECC and 1TB/sec of memory bandwidth. The new Radeon Instinct MI60 will also be the first GPU to support PCIe 4.0, offering 64GB/sec of bi-directional CPU-to-GPU bandwidth.
Intel has just peeled back the veil on its future CPU architectures and its plans for the next couple of years, but one of the areas the company touched on during its Architecture Day was its new scalable graphics architecture, known as Xe.
Between now and then we've got Gen 9 graphics, Gen 11 is around the corner, and then in 2020 we'll begin to see Xe. The new Xe scalable graphics architecture will be split across four markets: integrated/entry-level, mid-range, enthusiast, and datacenter/AI. The ones I'm most interested in here would have to be the mid-range and enthusiast Xe products, as they'll be the ones competing with NVIDIA and AMD in 2020 and beyond.
Kicking things off we have Gen 11 graphics detailed, with Intel teasing continued efficiency and improved performance. There's also advanced 3D, media, and display capabilities with Gen 11, as well as better gaming experiences. I need this in my hands before I sign off on better gaming experiences, but Intel has been continuing to make improvements in gaming performance over the years, and things are really kicking into another gear now that many of RTG's best talents work for Intel.
ZOTAC made an accidental slip that was quickly removed, teasing for a moment a purported GeForce GTX 1070 that features GDDR5X memory like its bigger brother, the GTX 1080.
The leaked model continues to pack NVIDIA's GP104 GPU with its 1920 CUDA cores, but the big change here is the 8GB of GDDR5X memory. VideoCardz has noticed that the GDDR5X-based GTX 1070 from ZOTAC has a TDP of 250W, which suggests this is just a GTX 1080 with artificially disabled CUDA cores. This would be strange, but NVIDIA could have an excess of GTX 1080s that it wants to get rid of, so it could disable CUDA cores and release a GeForce GTX 1070 with GDDR5X.
NVIDIA could also have a crap load of GDDR5X memory left over that it can't use, since GDDR6 is the way forward with the GeForce RTX series graphics cards. Whatever is happening, we can expect a GDDR5X-based GeForce GTX 1070 to be unveiled in the very near future.
If there's one custom GeForce RTX 2080 Ti graphics card that everyone is waiting for, it would be the insane EVGA GeForce RTX 2080 Ti Kingpin, which Kingpin himself has teased recently on Instagram.
The post teases the new EVGA GeForce RTX 2080 Ti Kingpin graphics card, which will be EVGA's balls-to-the-wall custom card designed for extreme overclocking and LN2 cooling. These cards will rock an LN2 block that looks just as slick as the card itself. We don't know much more about them, other than the very first picture, which was approved by Vince 'Kingpin' Lucido himself.
So, I've been in Hawaii for Qualcomm's annual Snapdragon Summit and saw these rumors break while I was enjoying the lifestyle that Maui has to offer, but wanted to wait until I was home and in front of my workstation to get all of my thoughts ready for this rumor.
A few days ago various sources including AdoredTV and Chiphell reported that AMD's first Navi GPU will be the Navi 10, and that it will have the performance of NVIDIA's latest GeForce RTX 2080. This seems like a huge stretch, as I've had personal sources tell me earlier this year that Navi will be "just as bad" as Vega, but they weren't specific about how bad it would be. I took it as the performance being similar to Radeon RX Vega 64, but now we're hearing about price and performance that will be "surprising".
It was only back in April that rumors floated around that Navi would offer GeForce GTX 1080 performance for $250... and now the latest rumors are saying that it will offer GeForce RTX 2080 performance for $249. This rumor corroborates the previous rumors, but they're just that... rumors. The rumors state that the 7nm node will help AMD offer some out-of-this-world power efficiency, but even if we're looking at RTX 2080-class performance... it won't be here until the second half of 2019, so we're at least 6-7 months away from its release.
NVIDIA already has a slew of Turing-based graphics cards on the market in the form of GeForce RTX and Quadro RTX cards, but now it seems it's nearly time for the TITAN RTX to shine.
In an "accidental" leak, Linus from LinusTechTips showed off the TITAN RTX box during the recent WAN Show, while The Slow Mo Guys showed off the card itself with "TITAN" branding on it in place of the "GeForce RTX" branding we've seen on the RTX 2080 Ti, RTX 2080, and RTX 2070 graphics cards.
We don't know anything more than this, but this means that we can expect TITAN RTX graphics cards to be seeded out to reviewers in the coming weeks, especially if cards and boxes are in the hands of influencers now.
If you own one of NVIDIA's new GeForce RTX graphics cards, there's not much you can do with real-time ray tracing right now apart from Battlefield V (if you can even get it to work with RTX, that is), but that'll soon change with 3DMark's new ray tracing benchmark.
The new 3DMark real-time ray tracing benchmark, Port Royal, will be teased ahead of its January 2019 launch at the GALAX GOC Grand Final overclocking event in Ho Chi Minh City, Vietnam on December 8. What makes this special is that UL Benchmarks will be first to market with a dedicated real-time ray tracing benchmark for gamers, who can (get this) test the real-time ray tracing performance of any graphics card that supports Microsoft's DirectX Raytracing standard, DXR.
UL Benchmarks explains: "Real-time ray tracing promises to bring new levels of realism to in-game graphics. Port Royal uses DirectX Raytracing to enhance reflections, shadows, and other effects that are difficult to achieve with traditional rendering techniques. As well as benchmarking performance, 3DMark Port Royal is a realistic and practical example of what to expect from ray tracing in upcoming games - ray tracing effects running in real-time at reasonable frame rates at 2560 x 1440 resolution".
There's more hype around Intel's upcoming discrete GPU, codenamed 'Arctic Sound', with fresh rumors hot off the press from DigiTimes saying Intel will host a conference in December to unveil its new GPU. Yeah, well, that's not happening.
I reached out to some industry sources to poke around and see what's going on, and it seems it will not be happening next month at all. Intel has previously said it will launch its discrete GPU in 2020, so why would it be ready for a full detailing of the Arctic Sound architecture in late 2018? Previous rumors teased a reveal at CES 2019 but seem to have fizzled out, as Intel is sticking to its 2020 release window.
Intel has secured much of the talent from AMD's own Radeon Technologies Group in the last year or so, with GPU architect Raja Koduri joining Intel as well as Radeon marketing legend Chris Hook, just to name a few. These talented individuals have vast resources at their disposal under Intel, compared to the relatively weak hand they had to play at AMD.
NVIDIA lost a gigantic $23 billion in market cap in a single 24-hour period on Friday; NVIDIA stock peaked on October 1 at $289.36 and at the time of writing was down to a much weaker $144.70.
Why? Well, there are many reasons - with some claiming it was the cryptocurrency mining bubble popping, but it seems the various issues with the newly launched Turing-based GeForce RTX series are the center of focus for NVIDIA right now. NVIDIA stock dropped heavily after the company posted its Q3 2018 earnings report, which saw it miss revenue estimates.
NVIDIA shares reached a 52-week low a couple of days ago when they hit $161.61, but the continued spiral down to $144.70 is worrying, with this now being the worst 24-hour percentage drop in 10 years. The last time this happened was back in July 2008, when NVIDIA shares dropped 30% over disappointing sales forecasts. Jensen Huang, CEO and founder of NVIDIA, told MarketWatch in an interview on Thursday afternoon that this was all over a "crypto hangover", where Huang explained: "The crypto hangover lasted longer than we expected and we were surprised by that, but it will pass".
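For context, the figures quoted above work out like this - a quick back-of-the-envelope check on the prices mentioned in the article, not financial analysis (note the second figure measures the fall from the 52-week low a couple of days earlier, not a strict 24-hour window):

```python
# Back-of-the-envelope check on the share-price figures quoted above
peak = 289.36       # October 1 peak
low_52wk = 161.61   # 52-week low from a couple of days prior
current = 144.70    # price at time of writing

drop_from_peak = (peak - current) / peak * 100
drop_from_low = (low_52wk - current) / low_52wk * 100

print(f"Down {drop_from_peak:.1f}% from the October peak")      # 50.0%
print(f"Down {drop_from_low:.1f}% from the prior 52-week low")  # 10.5%
```

So in under two months NVIDIA shed roughly half its peak value, which is what a $23 billion single-day market-cap loss looks like at that scale.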