Inno3D is jumping right into the cryptocurrency mining product game with the announcement of their new P104-100 Crypto-Mining Accelerator, a new card that is powered by NVIDIA's GP104 GPU and packs 4GB of GDDR5X at 11Gbps.
The new P104-100 Crypto-Mining Accelerator will be available in Inno3D's TWIN X2 edition, and offers 40% more mining power than its predecessor for ETH, ZEC, and many other cryptocurrencies. The reason it's so powerful is that Inno3D has trimmed the VRAM down to just 4GB, but amped it up by using 11Gbps GDDR5X on a 256-bit memory bus, providing 320GB/sec of memory bandwidth.
This means Inno3D's new P104-100 Crypto-Mining Accelerator rocks 35MH/s or more of ETH mining power, 470 Sol/sec of ZEC mining power, and 660H/sec of XMR mining power.
Inno3D haven't provided a price just yet, but they did say that the P104-100 Crypto-Mining Accelerator will be available before the end of December.
NVIDIA has officially unveiled its monstrous new TITAN V graphics card, a new Volta-based graphics card that packs a huge 5120 CUDA cores and 12GB of HBM2 for a massive $2999.
Our friends at GamersNexus have performed an awesome teardown of the card, both on video (embedded above) and in written form. I won't cover the teardown itself as you really should spend the time watching Steve's video on it (his hair is glorious, I know) or alternatively read the full teardown article that covers everything under a microscope in terms of detail.
GN did point out that the card "follows the same screw pattern as all previous NVIDIA Founders Edition cards, including the Titan Xp and GTX 1080, primarily isolating its cooler and shroud into a single, separable unit". As for the build materials of the TITAN V, GamersNexus point out that they "are all the same, assembly is the same, but the underlying GPU, HBM2, VRM, and heat sink are different".
NVIDIA announced its monstrous new TITAN V a few days ago, an AI/deep learning focused graphics card that costs a whopping $2999. In the usual fashion, benchmarks have now been teased against the GeForce GTX 1080 Ti. A perfect comparison considering the GTX 1080 Ti is $699, and the TITAN V is $2999.
The benchmarks aren't official, and were posted by Reddit user 'MrOmgWtfHaxor' who said they are the results of "someone posting benchmark results in the NVIDIA Discord". The comparison was between a GTX 1080 Ti heavily overclocked under LN2 at a we-won't-see-this-in-the-mortal-realm GPU frequency of 2.5GHz while the new TITAN V was clocked modestly at 1.8GHz.
NVIDIA's new TITAN V was also overclocked by 170MHz, which resulted in a nearly 10% higher 3DMark Fire Strike score, from 32,774 to 35,991. Moving onto Unigine's stressful Superposition test, the TITAN V scored 5222 in the 8K Optimized test, and 9431 in the 1080p Extreme preset. A KINGPIN-overclocked GTX 1080 Ti at 2581MHz (!!!) only scored 8642 in the same test, meaning the TITAN V is an utter monster in comparison to the GTX 1080 Ti.
But what about games? Under the highest settings in 1080p, the TITAN V manages around 66FPS average in Rise of the Tomb Raider, 158FPS average in Gears of War 4, and 88FPS in Ashes of the Singularity. This means that the TITAN V is around 25-30% faster than the GTX 1080 Ti in gaming - all run on an Intel Core i7-6700K processor.
NVIDIA has surprised the world with the announcement of their next-gen TITAN V graphics card, a new card that rocks the latest Volta GPU architecture and 12GB of HBM2, culminating in a card that costs a whopping $2999.
The new TITAN V features 21.1 billion transistors, packs 110 TFLOPs of raw horsepower, and has "extreme energy efficiency". NVIDIA explains: "Our vision for Volta was to push the outer limits of high performance computing and AI. We broke new ground with its new processor architecture, instructions, numerical formats, memory architecture and processor links".
NVIDIA's new TITAN V features the GV100 GPU and packs 12GB of HBM2 - the first TITAN card to feature HBM2 memory technology. GV100 features 5120 CUDA cores and 320 TMUs, the same number of CUDA cores that NVIDIA slapped onto the Tesla V100.
Inside, the new TITAN V features 640 Tensor Cores that are tuned for deep learning performance. The GPU itself is clocked at 1200MHz base and up to 1455MHz boost, while requiring just an 8+6-pin PCIe power connector setup and a 250W TDP - much less than the power requirements and TDP of AMD's latest Radeon RX Vega graphics cards.
The 12GB of HBM2 runs at a data rate of 1.7Gbps on a huge 3072-bit memory bus, providing 652.8GB/sec of memory bandwidth.
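That bandwidth figure follows directly from the per-pin data rate and the bus width; a quick sanity check (plain arithmetic, nothing NVIDIA-specific):

```python
def memory_bandwidth_gb_s(data_rate_gbps: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth: per-pin data rate times bus width, converted to bytes."""
    return data_rate_gbps * bus_width_bits / 8

# TITAN V: 1.7Gbps HBM2 on a 3072-bit bus
print(round(memory_bandwidth_gb_s(1.7, 3072), 1))  # → 652.8, matching NVIDIA's spec
```

The same formula works for any GDDR5/GDDR5X/HBM2 card: multiply the per-pin rate by the bus width and divide by eight to get GB/sec.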
You can buy the new TITAN V from NVIDIA directly for $2999.
AMD very quietly launched a second variant of their Radeon RX 560 with some Compute Units cut from the GPU, but the company didn't name it any differently from the existing RX 560, which has more CUs. Yeah, confusing... I know.
GamersNexus were on the ball as always (hey, Steve!) and obtained an official statement from AMD explaining what happened, in which the company said it's up to the AIB partners to clarify the actual product specifications when marketing their new Radeon RX 560 cards.
AMD launched the Radeon RX 560 not that long ago with a full 16 CUs sporting 1024 stream processors, which go hand-in-hand with the 4GB of GDDR5 memory on-board. The new Radeon RX 560, on the other hand, has 14 CUs and 896 stream processors - and without knowing which one you just purchased, you might buy the slower one because of crappy marketing and advertising on the boxes.
AMD said in their statement: "It's correct that 14 Compute Unit (896 stream processors) and 16 Compute Unit (1024 stream processor) versions of the Radeon RX 560 are available. We introduced the 14CU version this summer to provide AIBs and the market with more RX 500 series options. It's come to our attention that on certain AIB and etail websites there's no clear delineation between the two variants. We're taking immediate steps to remedy this: we're working with all AIB and channel partners to make sure the product descriptions and names clarify the CU count, so that gamers and consumers know exactly what they're buying. We apologize for the confusion this may have caused".
Steve wrote something very interesting: AMD was very quick (see: instant) to push its '4GB means 4GB' campaign back when Robert Hallock was on the Radeon team (he has since shifted over to the Ryzen team). But I wonder if AMD will laugh at this and say '896 is 1024', or maybe NVIDIA will be up for the troll today.
The hottest rumor of the weekend comes from a post on Reddit that teases the LinkedIn page of an AMD technical engineer saying they're working on a GDDR6 memory controller.
GDDR6 will see per-pin data rates cranked up to 16Gbps, up from the 11Gbps GDDR5X used on the GeForce GTX 1080 Ti. 16Gbps will blow away even HBM2, which is fairly useless for gamers. NVIDIA's next-gen Ampere GPU architecture should feature GDDR6, too.
I think we'll see a next-gen Vega and then Navi using HBM2, and even HBM3 if it arrives in time - or if Navi is delayed further now that Raja Koduri has left RTG. I also think we'll see AMD use GDDR6 on their mainstream RX 600 series cards, leaving HBM2 on a re-spun Vega architecture. There's no need for HBM2 for gamers right now, no matter how hard AMD pushed the line that HBM2 was the bee's knees.
Games simply don't need HBM2, and from what I've seen the memory technology offers no across-the-board performance improvement. For specifically chosen games, maybe... but not overall. NVIDIA smacks AMD around the place with GDDR5 and GDDR5X, without the need for the super-expensive and supply-constrained HBM2.
NVIDIA has just released its latest GeForce Game Ready 388.43 WHQL drivers, a new set of drivers that provide the best compatibility and performance for all of the latest games, and some improvements to DOOM VFR.
The new drivers include the usual bug fixes, but it seems like the major focus of these drivers is that they provide "the optimal gaming experience for DOOM VFR". You can grab the new GeForce 388.43 WHQL drivers right here.
AMD is teasing its next major graphics driver software update, something the company is calling Radeon Software Adrenalin Edition. The new drivers will be released in December, but exactly when hasn't been announced just yet.
As for the new drivers, AMD says that Radeon Software Adrenalin Edition "is where visible and invisible have been woven together to empower gamers and enrich the visual experience"... or whatever that means. The new drivers should feature OSD performance monitoring which will be a great thing for enthusiasts, and it's something I've wanted for years now.
Once AMD releases their new drivers we'll re-run all of our Radeon RX series graphics cards (RX 480, RX 580, RX Vega 56, RX Vega 64, RX Vega 64 LCE) and see what performance/power/temp differences there are. We shouldn't have too much longer to wait.
The HDMI Forum released the specification for HDMI 2.1 earlier this year, but has now finalized it and made it official. HDMI 2.1 is a freaking beast, and will power the next generation of monitors.
HDMI 2.1 packs a huge increase in bandwidth with up to 48Gbps on tap, and is backwards compatible with previous-gen HDMI cables. HDMI 2.1 supports an insane 10K resolution which throws the pixels up to a huge 10240 x 4320, up from the 7680 x 4320 that 8K is capable of. HDMI 2.1 supports higher refresh rates as well, with 4K120 and 8K60 support, as well as Dynamic HDR.
The inclusion of support for 10K is interesting, as it's mainly for commercial and speciality use - but now I want to see a 10K UltraWide monitor.
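Since 10K shares 8K's vertical resolution, the entire jump is in the horizontal axis; some quick back-of-the-envelope arithmetic from the resolutions above (not from the spec itself) puts the pixel-count increase in perspective:

```python
# Pixel counts for the resolutions HDMI 2.1 supports
pixels_8k = 7680 * 4320    # 33,177,600 pixels
pixels_10k = 10240 * 4320  # 44,236,800 pixels

increase = pixels_10k / pixels_8k - 1
print(f"{pixels_10k:,} vs {pixels_8k:,} pixels: {increase:.0%} more")  # → 33% more
```

In other words, 10K pushes a third more pixels than 8K per frame - which is why it's aimed at commercial and specialty use rather than consumer displays for now.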
HDMI 2.1 also supports Variable Refresh Rate, which results in smoother performance on-screen, as well as Quick Frame Transport (QFT), which reduces latency. There's also Quick Media Switching (QMS), plus Auto Low Latency Mode (ALLM), which automatically sets the ideal latency for the smoothest, lag-free experience.
AMD has just released their latest Crimson ReLive Edition 17.11.3 HOTFIX for Radeon RX Vega drivers, which weigh in at 318MB for the Windows 10 drivers and a much bigger 440MB for the Windows 7 version. Download them here.
The new drivers fix an intermittent crash issue that "may be experienced on some Radeon RX Vega series graphics products", and squash a bunch of other known issues. This is what you can expect the new drivers to fix:
- Some desktop productivity apps may experience latency when dragging or moving windows.
- Tom Clancy's Rainbow Six® Siege may experience an application hang when breaching walls with grenades or explosives.
- Rise of the Tomb Raider™ may experience an intermittent application hang during gameplay.
- A random system hang may be experienced after extended periods of use on system configurations using 12 GPUs for compute workloads.
- The GPU Workload feature may cause a system hang when switching to Compute while AMD CrossFire is enabled. A workaround is to disable AMD CrossFire before switching the toggle to Compute workloads.
- Resizing the Radeon Settings window may cause the user interface to stutter or exhibit corruption temporarily.
- Unstable Radeon WattMan profiles may not be restored to default after a system hang.
- Overwatch™ may experience a random or intermittent hang on some system configurations. Disabling Radeon ReLive as a temporary workaround may resolve the issue.