EVGA continues to lead the pack with overclocking products, and underlines that fact with the release of its new EPOWER V card. EVGA's new standalone VRM board provides additional power to graphics cards and motherboards, letting you break out of the voltage shackles on even the best hardware on the market.
The EPOWER V board features two fully-independent voltage outputs, as well as a built-in EVBot MKII, which lets you adjust voltages on-the-fly. The board is fed by 3 x 6-pin PCIe power connectors, while output comes from a 12+2 phase design that provides a massive injection of VCORE and VMEM into your graphics card, letting it break through those ridiculously low stock voltage limits and into an entirely new level.
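For a rough sense of what those three 6-pin connectors buy you, here's a quick back-of-the-envelope sketch; the 75W figure is the PCI-SIG rating per 6-pin connector, and extreme overclocking boards like this routinely pull well beyond spec, so treat it as a nominal floor:

```python
# Nominal input budget for the EPOWER V's 3 x 6-pin PCIe connectors.
# 75W per connector is the PCI-SIG rated figure; extreme overclocking
# hardware regularly exceeds it, so this is a floor, not a ceiling.
PCIE_6PIN_RATED_W = 75
NUM_CONNECTORS = 3

nominal_input_w = PCIE_6PIN_RATED_W * NUM_CONNECTORS
print(f"Nominal rated input: {nominal_input_w}W")  # 225W
```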
EVGA even provides USB 3.1 Type-C and software controls, letting you connect a USB cable to your PC and control the EPOWER V board through software. A fully detailed rundown on the amazing new EPOWER V board can be found here.
AMD might have just launched their new Vega GPU architecture with a slew of Vega-based products (Radeon Vega, Radeon RX Vega, Radeon Pro WX, and Radeon Instinct), but the real king is NVIDIA's now months-old Volta GPU architecture.
We don't hear much about NVIDIA's Volta GPU architecture because it's still a while away from finding its way into consumer GeForce graphics cards, but the supercomputer/AI/deep learning markets are now receiving their new Volta-based Tesla V100 accelerators, which means... BENCHMARK TIME!
First off, let's look at the difference between the previous-gen Pascal-based Tesla P100 and the new Volta-based Tesla V100, starting with 12x more deep learning training performance: from 10 TFLOPs of 'DL training' on P100 up to a freakin' is-it-real 120 TFLOPs on V100.
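A quick sanity check on that 12x figure, using the numbers above (keep in mind that V100's 120 TFLOPs comes from its new tensor cores doing mixed-precision matrix math, so this isn't a like-for-like FP32 comparison):

```python
# The headline 12x: P100's cited 'DL training' throughput against
# V100's tensor core mixed-precision throughput, per the figures above.
p100_dl_tflops = 10
v100_dl_tflops = 120

speedup = v100_dl_tflops / p100_dl_tflops
print(f"V100 vs P100 DL training: {speedup:.0f}x")  # 12x
```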
AMD could be in trouble with RTG boss Raja Koduri going on a sabbatical as of yesterday, but news of NVIDIA working on a new GeForce GTX 1070 Ti graphics card is even bigger.
NVIDIA already has a great mid-range, borderline high-end card in the current GeForce GTX 1070, but a Ti variant in the GTX 1070 Ti could really rock AMD's world. NVIDIA will reportedly be using 2304 CUDA cores on the new GTX 1070 Ti (a rough throughput comparison follows the list below):
- GTX 1070: 1920 CUDA cores
- GTX 1070 Ti: 2304 CUDA cores
- GTX 1080: 2560 CUDA cores
- GTX 1080 Ti: 3584 CUDA cores
- TITAN Xp: 3840 CUDA cores
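To put those core counts in perspective, here's a rough theoretical FP32 throughput estimate (one FMA, so 2 FLOPs, per core per clock). The boost clocks are NVIDIA's reference figures, except the GTX 1070 Ti, which is purely an assumption since NVIDIA hasn't confirmed any clocks:

```python
# Theoretical FP32 throughput: 2 FLOPs (one FMA) per CUDA core per clock.
# Reference boost clocks in GHz; the GTX 1070 Ti entry is an assumed,
# GTX 1070-like clock since nothing official exists yet.
cards = {
    "GTX 1070":    (1920, 1.683),
    "GTX 1070 Ti": (2304, 1.683),  # assumption, no confirmed clocks
    "GTX 1080":    (2560, 1.733),
    "GTX 1080 Ti": (3584, 1.582),
    "TITAN Xp":    (3840, 1.582),
}

for name, (cores, boost_ghz) in cards.items():
    tflops = 2 * cores * boost_ghz / 1000
    print(f"{name}: {tflops:.1f} TFLOPS FP32")
```

If the rumored core count is real, the GTX 1070 Ti would land at roughly 7.8 TFLOPS, much closer to the GTX 1080 than to the GTX 1070.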
As VideoCardz notes, this could just be a typo, as the listing shows the ASUS GTX 1070 Ti STRIX O8G, but if it's real, things could get very ugly for AMD, very fast. We will report more as it breaks.
Update: I've since confirmed this with AMD and they will be providing me with a response shortly.
AMD launched their next-gen Vega GPU architecture just a few weeks ago and it has been a sea of controversy ever since, but now we're hearing that Radeon Technologies Group boss Raja Koduri is going on a break until the start of 2018, with AMD CEO Lisa Su stepping into Koduri's shoes for the next few months.
Radeon RX Vega launched in two varieties: Radeon RX Vega 56 and Radeon RX Vega 64, offering GeForce GTX 1070 and GTX 1080 levels of performance, respectively. Both cards were meant to represent a return to form for Radeon, especially after the GPU division split off into RTG at the end of 2015.
AMD was hitting quite a few home runs with the Radeon RX 400 series, their politically-charged 'VR isn't just for the 1%' marketing, and Polaris in general. The Radeon RX 500 series really wasn't that great, being more of a rebrand and tweak of the RX 400 series, but Vega was meant to be a CHAMPION. AMD had planned Radeon RX Vega for earlier this year, but ran into multiple problems with HBM2 yields, and then Vega itself turned out hot, power hungry, and performing like NVIDIA's cards from 18 months ago. There's not much to work with there.
AMD might have tripped up releasing Radeon RX Vega in its current form, but the upcoming Vega 20 chip should be a monster. According to DigiTimes, orders have already been placed for Vega 11 (the cut-down version of the current flock of Vega cards), while Vega 20 is right around the corner.
DigiTimes reports: "Packaging specialist SPIL, which has already obtained orders for AMD's Vega 10-series chips, will continue to hold the majority of backend orders for the Vega 11 series, the sources noted". GlobalFoundries will make AMD's new Vega 11 chips on its 14nm LPP process, while SPIL will take care of the packaging and integration of the GPU and HBM2 dies.
But we're not here for Vega 11 news now, are we? Vega 20 is AMD's next-gen high-end GPU, built to handle everything from 4K-and-beyond gaming to AI and supercomputing. The big news here is that AMD will reportedly be tapping TSMC for its new Vega 20 creation, as it will be baked onto TSMC's surely impressive 7nm process.
Vega 20 will enter volume production in 2018, and should feature up to 32GB of HBM2 with 1TB/sec+ of memory bandwidth on a full, proper 4096-bit memory bus (Vega 10 sits on a gimped 2048-bit memory bus, with its 8GB of HBM2 held back at 484GB/sec).
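The bandwidth math is easy to check: bus width times per-pin data rate, divided by 8 to get bytes. Vega 10's numbers are official; the Vega 20 pin speed below is an assumption, just to show how a 4096-bit bus clears the 1TB/sec mark:

```python
# Memory bandwidth = bus width (bits) x per-pin data rate (Gbps) / 8.
def hbm2_bandwidth_gbs(bus_width_bits: int, pin_rate_gbps: float) -> float:
    return bus_width_bits * pin_rate_gbps / 8

print(hbm2_bandwidth_gbs(2048, 1.89))  # Vega 10: ~484 GB/sec (official)
print(hbm2_bandwidth_gbs(4096, 2.0))   # Vega 20: 1024 GB/sec (assumed 2.0Gbps pins)
```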
NVIDIA unleashed its next-gen DGX-1V system earlier this year, promising a Q3 2017 launch, and has followed through, with the first Volta-based DGX-1V systems now shipping.
The first Volta-based DGX-1V system was shipped to MGH & BWH Center for Clinical Data Science (CCDS), which is a "Massachusetts-based research group focusing on AI and machine learning applications in healthcare", reports AnandTech. This means CCDS is one of the first research institutes with the new Volta-based AI supercomputer, an upgrade from their Pascal-based DGX Station.
CCDS will be using the AI supercomputer to train deep neural networks that will evaluate medical images and scans, with AnandTech adding that they'll be "using Massachusetts General Hospital's collection of phenotypic, genetics, and imaging data. In turn, this can assist doctors and medical practitioners in making faster and more accurate diagnoses and treatment plans".
NVIDIA's new DGX-1V features 8 x Tesla V100 accelerators, 2 x Intel Xeon E5-2698 v4 processors (20C/40T each), 512GB of DDR4-2133 LRDIMM, and a total of 128GB of HBM2 (16GB per Tesla V100).
Last night, with the full moon passing over me, I spent hours testing Ethereum mining performance on AMD's best Radeon RX Vega 64 graphics card, as well as NVIDIA's hugely expensive and amazingly fast TITAN Xp.
I've got a detailed article coming soon, but I thought I would share preliminary results of these two cards with you now, before the big article goes live.
I used AMD's Radeon RX Vega 64 Liquid Cooled Edition, overclocking the 8GB of HBM2 from its stock 945MHz to 1100MHz, resulting in mining performance that went through the roof.
AMD Radeon RX Vega 64 Liquid Cooled Edition
HBM2 @ 1100MHz - 42.9MH/s sustained ETH mining performance
- -21% power (327W) - 42.9MH/s @ 54C @ 980RPM
- -25% power (303W) - 42.9MH/s @ 46C @ 960RPM
- -28% power (294W) - 42.8MH/s @ 49C @ 965RPM
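In efficiency terms, the metric miners actually care about, those runs work out like this:

```python
# Hashing efficiency for the three power targets above.
runs = [
    ("-21% power", 327, 42.9),
    ("-25% power", 303, 42.9),
    ("-28% power", 294, 42.8),
]

for label, watts, mhs in runs:
    print(f"{label}: {mhs / watts * 1000:.0f} kH/s per watt")
```

The -28% power target is the sweet spot at roughly 146 kH/s per watt, giving up just 0.1MH/s for a 33W saving over the -21% run.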
GIGABYTE has just revealed its new GeForce GTX 1080 Mini-ITX 8G, the smallest GTX 1080 ever made. It is even smaller than ZOTAC's GeForce GTX 1080 Mini, by more than 40mm.
GIGABYTE's new GeForce GTX 1080 Mini-ITX 8G is a single-fan graphics card with a triple heat pipe cooling solution and a 5+2 phase power design, with Gaming Mode GPU clock speeds of up to 1733MHz, or 1771MHz in OC Mode. The 8GB of GDDR5X memory is clocked at 10Gbps, the same as the GTX 1080 Founders Edition. A single 8-pin PCIe power connector powers the card; no TDP numbers have been released just yet, although we should expect 180W.
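Since the Mini-ITX card keeps the reference GTX 1080 memory configuration (a 256-bit bus with 10Gbps GDDR5X), memory bandwidth should be unchanged from the Founders Edition:

```python
# Bandwidth = bus width (bits) x data rate (Gbps) / 8.
# 256-bit is the standard GTX 1080 bus width.
bus_width_bits = 256
data_rate_gbps = 10

bandwidth_gbs = bus_width_bits * data_rate_gbps / 8
print(f"{bandwidth_gbs:.0f} GB/sec")  # 320 GB/sec, same as Founders Edition
```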
The super-small GTX 1080 is just 169mm long (versus 211mm for ZOTAC's GTX 1080 Mini) and 131mm high (ZOTAC's card is shorter at 125mm).
There are no details on pricing just yet, but GIGABYTE should have more to share in the very near future.
AMD's new Radeon RX Vega range of graphics cards is already selling at a premium of at least $100, and now it seems NVIDIA will be going through the same thing thanks to an industry-wide shortage of GDDR5 supply.
DigiTimes reports that NVIDIA's entire stack of GeForce GTX 10 series cards (the GTX 1050, GTX 1060, GTX 1070, GTX 1080, and GTX 1080 Ti, and I'm sure the GTX 1060 9Gbps, GTX 1080 11Gbps, and TITAN Xp too) will all experience price increases in the next couple of weeks. The average price increase is expected to be somewhere in the vicinity of 10%, with SK Hynix and Samsung reportedly cutting their GDDR5 supply for the discrete GPU market.
Effective immediately, both SK Hynix and Samsung have reportedly increased GDDR5 memory prices by 30.8%, which isn't going to be fun leading into the holiday season for PC gamers. NVIDIA won't hurt so much from this, as they have the financial backing to absorb the 10% hit per card, but if AMD's GDDR5-based Radeon cards (the Radeon RX 500 series) go up in price, AMD is going to hurt even more.
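To see how a 30.8% memory price hike lines up with a roughly 10% card price increase, assume GDDR5 makes up some share of a card's bill of materials. The share below is purely illustrative, since neither DigiTimes nor the memory makers publish real BOM breakdowns:

```python
# Illustrative only: maps the reported 30.8% GDDR5 price hike to a
# card-level price increase, given an assumed memory share of BOM.
memory_price_increase = 0.308
assumed_memory_bom_share = 0.30  # hypothetical share of card cost

card_price_increase = memory_price_increase * assumed_memory_bom_share
print(f"Implied card price increase: {card_price_increase:.1%}")  # ~9.2%
```

A memory share around 30% of card cost would put the implied increase right in the vicinity of the 10% figure DigiTimes is reporting.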
Total War: Warhammer II gamers will need a fair amount of PC hardware grunt to run the game at 4K 60FPS+, but Creative Assembly haven't made it out of reach for everyone. CA have teased that pre-orders for Warhammer II have beaten all previous records in the series.
The developer recommends an Intel Core 2 Duo @ 3GHz, 4-5GB of RAM, 60GB of HDD space, and an NVIDIA GeForce GTX 460 with 1GB of VRAM, an AMD Radeon HD 5770 with 1GB of VRAM, or Intel HD 4000 integrated graphics. This is all for 25-35FPS in a 1v1 campaign battle with 20 vs 20 units at 'low' quality graphics @ 1280x720.
If you want 1080p 60FPS+, then you're going to need an Intel Core i7-4790K @ 4GHz, 8GB of RAM, and an NVIDIA GeForce GTX 1070. I'd like to see 4K 60FPS numbers, but given that a GTX 1070 is required for just 1080p 60FPS @ Ultra graphics, I suspect we'll need a GTX 1080 Ti or even a TITAN Xp to reach 4K 60FPS @ Ultra.