NVIDIA has 76% of GPU market share, leaving AMD with just 24%

Anthony Garreffa | Video Cards & GPUs | Feb 25, 2015 4:53 AM CST

According to the latest report from Jon Peddie Research (JPR), NVIDIA is dominating the GPU market share game against AMD. JPR's data for Q3 2014 has NVIDIA securing a huge 76% of the GPU market share, leaving AMD with just 24%. Matrox and S3 are now out of the game, with Matrox losing its small 0.10% market share to NVIDIA.

JPR's estimated graphics add-in-board (AIB) shipments and suppliers' market share for the quarter tracks add-in graphics boards, which feature discrete GPUs. These AIBs are used in various devices, such as desktop PCs, workstations, servers, and other devices "such as scientific instruments". JPR's report has found that AIB shipments decreased by 0.68% from the previous quarter, bringing the total for the quarter to 12.4 million units.

AMD's quarter-to-quarter total desktop AIB unit shipments decreased 16%, while NVIDIA's quarter-to-quarter unit shipments increased by 5.5%.

Continue reading: NVIDIA has 76% of GPU market share, leaving AMD with just 24% (full post)

Unreal Engine updated with incredibly 'realistic foliage lighting'

Anthony Garreffa | Gaming | Feb 25, 2015 1:37 AM CST

Epic Games has released Unreal Engine 4.7, an update it calls "our biggest yet", one that has "the power to render huge numbers of instanced objects in large worlds, beautiful rendering of leaves and foliage, HDR texture support, in-editor animation editing, along with hundreds of optimizations and improvements". It was only yesterday that we teased what a fan-made video using Unreal Engine 4 was capable of.

One of the standout features in Unreal Engine 4.7 is the addition of the new Foliage Shading Model, which allows light to be transmitted through grass, leaves, paper, and other materials. The result is super realistic lighting on foliage, as you can see in the above shot. Epic Games explains that "diffuse lighting on the opposite side of the surface becomes transmissive lighting on the side being shaded". The update also sees the foliage system overhauled, which is now optimized for "huge, open environments".

There are various other new tweaks and additions baked into Unreal Engine 4.7, with another notable feature being HTML5 and WebGL support (for Windows only). This means that Unreal Engine 4.7 users can package and run their games in a web browser using binary tools available through the Launcher.

Continue reading: Unreal Engine updated with incredibly 'realistic foliage lighting' (full post)

The two most vulnerable operating systems both belong to Apple

Anthony Garreffa | Software & Apps | Feb 25, 2015 12:40 AM CST

When people think of a vulnerable operating system, the first thing that comes to mind is, well, Windows. But, according to a new report from GFI, that's not the case. The most vulnerable operating system in the world is actually Apple's OS X, followed by Apple's iOS.

GFI's report sees Apple taking the top two spots when it comes to OS vulnerabilities, with OS X having 147 vulnerabilities, and iOS with 127. Third position goes to the Linux Kernel with 119 vulnerabilities, and Windows Server 2008 sits in fourth position with 38. Windows 7, funnily enough, comes in at fifth place with just 38 vulnerabilities, and comparing this to the huge 147 holes found in OS X, this should wake people up.

Over the course of 2014, there were 7038 new security vulnerabilities, up from the 4794 found in 2013. Out of those 7038 vulnerabilities, just 24% were deemed 'high risk'. GFI's Christian Florian explains: "2014 was a tough year for Linux users from a security point of view, coupled with the fact that some of the most important security issues of the year were reported for applications that usually run on Linux systems. Heartbleed, for example, is a critical security vulnerability detected in OpenSSL while Shellshock is a vulnerability that affects GNU Bash".

Continue reading: The two most vulnerable operating systems both belong to Apple (full post)

NVIDIA CEO on GTX 970 VRAM issues: 'we'll do a better job next time'

NVIDIA's CEO and founder Jen-Hsun Huang has written on the company's official blog addressing the issue of the GeForce GTX 970 and its 4GB of VRAM. Huang says early on in the blog post: "We invented a new memory architecture in Maxwell. This new capability was created so that reduced-configurations of Maxwell can have a larger framebuffer - i.e., so that GTX 970 is not limited to 3GB, and can have an additional 1GB".

He adds that the GTX 970 is a 4GB card, and that the upper 512MB of its 4GB of frame buffer is "segmented and has reduced bandwidth". Huang elaborates, saying "This is a good design because we were able to add an additional 1GB for GTX 970 and our software engineers can keep less frequently used data in the 512MB segment". But, he acknowledges that this wasn't all good news, as the company "failed to communicate this internally to our marketing team, and externally to reviewers at launch".

"Instead of being excited that we invented a way to increase memory of the GTX 970 from 3GB to 4GB, some were disappointed that we didn't better describe the segmented nature of the architecture for that last 1GB of memory", Huang continued, explaining the GTX 970's 4GB of VRAM in more detail. "This is understandable. But, let me be clear: Our only intention was to create the best GPU for you. We wanted GTX 970 to have 4GB of memory, as games are using more memory than ever". Huang added: "The 4GB of memory on GTX 970 is used and useful to achieve the performance you are enjoying. And as ever, our engineers will continue to enhance game performance that you can regularly download using GeForce Experience".
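To see why keeping "less frequently used data" in the slow segment matters, here's a minimal sketch of the blended bandwidth you'd get if VRAM were touched uniformly. The 3.5GB/0.5GB split is from NVIDIA's own description; the per-segment bandwidth figures are illustrative assumptions based on widely reported numbers, not from this post.

```python
# Toy model of the GTX 970's segmented frame buffer.
# Segment sizes follow NVIDIA's description; the bandwidth figures
# (7 of 8 memory controllers for the fast segment) are assumptions
# used purely for illustration.
FAST_GB, FAST_BW = 3.5, 196.0   # GB, GB/s -- full-speed segment
SLOW_GB, SLOW_BW = 0.5, 28.0    # GB, GB/s -- the segmented last 512MB

def avg_bandwidth(used_gb):
    """Blended bandwidth if `used_gb` of VRAM is accessed uniformly."""
    fast = min(used_gb, FAST_GB)
    slow = max(0.0, used_gb - FAST_GB)
    return (fast * FAST_BW + slow * SLOW_BW) / used_gb

print(avg_bandwidth(3.5))  # 196.0 -- workload fits in the fast segment
print(avg_bandwidth(4.0))  # 175.0 -- uniform use of all 4GB drags the average down
```

This is why NVIDIA's driver steering cold data into the 512MB segment makes the penalty much smaller in practice than the uniform-access worst case above.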

Continue reading: NVIDIA CEO on GTX 970 VRAM issues: 'we'll do a better job next time' (full post)

Valve has 'prominent hardware manufacturer' to make SteamVR headset

It wasn't even 24 hours ago that we wrote about Valve showing off its SteamVR headset at GDC 2015 next week, but VRFocus has heard "from a reliable source close to the project" that the HMD itself will be made by a third-party company.

Valve has reportedly secured a "prominent hardware manufacturer" to make the headset, but now the speculation will begin as to who this hardware manufacturer is. In order to fight against the likes of Oculus VR, Sony and other VR headset makers, Valve can't do this on the cheap. Maybe Valve is working with NVIDIA on the SteamVR headset? Now that should start an exciting discussion.

Continue reading: Valve has 'prominent hardware manufacturer' to make SteamVR headset (full post)

Intel expects 10nm by late 2017, with silicon being abandoned at 7nm

Anthony Garreffa | CPU, APU & Chipsets | Feb 24, 2015 10:37 PM CST

Intel will be providing more details on its upcoming 10nm manufacturing process this week at the 2015 International Solid-State Circuits Conference (ISSCC), and how its new research will continue pounding on the door of Moore's law when it hits 7nm, and beyond.

The chipmaker expects to provide the first 10nm-based processors in late 2016 or early 2017, as the company is hoping to dodge the delay train it hit with Broadwell at 14nm. Before 10nm is even here, Intel is teasing 7nm, saying that it will need to use new materials in order to build it. This means that 10nm will be the last process Intel builds using silicon, with the company eyeing a replacement such as a III-V semiconductor like indium gallium arsenide (InGaAs).

Intel's shift to 7nm gets even more interesting, as it could see the company using new types of packaging. This includes 2.5D, which is something AMD is using on its upcoming Radeon R9 390X with HBM memory: 2.5D has separate dies placed side by side on an interposer. Intel would also be looking at 3D, where dies are stacked directly on top of one another. When it comes to 10nm, Intel is hoping to keep pushing Moore's law along, all while reducing the price per transistor. 7nm is going to be a very exciting milestone, as it will shift away from the silicon that has been used for decades now. Imagine the possibilities of a 3D stack of 7nm dies... that should have any enthusiast begging for more.

Continue reading: Intel expects 10nm by late 2017, with silicon being abandoned at 7nm (full post)

Apple could end up tracking your iPhone, even if it was turned off

I'm sure the NSA didn't have to twist Apple's arm too much to get this done, but Apple has patented a system that tracks iPhones even when they have been switched off. Apple's patent would see the iPhone prompting a user to enter their security code to formally shut it down.

If the code isn't typed in, or it is typed incorrectly a number of times, the phone will appear off. But this isn't the case; the iPhone is still "on" but it appears to be off to the user. While this sounds like a great idea for security, it does mean that Apple could track your iPhone even if you thought it was "off". You would be none the wiser.

This shouldn't come as a surprise to you, as it can be done on Android smartphones right now. This can be tested by having an Android-powered smartphone far away from a cellular tower and keeping an eye on the battery usage of the device. Researchers have used this method to track an Android smartphone by making a malicious application that didn't have access to Wi-Fi or GPS, and only kept an eye on the power consumption of the phone.

Continue reading: Apple could end up tracking your iPhone, even if it was turned off (full post)

AMD Radeon R9 390X limited to 4GB of VRAM because of HBM limitations?

With the release of AMD's Radeon 300 series right around the corner, and the tease of its upcoming Fiji-based Radeon R9 390X flagship video card, it's time to start speculating on what we can expect in regards to VRAM on the new GPUs.

We have heard that AMD will be using High Bandwidth Memory, or HBM, on the flagship R9 390X. A report over at Fudzilla points out that AMD is using something called a 2.5D-IC silicon interposer, which will see "two separate chips on the same silicon interposer and package substrate". AMD is baking this onto a PCB on the 28nm process, but there will be two products on offer: one without HBM, and the other with it.

HBM 1.0 is currently limited to 1GB per stack, configured as 4 x 2Gb layers, for a total of 4GB of VRAM across four stacks, which should raise some very serious questions. Setting aside memory bandwidth and the node AMD chooses to use (with all signs pointing to 28nm), a limit of 4GB of VRAM could hurt the company with the first new GPU it has released in over 18 months. Considering the issues NVIDIA has been going through with its GTX 970 and "4GB" of VRAM argument, AMD has the opportunity to really drive home the VRAM argument.
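The capacity ceiling falls straight out of the stack arithmetic. A quick sketch, using only the figures above (2Gb layers, four layers per stack, and the four stacks expected on Fiji):

```python
# HBM 1.0 capacity arithmetic: 4 layers of 2Gb (gigabit) DRAM per stack,
# with four stacks on the package (as expected for Fiji).
GB_PER_LAYER = 2 / 8          # 2Gb per layer = 0.25 GB (8 bits per byte)
LAYERS_PER_STACK = 4
STACKS = 4

stack_gb = GB_PER_LAYER * LAYERS_PER_STACK   # capacity of one HBM stack
total_gb = stack_gb * STACKS                 # total VRAM on the card
print(stack_gb, total_gb)  # 1.0 4.0
```

So unless AMD fits more than four stacks, or HBM moves beyond 2Gb layers, 4GB is the hard ceiling for a first-generation HBM card.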

Continue reading: AMD Radeon R9 390X limited to 4GB of VRAM because of HBM limitations? (full post)

DirectX 12 rumored to allow GeForce and Radeon GPUs to work together

Anthony Garreffa | Video Cards & GPUs | Feb 24, 2015 8:00 PM CST

Tom's Hardware has quite the exclusive report, saying that a "source with knowledge" of DirectX 12 claims the new API will combine the powers of competing GPUs. In other words, an NVIDIA GeForce GPU will work together with an AMD Radeon card in a multi-GPU setup.

This is something DirectX 12 has on its side with its Explicit Asynchronous Multi-GPU capabilities, which will throw all of the various graphics resources in a system into a single "bucket". From there, game developers will have to work out how the workload is split, which could see different hardware being used for specific tasks.

One of the major points of this new multi-GPU technology is that multi-GPU configurations will no longer have to mirror their frame buffers, or VRAM. In previous APIs, right up to DX11, you needed two cards with identical VRAM amounts to work in tandem, but only one card's worth of VRAM is utilized, not the combined total. This is a limitation of alternate frame rendering (AFR), but DX12 is removing the 4 + 4 = 4 limitation of AFR, replacing it with a new method called SFR, or Split Frame Rendering.
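The "4 + 4 = 4" point can be sketched as a toy model (this is not real D3D12 code): under AFR each GPU renders whole alternating frames, so every resource is mirrored and the usable pool is one card's VRAM; under SFR each GPU renders only its part of each frame, so resources need not be duplicated. Treating SFR's usable pool as a simple sum is the best-case intent of explicit multi-GPU, an assumption for illustration, since real workloads still share some data between cards.

```python
# Toy model of usable VRAM under the two multi-GPU rendering modes.
def usable_vram(mode, cards_gb):
    """Best-case usable VRAM pool for a list of per-card capacities (GB)."""
    if mode == "AFR":
        return min(cards_gb)   # frame buffers mirrored: 4 + 4 = 4
    if mode == "SFR":
        return sum(cards_gb)   # each card holds only its share of the frame
    raise ValueError(f"unknown mode: {mode}")

print(usable_vram("AFR", [4, 4]))  # 4
print(usable_vram("SFR", [4, 4]))  # 8
```

Which is exactly why SFR is the headline feature here: two 4GB cards finally behave like more than 4GB of memory.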

Continue reading: DirectX 12 rumored to allow GeForce and Radeon GPUs to work together (full post)

Schwarzenegger will be back for 'Terminator 6'

Ben Gourlay | Celebrities & Entertainment | Feb 24, 2015 7:50 PM CST

The next installment in the 'Terminator' saga, 'Genisys', is still a few months away, but Paramount Pictures is moving quickly to produce a new trilogy of films in the long-running franchise before the rights revert to series creator James Cameron in 2018, at which point it will likely be retired forever.

TheArnoldFans asked star Arnold Schwarzenegger if he would return for the next film, the as-yet-untitled 'Terminator 6', to which he replied "Yes, of course, next year". It isn't currently known if he will return for the seventh and final film, although I'd be willing to put money on it - even if he will be 70 years old by the time it comes around.

'Terminator: Genisys' hits screens worldwide on July 1st, for the first time in 3D, helmed by 'Thor: The Dark World' director Alan Taylor.

Continue reading: Schwarzenegger will be back for 'Terminator 6' (full post)