CPU, APU & Chipsets - Page 173

All the latest CPU and chipset news, with everything related to Intel, AMD, ARM, and Qualcomm processors & plenty more - Page 173.


Intel unleashes its new 44-threaded Xeon CPU, supports 384GB of DDR4

Anthony Garreffa | Mar 31, 2016 8:28 PM CDT

Intel has just unleashed its new Broadwell-EP family of processors, starting with the Xeon E5-2600 V4 series, which tops out at a huge 22 CPU cores. Thanks to Hyper-Threading technology, that's a total of 44 threads of CPU power, which is simply insane for the prosumer market - especially those who work in video editing.

The new Intel Xeon E5-2600 V4 family hasn't been completely detailed by the company, but the top-end part will be the Xeon E5-2699 V4, which packs a base clock speed of 2.2GHz, 55MB of cache, and a pretty tame 145W TDP. What will this 44-threaded processor set you back? A hefty $4115, which works out to $187 per CPU core. If we consider that the 8-core/16-threaded Core i7-5960X costs $1059 (which works out to $132 per CPU core), then the new Xeon E5-2699 V4 isn't too badly priced at all.
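If you want to double-check that per-core math yourself, here's a quick back-of-the-envelope sketch using only the prices and core counts quoted above - nothing official, just arithmetic:

```python
# Rough per-core price comparison using the figures quoted above.
chips = {
    "Xeon E5-2699 V4": {"price": 4115, "cores": 22},
    "Core i7-5960X":   {"price": 1059, "cores": 8},
}

for name, spec in chips.items():
    per_core = spec["price"] / spec["cores"]
    print(f"{name}: ${per_core:.0f} per core")
# Xeon E5-2699 V4: $187 per core
# Core i7-5960X: $132 per core
```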

The new Broadwell-EP powered Xeon processors can take DDR4-2400, with up to 12 DIMMs per CPU socket. If you're using registered DIMMs, you can cram in up to 384GB of RAM per CPU, using 32GB DDR4 DIMMs. To put it simply: I want one, well - probably two.
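That per-socket memory ceiling falls straight out of the DIMM count and module size - a quick sanity check, assuming 32GB registered DIMMs in every one of the 12 slots:

```python
# Maximum RAM per socket with 32GB RDIMMs in every slot.
dimms_per_socket = 12
dimm_size_gb = 32
print(dimms_per_socket * dimm_size_gb)  # 384 GB per CPU socket
```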

Continue reading: Intel unleashes its new 44-threaded Xeon CPU, supports 384GB of DDR4 (full post)

IBM is making better, smarter AI through new processor technology

Jeff Williams | Mar 28, 2016 5:12 PM CDT

IBM's Watson and other AI systems like it are very impressive showcases of the kind of learning a well-developed deep neural network is capable of. Even Tay, the rogue Microsoft millennial AI that favors the PS4 over the Xbox One and seems to dismiss the Holocaust, is a pretty fantastic feat of software engineering and machine learning. But compared to the human mind, these machines - which rely on GPUs, CPUs, and at times even specialized ASICs to process enormous amounts of data in parallel - still take far longer to learn even simple tasks. And they can be energy intensive, far more so than the human brain. But IBM thinks, and knows, that there's a better way.

IBM and the crew at the T.J. Watson Research Center want to use a specialized processor called the resistive processing unit (RPU) - a marriage of compute with non-volatile memory - that could dramatically speed up machine learning. It does this, essentially, by letting the different parts communicate at a rate that's at least 27x faster than a traditional DNN setup. Training involves moving forward and backward through the network, repeatedly reading and updating weights stored in memory, and shuttling that data back and forth is the bottleneck in this application. Keeping the weights where the computation happens could massively increase how quickly these networks learn, making speech recognition and similar AI functions possible in what could be near-realtime.

This type of processor is only theoretical at the moment, though solving this obstacle in even an incremental fashion could bring about a sizable speed increase. The researchers even mention the ability to see an advantage of up to 30,000 times should they design and implement a device made specifically for their own DNN software: "We propose and analyze a concept of Resistive Processing Unit (RPU) devices that can simultaneously store and process weights and are potentially scalable to billions of nodes with foundry CMOS technologies. Our estimates indicate that acceleration factors close to 30,000 are achievable on a single chip with realistic power and area constraints."
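To get a feel for why storing and processing the weights in the same place matters, here's a purely illustrative NumPy sketch of the forward pass, backward pass, and outer-product weight update that a resistive crossbar could perform in place. The layer sizes and learning rate are made up, and this is in no way IBM's code - it just shows the three weight touches that a conventional CPU/GPU has to shuttle through memory on every training step:

```python
# Illustrative only: forward pass, backward pass, and weight update for one layer.
# On a CPU/GPU, the weight matrix W must travel between memory and compute for
# each of these steps; an RPU would store W and do the math where it sits.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out, lr = 256, 128, 0.01            # made-up layer size and learning rate
W = rng.standard_normal((n_out, n_in)) * 0.01

x = rng.standard_normal(n_in)               # input activations
y = W @ x                                   # forward pass (read W)
grad_y = rng.standard_normal(n_out)         # stand-in for the backpropagated error
grad_x = W.T @ grad_y                       # backward pass (read W again)
W -= lr * np.outer(grad_y, x)               # update (write W) - the step an RPU does locally
```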

Continue reading: IBM is making better, smarter AI through new processor technology (full post)

Intel's clock is broken, company will lengthen use of its 14nm tech

Anthony Garreffa | Mar 23, 2016 2:25 AM CDT

It's the end of an era: Intel has confirmed through its latest 10-K filing that its famous 'tick-tock' process development cycle is dead.

Instead of having two processor families on each die shrink, Intel will be using three or more over the coming years. The 10-K filing states that Intel expects "to lengthen the amount of time we will utilize our 14 [nanometer] and our next-generation 10 [nanometer] process technologies".

Intel will continue to release new products each year, but there will be a greater focus on architecture optimization as the development of process technology slows. So... what does this mean? It confirms that Intel's next-gen 'Kaby Lake' platform will be made on 14nm. It also confirms that the release window for 10nm from Intel will be 2017 at the earliest, and 7nm - well, that's 2019-2020 or beyond now.

Continue reading: Intel's clock is broken, company will lengthen use of its 14nm tech (full post)

AMD's upcoming Bristol Ridge APU should be faster than an Xbox One

Anthony Garreffa | Feb 27, 2016 8:47 PM CST

One of the fastest APUs from the Bristol Ridge family will be just as fast as the Xbox One, according to a new rumor from Bitsnchips.

AMD's new Bristol Ridge family will feature a flagship APU powerful enough to take on the consoles, providing a 1080p gaming experience in a small package and at a small price. AMD is expected to launch the new Bristol Ridge family at Computex, so we should expect more details in June.

As for the rumor, the flagship Bristol Ridge-based APU would feature 16 compute units that are based on the GCN 1.3 architecture. The 16 compute units would include 1024 stream processors, which is the same SP count as AMD's Radeon HD 7850. The HD 7850 launched in 2012, and was a great budget/mid-range GPU - if we see this performance in an APU, things could get very exciting for AMD.
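As a quick sanity check on that figure: each GCN compute unit contains 64 stream processors, so a 16-CU part lines up exactly with the HD 7850's shader count:

```python
# GCN compute units each pack 64 stream processors.
compute_units = 16
sp_per_cu = 64
print(compute_units * sp_per_cu)  # 1024 - same SP count as the Radeon HD 7850
```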

Continue reading: AMD's upcoming Bristol Ridge APU should be faster than an Xbox One (full post)

Marvell significantly expands ARMADA SoC open-source OS compatibility

Jeff Williams | Feb 25, 2016 3:02 PM CST

Marvell has just expanded its ARMADA SoC ecosystem - chips frequently used in NAS and other networking devices - to include native support for open-source software platforms like OpenWRT and openSUSE.

Before now, Marvell didn't officially support any software other than what initially shipped on its platforms. Adding support to the kernels of the various open-source OSes required a lot of volunteer time to make things work properly. Because of that, support was always a bit precarious, and it could take quite a while for new devices to be added to the compatibility lists.

Now, however, the 64-bit ARMv8-powered ARMADA 3700 Cortex-A53 device family and the ARMADA 7K and ARMADA 8K Cortex-A72 device families are getting full-fledged Linux kernel support as well as U-Boot support. That means they'll be compatible with a much wider range of OSes - essentially, anything with ARM support baked in can run on these chips.

Continue reading: Marvell significantly expands ARMADA SoC open-source OS compatibility (full post)

AMD expands low-power embedded G-Series SoC's Excavator and GCN

Jeff Williams | Feb 23, 2016 1:05 PM CST

AMD is doubling down on its embedded G-Series SoCs, tiny APUs designed with industrial applications in mind. The newest members of the family make up the 3rd generation of the embedded platform, putting Excavator-based cores in the high-end and Jaguar-based cores in the low-end. They're completely pin-compatible with earlier iterations of the G-Series (and with the R-Series for the high-end chips being announced), allowing easier upgrading for customers on older hardware.

The new G-Series LX fills in the low-end, pairing two Jaguar cores with GCN graphics and AMD's ARM-based security co-processor. It operates at a very low 6-15W and can withstand far harsher conditions than the typical desktop processor. Available in March, this new SoC is designed for the point-of-sale market and even the arcade gaming market. With its ARM co-processor, it might even make a good companion for industrial automation in the connected age.

The high-end G-Series marries two Excavator cores with four GCN compute units, allowing for a much heavier compute load if OpenCL is used. AMD is targeting similar industries that need higher compute performance at a low 6-15W TDP, and expects it'll be a good fit for digital signage and even set-top boxes, despite that arena being dominated by ARM. This too has an integrated ARM-based security co-processor. The new processor is also pin-compatible with the higher-end R-Series of SoCs, letting customers choose what sort of power envelope they want.
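For a rough idea of what "using OpenCL" looks like in practice, here's a minimal vector-add sketch. It assumes pyopencl and an OpenCL runtime that exposes the APU's GCN compute units - illustrative only, not anything AMD ships:

```python
# Minimal OpenCL offload example: add two vectors on whatever OpenCL device is available.
import numpy as np
import pyopencl as cl

ctx = cl.create_some_context()          # picks an available OpenCL device (e.g. the GCN iGPU)
queue = cl.CommandQueue(ctx)

a = np.random.rand(1 << 20).astype(np.float32)
b = np.random.rand(1 << 20).astype(np.float32)

mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

prog = cl.Program(ctx, """
__kernel void vadd(__global const float *a, __global const float *b, __global float *out) {
    int gid = get_global_id(0);
    out[gid] = a[gid] + b[gid];
}
""").build()

prog.vadd(queue, a.shape, None, a_buf, b_buf, out_buf)

result = np.empty_like(a)
cl.enqueue_copy(queue, result, out_buf)  # copy the GPU result back to host memory
```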

Continue reading: AMD expands low-power embedded G-Series SoC's Excavator and GCN (full post)

Analogix's new SlimPort ANX7688 chip does 4K 60FPS on phones, tablets

Anthony Garreffa | Feb 22, 2016 6:06 PM CST

MWC 2016 - Analogix is a company that never ceases to amaze me, and it has now announced its new SlimPort ANX7688 single-chip mobile transmitter. What makes the new SlimPort ANX7688 so special? Well now.

The SlimPort ANX7688 is capable of driving 4K at 60FPS (4096x2160) or 1920x1080 at 120FPS - both from a smartphone or tablet with full USB-C capabilities. Analogix is also forward-thinking with the SlimPort ANX7688, giving it the headroom to drive AR and VR technologies, which require much more video processing performance.
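A back-of-the-envelope bandwidth check shows why a DisplayPort-class link is needed for those modes - this counts active pixels only, ignores blanking, and assumes 24-bit color:

```python
# Raw (active-pixel) video bandwidth for the two headline modes, 24bpp assumed.
def raw_gbps(width, height, fps, bits_per_pixel=24):
    return width * height * fps * bits_per_pixel / 1e9

print(f"4096x2160@60:  {raw_gbps(4096, 2160, 60):.1f} Gbps")   # ~12.7 Gbps
print(f"1920x1080@120: {raw_gbps(1920, 1080, 120):.1f} Gbps")  # ~6.0 Gbps
# For comparison, a four-lane DisplayPort HBR2 link carries about 17.28 Gbps of payload.
```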

Analogix has created the SlimPort ANX7688 with Qualcomm-based USB-C smartphones and tablets in mind, as it converts the HDMI and USB interfaces to DisplayPort. This is done with Analogix integrating a converter bridge, a high-speed mux, USB-PD support for fast-charging, and the latest HDCP 2.2 content protection. Andre Bouwer, VP of Marketing for Analogix, explains: "ANX7688 puts Analogix years ahead of the competition to enable DisplayPort over USB-C capability on smartphones and tablets".

Continue reading: Analogix's new SlimPort ANX7688 chip does 4K 60FPS on phones, tablets (full post)

Intel confirms that its 10nm process is on track, will arrive in 2017

Anthony Garreffa | Feb 21, 2016 7:52 AM CST

There have been rumors of Intel delaying its 10nm technology - sparked by one of the company's own ads - but Intel has come out and squashed them.

Motley Fool spotted the ad - which has since been taken down - in which Intel said its 10nm CPU manufacturing technology would begin mass production "approximately two years" from the posting date. Intel said the advert was wrong, reiterating that its "first 10-nanometer product is planned for the second half of 2017".

Intel should be positioning itself to have 10nm server processors ready for launch in the first half of 2018, with the consumer market to continue making good use of the 14nm CPUs until 10nm supplies become available in larger numbers.

Continue reading: Intel confirms that its 10nm process is on track, will arrive in 2017 (full post)

Intel's new 16-core Xeon D-1587 is a beast, in a small 65W package

Anthony Garreffa | Feb 20, 2016 10:53 PM CST

Intel launched its new Broadwell-based Xeon D processors last week, led by the impressive Xeon D-1587. The new Xeon D family includes a handful of system-on-chip (SoC) parts aimed at the microserver and storage markets, thanks to Intel being able to build the new Xeon D family in a low-TDP package.

The flagship Xeon D-1587 is a beast in itself, with 16 physical cores and 32 logical threads. It boasts 24MB of cache and has its 16 cores clocked at 1.7GHz - best of all, it does all this at 65W. Impressive, eh? The next one down is the Xeon D-1577, which has the same 16 cores and 24MB of cache, but a 1.3GHz clock speed and only a 45W TDP. Last of all is the Xeon D-1571, which features the same 16-core goodness, falling in line with the Xeon E5 V4 processors powered by Broadwell-EP, with a 1.3GHz clock speed, 24MB of cache, and the same 45W TDP.

Intel's new Broadwell-powered Xeon D processors arrive in BGA packaging, so you'll need to purchase them on boards from Intel's partners, such as SuperMicro, GIGABYTE, and others. As for pricing, the Xeon D-1571 is priced at $1222, so expect the Xeon D-1587 to be priced a little higher than that. The next question is: what about performance? Intel has you covered there, too, but first the platform details: there's support for dual-channel DDR4-2133 or DDR3L-1600, with up to 128GB of RDIMM or 64GB of UDIMM/SO-DIMM, in ECC or non-ECC configurations. The SoC features a total of 24 PCIe 3.0 lanes and 8 PCIe 2.0 lanes, plus dual 10GbE for networking, 4 x USB 3.0 and 4 x USB 2.0 ports, and 6 x SATA 6Gbps ports for storage.

Continue reading: Intel's new 16-core Xeon D-1587 is a beast, in a small 65W package (full post)

AMD has their own recommended CPUs for VR, because Oculus hates AMD

Jeff Williams | Feb 15, 2016 8:02 PM CST

It seems that the Oculus website doesn't quite agree with AMD's current lineup of CPUs, and won't certify your PC as VR-ready if you happen to have one. It's well known, and not refuted by any means, that their performance isn't on par with Intel's current (and last) generation of CPUs, but that doesn't mean they can't provide a good VR experience. So AMD has released its own list of VR-ready processors so that AMD owners, and fans, aren't left out.

The list contains CPUs that have been internally tested to a certain standard of VR performance. Essentially, Vishera is more than capable of handling the complex tasks in VR thanks to its higher clock speeds. The FX 8350 all the way up to the FX 9590 make the list, as well as the higher-clocked six-core variants. One note, however: AMD has amended the list and taken off the APUs and the Athlon X4 880K and 870K - not because they can't do VR, but because they haven't been qualified internally yet.

Going to the Oculus site definitely shows a lack of enthusiasm, perhaps rightfully so, for 32nm technology first introduced in 2012. The platform may be somewhat old, but it isn't lacking in its ability to provide a good experience. Oculus will happily push Intel's products, of course, as well as NVIDIA's. It's somewhat disconcerting for those who have already invested in AMD parts to not see their processors mentioned anywhere regarding VR. So just remember: just because a CPU isn't on the list doesn't mean it can't play VR with decent visuals - it probably just hasn't been tested yet.

Continue reading: AMD has their own recommended CPUs for VR, because Oculus hates AMD (full post)
