CPU, APU & Chipsets News - Page 149

All the latest CPU and chipset news, with everything related to Intel and AMD processors & plenty more - Page 149.


AMD teams with Synopsys IP for 14/16nm APU/GPU products, teases 10nm

Anthony Garreffa | Sep 21, 2014 12:47 AM CDT

AMD has announced a new multi-year IP agreement with Synopsys that will see the chipmaker receiving a slew of Synopsys DesignWare intellectual property on advanced 16/14nm technologies, as well as its upcoming 10nm FinFET technology. In return, AMD will be handing over specific IP and engineering resources to the company. Considering NVIDIA just launched its more-than-impressive GeForce GTX 900 series, there's never been a better time for AMD to partner with someone who can handle the move to smaller processes.

The agreement sees AMD securing interface, memory compiler, logic library and analog IP from Synopsys, which it will use to create future generations of its chips on the 14nm and 16nm FinFET manufacturing processes, eventually moving onto the 10nm process down the track. Synopsys will reportedly hire around 150 of AMD's IP and R&D engineers and receive access to AMD's leading interface and foundation IP. AMD will save money with this deal, but it does leave some holes in its resources, while Synopsys only stands to gain.

If you've never heard of Synopsys, it is a leading power in silicon-proven IP for advanced process technologies, helping chip designers with a broad range of high-end IP for integration into systems-on-chip (SoCs), as well as delivering expert technical support. This expertise allows companies like AMD to come to Synopsys rather than pouring that money into their own R&D. But AMD still packs a punch when it comes to the complex IP used in advanced microprocessors and GPUs. AMD will gain silicon-proven IP for its chips over the coming years, while handing over interface and foundation IP, as well as engineers, to Synopsys, something the company explains will give it the ability to "focus its valuable engineering resources on its ongoing product differentiation and IP reuse strategy".

Continue reading: AMD teams with Synopsys IP for 14/16nm APU/GPU products, teases 10nm (full post)

Samsung rumored to be working on its own GPU

Anthony Garreffa | Sep 16, 2014 7:56 AM CDT

We found out not too long ago that NVIDIA was suing Samsung and Qualcomm while leaving other companies alone, even those using chips and parts from Samsung and Qualcomm, but now we might have found out why: Samsung is rumored to be working on its own GPU.

The news comes from Fudzilla, and it's just a rumor right now, but the company has reportedly been hiring people from the likes of AMD, NVIDIA and Intel. If Samsung were to build its own GPU, it would compete directly against Qualcomm and NVIDIA, the latter of which has a very capable SoC in its Tegra K1 processor.

If Samsung did build its own GPU, it would save itself from having to license one from another company, as it would have nearly all of the components it needs for a flagship device built in-house, from the screen right down to the GPU.

Continue reading: Samsung rumored to be working on its own GPU (full post)

No new CPU architectures from AMD until at least 2016

Anthony Garreffa | Sep 9, 2014 12:29 AM CDT

Intel has just launched its new high-end Haswell-E platform, but what is AMD doing? Well, according to a recent interview with Bloomberg, AMD won't be releasing a new microarchitecture until 2016, with any CPU or APU products released between now and then based on its current architectures.

AMD CEO Rory Read talked with Bloomberg and didn't reveal any information on future microarchitectures, but he did say that the hardware coming out next year will be based on existing architecture, and won't be much better than what AMD has on the market now. Read said: "AMD engineers are now proving they can deliver new designs on time, something that didn't happen in the past."

In 2015, we can expect AMD to release new APUs based on the low-power Puma+ and high-performance Steamroller architectures. Neither architecture is expected to deliver much additional performance, but we should expect lower power consumption and heat output.

Continue reading: No new CPU architectures from AMD until at least 2016 (full post)

Intel's Core i7-5960X CPU has already been overclocked to 6.2GHz

Anthony Garreffa | Aug 28, 2014 7:39 AM CDT

Intel will be launching its new Haswell-E based Core processors tomorrow, but some leaked benchmarks are already surfacing over at Videocardz and WCCFTech. When it comes to games, the new Core i7-5960X is around 14% faster than its predecessor, the Core i7-4960X.

The new Core i7-5960X is Intel's first 8-core processor for the consumer market, with a stock frequency of 3GHz and a Turbo Boost frequency of 3.5GHz. There's 20MB of L3 cache, a 140W TDP and support for DDR4 memory. We should expect a price of $999, which isn't too bad for a processor of this calibre.

When it comes to 4K video editing, the new Core i7-5960X is around 20% faster than the 4960X, and around 32% faster in 3D rendering. 'Thayn3' on the Coolaler forums was able to overclock the Core i7-5960X to 4GHz using just 1.2V, but there has been an insane overclock found online, with the new 16-thread CPU clocked up to 6.2GHz on LN2.

Continue reading: Intel's Core i7-5960X CPU has already been overclocked to 6.2GHz (full post)

Intel Core i7-5960X Haswell-E CPU spotted in leaked photos

Anthony Garreffa | Aug 17, 2014 11:27 PM CDT

It shouldn't be long until Intel officially launches its new X99 chipset along with a slew of new high-end processors, with the star of the Haswell-E show being the upcoming Core i7-5960X processor. This new CPU has been spotted in some newly leaked photos that Hermitage Akihabara got its hands on.

Intel's new LGA 2011-v3 based Haswell-E processors are expected to be released on August 29, with three models to be unveiled: the Core i7-5960X, the Core i7-5930K and the Core i7-5820K. The top-of-the-line Core i7-5960X will have eight physical cores and another eight logical cores through Hyper-Threading, for a total of 16 threads - a monster of a consumer CPU.

The new Core i7-5960X will also feature 20MB of L3 cache, quad-channel DDR4 RAM support, and 40 PCIe 3.0 lanes in total. The default clock speed on the Extreme CPU will be 3GHz, and it'll be built on Intel's 22nm process.

Continue reading: Intel Core i7-5960X Haswell-E CPU spotted in leaked photos (full post)

NVIDIA's new Denver-based Tegra K1 is 64-bit, very powerful

Anthony Garreffa | Aug 11, 2014 11:29 PM CDT

NVIDIA's Tegra K1 processor is quite the performance powerhouse, with four Cortex-A15 CPU cores clocked at up to 2.3GHz and 192 Kepler-based GPU cores handling the graphics side of things. We've seen the Tegra K1 power NVIDIA's cheap but very powerful Shield Tablet, but the company is already showing off the next version of its SoC.

At Hot Chips, a technical conference in the world of high-performance chips, NVIDIA unveiled more details on the 64-bit version of its Tegra K1 processor. The 64-bit Tegra K1 pairs the same 192-core Kepler GPU with NVIDIA's own custom-designed 64-bit, dual-core "Project Denver" CPU, which is fully ARMv8 architecture compatible. The big shift here is that the Denver-based Tegra K1 is a dual-core chip with a clock speed of up to 2.5GHz and 64-bit capability, whereas the current Tegra K1 is a quad-core chip with 32-bit capabilities. This makes the 64-bit Tegra K1 the world's first 64-bit ARM processor for Android, and it should demolish the competition when it comes to performance.

NVIDIA has used some clever optimizations, as well as the advanced technology in its Denver CPU cores, to deliver performance from the dual-core Denver-based Tegra K1 that rivals even the four and eight-core CPUs we find in our mobile devices today. Better yet, the 64-bit Tegra K1 processor offers PC-class performance, extended battery life, better gaming and multi-tasking, and much more. NVIDIA will see its 64-bit Denver-based Tegra K1 processor baked into mobile devices later this year, with the company also teasing that it is already working on support for the upcoming Android L release on the 64-bit Tegra K1.

Continue reading: NVIDIA's new Denver-based Tegra K1 is 64-bit, very powerful (full post)

Intel says there are no delays for its 10nm process technology

Roshan Ashraf Shaikh | Jul 17, 2014 5:26 AM CDT

Intel is facing troubles with the schedule of its 14nm manufacturing process, but the chipmaker says this won't affect its 10nm fabrication schedule. Intel may be under pressure to reassure its investors, as it has postponed the 14nm processor production that was supposed to roll out from its Fab 42 plant in Arizona, USA. 10nm is scheduled for mass production in 2016.

Intel CEO Brian Krzanich said during the company's quarterly conference call with financial analysts and investors: "We have done no changes or shift to our 10nm schedule, but we will not really talk about 10nm schedules until next year". However, Intel didn't reveal any details about the production of these chips.

This might be the reason why Intel may show off its first 10nm wafer during the upcoming Intel Developer Forum 2014. The demonstration of these wafers should reinvigorate investors' faith in Intel's schedule and in its tick-tock strategy, despite the 14nm delays. It is also rumoured that Taiwan-based semiconductor maker TSMC is making plans to fabricate 10nm chips, which may pressure Intel to stay ahead of schedule with its 10nm roadmap.

Continue reading: Intel says there are no delays for its 10nm process technology (full post)

Intel at IDF: 14nm CPUs and 10nm wafers to be shown off

Anthony Garreffa | Jul 13, 2014 5:01 AM CDT

It looks like things could get quite good at the Intel Developer Forum (IDF) in September, according to DigiTimes' sources. These sources have said that Intel will show off its 14nm processors in September, and it will also be teasing its 10nm wafers at the event.

DigiTimes' sources said: "Intel will release its 14nm Core M-series processors in the fourth quarter and 14nm Broadwell-based processors in January 2015". Intel is reportedly experiencing weaker-than-expected yields and has a lot of 22nm-based processors in its inventory, and mixed with poor PC demand right now, Intel has reportedly "postponed 14nm processor production, which is planned to be conducted at its Fab 42 in Arizona, the US", according to these sources.

According to these sources, we should expect TSMC to ramp up mass production of its 20nm process in Q3 2014, with the company announcing its 16nm FinFET process in 2015, followed by a 10nm process entering mass production in 2016.

Continue reading: Intel at IDF: 14nm CPUs and 10nm wafers to be shown off (full post)

AMD Carrizo APU rumoured to use 28nm process and stacked DRAM

Roshan Ashraf Shaikh | Jul 13, 2014 4:24 AM CDT

It seems that AMD is working on a new APU, codenamed 'Carrizo', built on a 28nm process with stacked DRAM. It is said that these APUs will benefit from an HBM (High Bandwidth Memory) implementation compared to their current DIMM-based counterparts.

Though the reports are unconfirmed, it is known that AMD is collaborating with Hynix on stacked DRAM. HBM provides higher bandwidth, which will benefit the APU, especially its onboard graphics core. The APU itself will be made on a 28nm process, but the onboard HBM die will be based on a 20nm process. It's speculated that Carrizo's APU core die is smaller than Kaveri's.

HBM can provide a maximum bandwidth of 128-256GB/s, which would prove to be a better implementation than DDR4. These APUs will most likely use the FM2+ socket and maintain a 65W TDP envelope. If AMD incorporates an on-package DRAM solution, it will allow higher memory speeds and lower latency than even a DDR4 implementation, and it would cost less than integrating L3 cache. Whether stacked DRAM will be implemented across the entire Carrizo APU lineup, and how feasible it is for low-cost APUs in particular, is currently unknown.
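For a rough sense of where that 128-256GB/s figure comes from, here is a minimal back-of-the-envelope sketch. It assumes first-generation HBM figures (a 1024-bit interface per stack at 1Gb/s per pin) and dual-channel DDR4-2133 for comparison; these numbers come from public HBM and DDR4 specs, not from the Carrizo report itself.

```python
# Peak-bandwidth arithmetic for stacked DRAM vs. DDR4.
# HBM gen-1 figures assumed (1024-bit bus per stack, 1 Gb/s per pin),
# not taken from the article.

def peak_bandwidth_gbs(bus_width_bits: int, data_rate_gbps_per_pin: float) -> float:
    """Return peak bandwidth in GB/s for a given bus width and per-pin data rate."""
    return bus_width_bits * data_rate_gbps_per_pin / 8  # convert bits to bytes

one_stack = peak_bandwidth_gbs(1024, 1.0)   # ~128 GB/s per HBM stack
two_stacks = 2 * one_stack                  # ~256 GB/s with two stacks
print(f"HBM: {one_stack:.0f} GB/s (1 stack), {two_stacks:.0f} GB/s (2 stacks)")

# For comparison, dual-channel DDR4-2133 (assumed): 2 channels x 64 bits x 2.133 GT/s
ddr4 = 2 * peak_bandwidth_gbs(64, 2.133)    # ~34 GB/s
print(f"Dual-channel DDR4-2133: {ddr4:.1f} GB/s")
```

That back-of-the-envelope gap is why the onboard graphics core is expected to be the biggest beneficiary of on-package memory.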

Continue reading: AMD Carrizo APU rumoured to use 28nm process and stacked DRAM (full post)

IBM spends $3 billion on new R&D, will step away from using silicon

Anthony Garreffa | Jul 11, 2014 12:30 AM CDT

IBM thinks the days of silicon are numbered, and it will spend $3 billion over the next five years on finding ways to create future generations of microprocessors. Tom Rosamilia, Senior VP of IBM's Systems & Technology Group, says: "We really do see the clock ticking on silicon".

Right now, IBM's very latest silicon components are built on a 22nm process, but the company is looking five years into the future, where transistors will become so small that it will be hard to maintain a reliable on and off state. Rosamilia adds: "As we get into the 7 nanometer timeframe, things really begin to taper off".

This has IBM looking at new ways of making components work, which is what this new round of research will fund. The company has faith in one alternative to silicon: carbon nanotubes. The technology still needs considerable work before carbon nanotube-based processors can be fabricated as a genuine alternative to silicon. Another route IBM could take is silicon nanophotonics, which uses light instead of electrical signals to blast data around the chip.

Continue reading: IBM spends $3 billion on new R&D, will step away from using silicon (full post)