Artificial Intelligence - Page 11
Discover the latest in artificial intelligence - including generative AI breakthroughs, ChatGPT updates, and major advancements from OpenAI, Google DeepMind, Anthropic, and xAI. Learn how NVIDIA is driving AI innovation with cutting-edge hardware, and explore impressive real-world demos showcasing the future of AI technology.
Samsung preps for 3-4 year long-term competition with TSMC on its next-gen 2nm process node
Samsung Electronics is reportedly focusing on substance over speed when it comes to its bleeding-edge 2nm process node, with the South Korean company "preparing for a 3-4 year long-term competition with TSMC".
In a new report from DigiTimes picked up by insider @Jukanrosleve on X, we're hearing that Samsung's new 2nm process node is expected to launch in the second half of this year, behind TSMC in yield, but with the company "steadily identifying areas for improvement".
US chip giants Apple, AMD, and NVIDIA are running to TSMC to have their next-gen 2nm chips made at the Taiwanese semiconductor fabs, but Samsung plans to leverage its improved yields and cost-effectiveness by 2026 to prepare for the long game.
Foxconn will begin adopting NVIDIA's next-gen Vera Rubin AI servers, expected in 2H 2026
Foxconn is pushing hard into its AI server business: after pumping out NVIDIA GB200 AI servers in Q2 2024 and beginning production of NVIDIA's new GB300 AI servers, it's now preparing for NVIDIA's next-gen Vera Rubin AI servers starting this month.
In a new report from UDN picked up by insider @Jukanrosleve, the Taiwanese manufacturer expects NVIDIA's next-gen Vera Rubin AI servers to be the main product driving performance growth from 2026 to 2027. Foxconn boss Liu Yangwei has previously said that the company has always been a major customer's co-development partner when it comes to AI servers, and that "major customer" is NVIDIA.
Foxconn participates in the development of next-gen products with NVIDIA, and supply chain sources say that according to Foxconn's shipping schedule, GB200 AI servers are shipping this year, while next-gen GB300 AI servers have already started small-scale production and will become its main AI server product in the first half of 2026.
NVIDIA's new B30 AI GPU planned to ship in Q4: 10-20% slower than H20, but 30-40% cheaper
NVIDIA's upcoming B30 AI GPU will begin shipping in Q4 of this year, with performance expected to be around 10-20% slower than the H20, while being 30-40% cheaper.
In new reports from Taiwanese outlet Ctee, we're hearing that thanks to the US government allowing NVIDIA to resume exporting its H20 AI GPUs into China, the market expects shipments to hit 400,000 units in the last six months of the year, with the server cooling supply chain set to benefit.
NVIDIA originally launched the downgraded H20 AI GPU in response to previous US export restrictions, but when those restrictions tightened again in early 2025 and the H20 itself was blocked, NVIDIA took a $4.5 billion hit in lost China revenue earlier this year.
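Taken together, the rumored figures suggest the B30 could offer better performance per dollar than the H20. A quick sketch of that arithmetic, normalizing the H20 to 1.0 (the 10-20% and 30-40% ranges come from the report above; everything else is illustrative):

```python
# Rough perf-per-dollar comparison of the rumored B30 vs the H20, using
# the reported ranges (10-20% slower, 30-40% cheaper). All figures are
# normalized so the H20 = 1.0; these are illustrative, not NVIDIA specs.

def perf_per_dollar(relative_perf: float, relative_price: float) -> float:
    """Value ratio versus the H20 baseline (H20 = 1.0)."""
    return relative_perf / relative_price

# Best case for the B30: only 10% slower, 40% cheaper.
best = perf_per_dollar(0.90, 0.60)
# Worst case: 20% slower, only 30% cheaper.
worst = perf_per_dollar(0.80, 0.70)

print(f"B30 value vs H20: {worst:.2f}x to {best:.2f}x")
```

Even at the worst end of both ranges, the value ratio stays above 1.0, which is presumably why the market expects strong demand despite the lower performance.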
NVIDIA's next-gen GB300 AI servers now in production, will begin shipping in September
NVIDIA's next-gen GB300 AI servers have entered production, with the new GB300 "Blackwell Ultra" AI servers to begin shipping in September... on time, and ready to rock and roll.
In a new report from DigiTimes picked up by insider @Jukanrosleve on X, we're hearing that NVIDIA's new GB300 "Blackwell Ultra" AI servers have entered production according to supply chain sources. Industry sources add that they expect a smooth production trajectory into the second half of 2025, which is attributed to a strategic shift that has made production easier for AI server manufacturers.
NVIDIA decided to reuse the motherboard design from its current GB200 platform -- known as the Bianca board -- for its new GB300 platform. This move has significantly shortened the learning curve for suppliers, many of which were struggling to keep up with NVIDIA's incredibly fast product update cycle in the past. One ODM representative noted: "there are no major issues with the GB300 at this stage. Shipments should proceed smoothly in the second half".
Zuckerberg confirms multiple GW AI clusters: Prometheus in 2026, 5000MW+ Hyperion in the future
Meta is pushing into the AI supercomputer space in a big way, with Mark Zuckerberg saying that the social media giant plans to add over 5GW of AI compute power in the years ahead.
Zuckerberg explained on his Threads post: "For our superintelligence effort, I'm focused on building the most elite and talent-dense team in the industry. We're also going to invest hundreds of billions of dollars into compute to build superintelligence. We have the capital from our business to do this".
The Meta CEO continued: "We're actually building several multi-GW clusters. We're calling the first one Prometheus and it's coming online in '26. We're also building Hyperion, which will be able to scale up to 5GW over several years. We're building multiple more titan clusters as well. Just one of these covers a significant part of the footprint of Manhattan. Meta Superintelligence Labs will have industry-leading levels of compute and by far the greatest compute per researcher. I'm looking forward to working with the top researchers to advance the frontier!".
NVIDIA's new China-specific RTX 6000D rumored, expected to ship 2 million units in 2025
NVIDIA CEO Jensen Huang is in China right now, with news that the company is preparing to launch its new RTX 6000D AI GPU with the card expected to ship 2 million units in 2025.
NVIDIA has confirmed its new RTX 6000D will launch in Q3 2025, manufactured on TSMC's 4nm process node, with a shipment target of around 2 million units before the end of 2025, filling a revenue gap of over $10 billion, according to a new report from DigiTimes picked up by insider @Jukanrosleve on X.
The new RTX 6000D and the Blackwell AI GPU series have driven 4nm production capacity at TSMC to "unprecedented levels", which has significantly contributed to its revenue. The US government banned NVIDIA's Hopper H20 AI GPU earlier this year, causing the company to immediately recognize $5.5 billion in losses, but the H20 is now ready to ship to China again, with the company also preparing the new RTX 6000D card for the country.
SK hynix supplies 'early' HBM4 samples, testing will take longer than HBM3E for AI chip makers
SK hynix started supplying its next-gen HBM4 memory in March 2025, but these are "early" versions of HBM4, with qualification tests expected to take longer than the same tests did for HBM3E.
HBM4 qualification tests are taking longer because of the generational changes from HBM3E, whose memory samples were sent in a nearly complete state. SK hynix sent HBM3E samples to its clients in August 2023, with mass production starting in March 2024 -- just 7 months between sample supply and mass production.
HBM4 in its early state will require modifications, with projections that qualification could take longer than 7 months; the increased technical difficulty of the generational changes is cited as a factor that could extend the testing period. The number of I/O terminals has doubled from HBM3E (2048 for HBM4 versus 1024 for HBM3E). Additionally, HBM4 is the first generation to have its logic die (base die) produced at an external foundry (TSMC).
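The doubled I/O count is the headline change here: per-stack bandwidth scales with pin count times per-pin data rate. A minimal sketch of that arithmetic, assuming illustrative per-pin rates (not confirmed SK hynix specs):

```python
# Per-stack bandwidth sketch for HBM3E vs HBM4, showing why doubling the
# I/O count (1024 -> 2048 pins, per the article) matters. The per-pin
# data rates below are illustrative assumptions, not SK hynix figures.

def stack_bandwidth_gb_per_s(io_pins: int, pin_rate_gbps: float) -> float:
    """Aggregate stack bandwidth in GB/s: pins * per-pin rate (Gbit/s) / 8."""
    return io_pins * pin_rate_gbps / 8

hbm3e = stack_bandwidth_gb_per_s(io_pins=1024, pin_rate_gbps=9.6)  # ~1.2 TB/s
hbm4  = stack_bandwidth_gb_per_s(io_pins=2048, pin_rate_gbps=8.0)  # ~2.0 TB/s

print(f"HBM3E: {hbm3e:.0f} GB/s, HBM4: {hbm4:.0f} GB/s")
```

The takeaway: even at a more conservative per-pin rate, the wider interface pushes per-stack bandwidth well past HBM3E, which is exactly the kind of interface change that makes qualification testing harder.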
NVIDIA's new B30 AI GPU for China expected to have significant demand, 75% as fast as the H20
NVIDIA's new China-specific B30 AI GPU has performance of around 75% of the H20 AI GPU, while demand for the new B30 is "significant" according to the latest reports.
In a new post on X by insider @Jukanrosleve, we're hearing that China's major internet companies estimate the performance of NVIDIA's new B30 AI GPU at "approximately 75% that of the H20". Chinese tech companies have reportedly placed orders for hundreds of thousands of units -- orders worth over $1 billion -- in late June, with deliveries expected in August.
Another large Chinese tech company reportedly plans to increase its Q3 2025 capital expenditure and intends to order 300,000 units of NVIDIA's new B30 AI GPU, with deliveries scheduled for September.
SK hynix to change wafer cutting for HBM4 memory and 400-layer NAND flash, pushing new limits
SK hynix is reportedly changing its wafer cutting process for next-generation memory manufacturing, paving the way for its new HBM4 and 400-layer-plus NAND flash memory as wafers become increasingly thin, pushing existing cutting processes to their absolute limits.
In a new story from ETnews picked up by insider @Jukanrosleve on X, industry sources have said that SK hynix plans to introduce femto-second grooving and full-cut processes for HBM4 wafer cutting. The news was confirmed by the South Korean memory manufacturer in discussing a Joint Evaluation Project (JEP) for new wafer cutting equipment with laser equipment partners.
It's reported that technology tests are already underway with some partners, with an industry official saying: "SK hynix is planning a major change to its existing wafer cutting methods and is discussing numerous technical solutions with partners".
NVIDIA's new B30 AI GPU won't be sold before September, Chinese companies testing samples now
NVIDIA's new B30 AI GPU won't be arriving until September, with Chinese customers needing to wait a couple more months according to the latest report.
In a new report published by the Financial Times picked up by insider @Jukanrosleve on X, we're hearing that the B30 won't be sold before September because NVIDIA wants prior assurance from the Trump administration that the card won't breach the new US export regulations, avoiding the risk of the B30 being banned shortly after it's introduced.
NVIDIA's new B30 AI GPU specifications could change between now and September, depending on its discussions with the US government, and if the specifications change, Jukan says that it might primarily involve enabling NVLink. The new B30 AI GPU is rumored to have NVLink disabled, making it more of a modified RTX PRO 6000 workstation GPU (as it doesn't have HBM, and uses GDDR7 instead).
Meta paid an insane $200 million signing bonus to secure Apple's head of foundation AI models
Meta wants to be one of the major leaders in AI and is spending big to get there, recently acquiring Apple's head of foundation AI models and paying him a massive $200 million signing bonus.
Apple's now-former AI models executive, Ruoming Pang, left Apple for Meta, with Bloomberg reporting the social media giant paid Pang a wallet-busting $200 million signing bonus. The $200M compensation package from Meta is for Pang to work in its new superintelligence labs, with a base salary, signing bonus, and Meta shares. The stocks issued to Pang are the biggest part of his package, but we don't have a percentage breakdown to see where that $200 million went.
Meta's huge $200 million deal to secure Apple's former AI boss is structured to compensate for the income he leaves behind at Apple, with the signing bonus and annual pay offsetting those lost opportunities; the package reportedly vests over time, so Pang would miss out on significant stock bonuses if he resigned from Meta early.
Chinese AI companies plan new facility in China with 115,000 NVIDIA AI GPUs, even with chip ban
Chinese AI companies want to secure 115,000 NVIDIA AI GPUs to power new data centers in the desert, in the midst of US export restrictions stopping high-end AI chips from entering China.
In a new report from Bloomberg, we're hearing that futuristic structures rising in China's western deserts are data centers that Chinese AI companies want to equip with high-end American semiconductors -- chips that the US government doesn't want China to obtain. Bloomberg News has analyzed investment approvals, tender documents, and company filings that show Chinese AI companies aim to install over 115,000 of NVIDIA's high-end AI GPUs.
The companies want to install the AI chips in over 36 data centers across China's western deserts, with operators in Xinjiang planning to house most of the AI chips in a single compound, which, if achieved, could be used to train foundational LLMs (large language models) like those of Chinese AI startup DeepSeek.
NVIDIA AI GPUs used as collateral for loans, startup secures $10B in funding with AI chips
NVIDIA AI GPUs are being used as collateral for huge loans, with UK-based startup Fluidstack using its arsenal of NVIDIA AI chips to secure over $10 billion in loans.
In a new report from The Information picked up by insider @Jukanrosleve on X, we're hearing that in the past CoreWeave pioneered a new financial model by raising $9.9 billion in funding through GPU-backed financing to purchase AI chips and leasing them out to clients including OpenAI, effectively "paying off debt with GPUs".
Led by CoreWeave, multiple AI cloud computing startups are expanding their financing using high-performance AI chips as collateral, with the total loan volume exceeding $20 billion. However, there are potential risks involved with this financing model because of the short product lifecycle of NVIDIA GPUs which sees the AI chips depreciating rather quickly.
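The depreciation risk mentioned above can be sketched with a simple straight-line model; the $30,000 unit price and 4-year useful life below are illustrative assumptions, not figures from the report:

```python
# Straight-line depreciation sketch showing why GPU-backed loans carry
# risk: the collateral's value erodes quickly over a short useful life.
# The $30,000 unit price and 4-year life are illustrative assumptions.

def collateral_value(price: float, useful_life_years: int, age_years: float) -> float:
    """Straight-line residual value of one GPU, floored at zero."""
    remaining = max(0.0, 1 - age_years / useful_life_years)
    return price * remaining

price = 30_000.0
for age in (0, 1, 2, 3, 4):
    print(f"year {age}: ${collateral_value(price, 4, age):,.0f}")
```

Under these assumptions, half the collateral's value is gone by year two, and a faster product cycle (as NVIDIA moves from Hopper to Blackwell to Rubin) would steepen that curve further, which is the risk lenders are weighing.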
Research indicates heavy AI users are burning out at work and 'twice as likely to quit'
Some new research is suggesting that AI may cause burnout in employees who heavily use the tech.
A survey from the Upwork Research Institute (spotted by ZDNet) drew some interesting conclusions and highlighted a big difference in the impact of AI usage in staff members employed at a company versus freelancers.
There's a clear enough message that AI can help drive better productivity, with 77% of executives saying that they had observed gains in that department thanks to the tech, and employees estimating that they're 40% more productive using AI tools.
Grok is now calling itself 'MechaHitler' in a rampant hop over its guardrails
The artificial intelligence chatbot Grok has received a new update from its creators at xAI and has now been caught spouting antisemitic posts across X. The creators of the chatbot have since recognized the problem and responded.
The latest update to Grok is the fourth iteration of the chatbot, which went live on July 9. Shortly after the update was pushed out, users began prompting the newly upgraded chatbot with a range of different questions, some of which probed to see where the new guardrails for its responses were set. Users were surprised when Grok began posting antisemitic responses to seemingly neutral questions, with the chatbot even going as far as to call itself "MechaHitler" and praise Hitler.
Grok jumping over its guardrails with this new update comes after xAI founder Elon Musk publicly stated he was unhappy with how the AI chatbot answered questions, saying Grok produced answers that were too "woke". On Friday, Musk said that Grok had been "improved significantly," and users would notice a difference after the July 9 update was pushed out.
Meta recruits Apple's top AI engineer in multi-million dollar deal, enhancing AI supremacy
Meta has just poached Apple's best AI engineer, another dagger into the heart of Apple's AI efforts, which have been faltering since the introduction of Apple Intelligence.
In a new report from Bloomberg, we're hearing that Ruoming Pang, a distinguished engineer and manager in charge of Apple's foundation models team, is leaving the company and joining Meta's new superintelligence group, "according to people with knowledge of the matter".
Meta offered Pang a deal he couldn't refuse: a huge package worth tens of millions of dollars per year, as Meta boss Mark Zuckerberg has been on an AI hiring spree, pulling in major AI leaders including Scale AI's Alexandr Wang, startup founder Daniel Gross, and former GitHub CEO Nat Friedman "with high compensation", adds Bloomberg.
US secretary of state impersonated by AI: foreign ministers and Congress members contacted
Artificial intelligence-powered tools are fueling a growing problem of impersonation, and one more example can be added to the pile of cases where AI is used to assume the identity of an individual. That example is US Secretary of State Marco Rubio.
A new report from The Washington Post has revealed an imposter pretending to be Rubio contacted several foreign ministers, a US governor, members of Congress, and other officials by sending them voice and text messages that mimicked the voice of Rubio, and somewhat more impressively, his writing style. Currently, authorities don't know who is behind the impersonation attempt, or what the end goal was of the attempts at contacting the government officials.
However, authorities do believe the goal of the impersonation was to gain access to government information or accounts. The Washington Post cites an unnamed senior US official and a State Department cable as the sources of the news, with the source saying the imposter "contacted at least five non-Department individuals, including three foreign ministers, a U.S. governor, and a U.S. member of Congress."
Samsung Foundry stakes survival on 2nm process node with a new special directive to fight TSMC
Samsung Electronics is struggling with its foundry division, but a new special directive sees Samsung Foundry staking its survival on its new 2nm process node.
In a new post by Chosun picked up by insider @Jukanrosleve on X, we're hearing that Samsung Foundry aims to lead the advanced semiconductor market later this year and into 2026, with the South Korean firm setting a goal of securing large tech companies as customers by raising 2nm process yields to 70% within the year.
Samsung Foundry has been recording trillions of won in operating losses quarter after quarter, while its new Taylor, Texas plant is set to spin up into operations in 2026. The company has been struggling with low utilization rates, and there are concerns that if it continues to struggle to win orders from larger customers once its US-based foundry is established, the scale of its losses could get out of control and start eroding the company's entire operating profit.
US mulls AI chip restrictions for Malaysia and Thailand, to stop flow of AI chips to China
The US Commerce Department is exploring new AI chip export restrictions that would stop AI chips from being smuggled through Southeast Asia and into China.
In a new report from Bloomberg, we're hearing that the US Commerce Department is seeking to close the loopholes that are seeing the flow of AI chips into China. We've seen Chinese companies using loopholes like renting AI GPUs or accessing them through Southeast Asian countries like Malaysia and Thailand.
The US Commerce Department is preparing a draft rule that would place new restrictions on accessing AI chips through backdoors in Malaysia and Thailand, with claims that both countries would be slapped with increased restrictions. One proposed measure would allow AI chip exports to those nations only when the chips are used by companies headquartered in the United States that operate subsidiaries in Malaysia and Thailand.
Elon Musk says xAI is buying an overseas power plant, and shipping it to the United States
Elon Musk's xAI will be buying an overseas power plant and shipping the entire thing to the United States, so that it can use the additional power to drive its new AI data center.
Dylan Patel from SemiAnalysis outlined xAI's recent progress in a podcast, with Elon Musk himself confirming it with a simple reply of "accurate". It's definitely quite interesting that xAI would be buying a power plant from overseas and having it shipped to the US, but Elon's AI startup needs as much power as it can get.
The Colossus AI supercomputer is one of the fastest supercomputers on the planet, packing around 200,000 of NVIDIA's new H200 AI GPUs and consuming an insane 300MW of power... xAI has been struggling to power the supercomputer as it is, and that's why Elon is buying a power plant from overseas.
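The reported ~300MW figure is roughly consistent with a back-of-the-envelope estimate; the per-server draw and PUE values below are illustrative assumptions, not xAI figures:

```python
# Back-of-the-envelope check on the reported ~300MW for Colossus:
# ~200,000 GPUs packed into 8-GPU servers, with an assumed full-system
# draw per server and a facility overhead factor (PUE) for cooling etc.
# The 10kW server draw and 1.2 PUE are illustrative assumptions.

gpus = 200_000
gpus_per_server = 8
server_watts = 10_000    # assumed full-system draw (GPUs, CPUs, NICs)
pue = 1.2                # assumed power usage effectiveness

servers = gpus // gpus_per_server           # number of 8-GPU servers
it_power_mw = servers * server_watts / 1e6  # IT load in megawatts
total_mw = it_power_mw * pue                # facility total with overhead

print(f"{servers} servers, ~{total_mw:.0f} MW total")
```

Under these assumptions the estimate lands right around the reported figure, which helps explain why xAI is shopping for an entire power plant rather than relying on grid capacity alone.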