
Trump’s $80 Billion Nuclear Investment and Google’s TPU Counterattack – The AI Power War Begins

Editor
6 min read

The power crisis driven by the AI boom is becoming a reality. As of November 30, 2025, news has emerged that the Trump administration has announced plans to build eight large-scale nuclear reactors, worth $80 billion (approximately 117 trillion won), to supply power to AI data centers. At the same time, Google has declared that it will sell its TPU chips, previously used only internally, to outside customers, shaking up the AI semiconductor market. Korea, by contrast, has a warning light flashing over its competitiveness after lawmakers decided to drop the 52-hour workweek exemption from the semiconductor special law.


The most striking aspect of this news is the scale of the U.S. nuclear investment. The plan is to build eight AP1000 large reactors in partnership with Westinghouse, a number I personally find quite impressive. Each reactor produces about 1,100 MW, for a combined supply of roughly 8,800 MW. That is enough to power approximately 4 million households or to run multiple large-scale AI data centers at once.
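As a quick sanity check on those figures, here is a back-of-envelope calculation in Python; the reactor count and per-reactor output come from the article above, while the household comparison is only illustrative.

```python
# Back-of-envelope check of the reactor figures cited above.
# Only the reactor count and per-reactor output come from the article;
# the per-household math below is purely illustrative.

reactors = 8
mw_per_reactor = 1_100                      # AP1000 nameplate output, MW

total_mw = reactors * mw_per_reactor
print(f"Total capacity: {total_mw:,} MW")   # 8,800 MW

households = 4_000_000
kw_per_household = total_mw * 1_000 / households
print(f"Implied draw per household: {kw_per_household:.1f} kW")  # ~2.2 kW
```

Whether 8,800 MW maps exactly to 4 million households depends on the average draw you assume per household; the point is simply that the arithmetic is in the right ballpark.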

In fact, the power consumption of AI data centers is staggering. A conversational AI service like ChatGPT is said to consume more than ten times the power of a regular Google search per query. On top of that, training large models like GPT-4 or Gemini requires thousands of GPUs running for months, which adds up to enormous energy use. The Trump administration's nuclear investment looks like a decision born of this practical necessity.
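To get a feel for the scale of such training runs, here is a rough, illustrative estimate; every input (GPU count, per-GPU power, duration) is an assumption of mine, not a figure from the article.

```python
# Rough, illustrative estimate of the energy used by a large training run.
# All inputs below are assumptions chosen for illustration only.

gpus = 10_000            # "thousands of GPUs"
watts_per_gpu = 700      # roughly an H100-class accelerator under load
days = 90                # "for months"

energy_mwh = gpus * watts_per_gpu * 24 * days / 1_000_000
print(f"~{energy_mwh:,.0f} MWh of compute energy")   # ~15,120 MWh, about 15 GWh

# Cooling and networking overhead (a PUE above 1.0) would push the
# facility-level total even higher.
```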

An interesting point is that part of the funding comes from Japan’s $550 billion investment pledge to the U.S., which can be read as a strategic move to secure both energy security and AI competitiveness within the U.S.-Japan alliance. Westinghouse’s owners, Brookfield and Cameco, have described it as the “largest scale in decades,” underscoring just how big the project is.

Google TPU’s Counterattack – Cracking NVIDIA’s Monopoly

Meanwhile, more exciting changes are occurring in the AI semiconductor market. Google announced that it would sell its TPU (Tensor Processing Unit), previously used only for its cloud services, to external companies like Meta. This is interpreted as the emergence of a serious competitor in the AI chip market, which NVIDIA dominates with over 90% market share.

The performance of Google’s TPU is already proven. Gemini 3.0, recently trained and served entirely on Google’s TPUs without any NVIDIA GPUs, topped the LM Arena leaderboard with a score of 1501, meaning it outperformed models trained on NVIDIA GPUs such as the H100 and H200. The market reacted immediately: Alphabet’s stock surged 6.28% in a single day on the 24th, closing at $318.47.

It is also noteworthy that Meta is discussing a multi-billion-dollar deal to bring Google’s TPUs into its data centers starting in 2027. Meta has relied heavily on NVIDIA GPUs, so the fact that it is even considering a partial switch to TPUs suggests the chips hold real appeal in cost efficiency and performance. Personally, I expect this move to promote healthy competition in the AI chip market.

For NVIDIA, this situation must be quite burdensome. They have effectively monopolized the AI chip market with GPUs like the H100 and H200, and now a formidable competitor like Google is entering the market. Moreover, Google can offer integrated solutions as it possesses not only hardware but also a software ecosystem. Machine learning frameworks like TensorFlow and JAX, developed by Google, are optimized for TPU, which is a significant advantage.
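As a minimal sketch of what that framework-level advantage looks like in practice: JAX programs are compiled through XLA and target whichever accelerator backend is available, so the same code runs on TPU, GPU, or CPU without any CUDA-specific calls. This is a generic illustration, not Google’s or anyone’s production workload.

```python
# Minimal JAX sketch: the same code targets TPU, GPU, or CPU depending on
# which backend is installed, with no device-specific API calls.
import jax
import jax.numpy as jnp

print("Backend:", jax.default_backend())   # "tpu", "gpu", or "cpu"
print("Devices:", jax.devices())

@jax.jit                                   # compiled via XLA for the local backend
def matmul(a, b):
    return jnp.matmul(a, b)

key = jax.random.PRNGKey(0)
a = jax.random.normal(key, (1024, 1024))
b = jax.random.normal(key, (1024, 1024))
print(matmul(a, b).shape)                  # (1024, 1024), computed on the accelerator
```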

However, NVIDIA is unlikely to stand still. The strength of the CUDA ecosystem and its developer-friendly environment remain significant advantages. Most AI researchers and developers are familiar with CUDA, and existing code is written against it, so migrating to TPUs costs time and money. Ultimately, the competition will likely come down to a combination of performance, cost efficiency, and developer convenience.
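The lock-in argument is easier to see in code. A typical PyTorch script selects devices by CUDA name, as in the generic sketch below (not any specific company’s code); moving such a codebase to TPUs usually means adopting something like PyTorch/XLA and revisiting device handling throughout, rather than changing a single line.

```python
# Typical CUDA-centric device selection found throughout existing codebases.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

x = torch.randn(1024, 1024, device=device)
y = x @ x                                  # runs on the NVIDIA GPU when one is available
print(y.shape, y.device)
```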

The Dilemma of the Korean Semiconductor Industry – Regulatory Shackles vs. Global Competition

In this fiercely competitive global AI landscape, bad news has emerged for the Korean semiconductor industry. The ruling and opposition parties have agreed to drop the industry’s core demand, an exemption from the 52-hour workweek, from the semiconductor special law. The industry is voicing concern that this will inevitably hurt R&D competitiveness.

According to the industry, semiconductor R&D by its nature involves experiments that run around the clock and real-time collaboration with overseas partners, work patterns the 52-hour cap makes difficult. Especially with competitors like the U.S. and China going all-in on AI semiconductor development, concerns are growing that Korea could fall behind technologically if it is held back by regulation.

Memory semiconductor companies like Samsung Electronics and SK Hynix may endure due to their existing technology and market position, but the situation could be different in new technology fields like AI semiconductors. If Korean companies are the only ones constrained by the 52-hour workweek while companies like NVIDIA and Google are dedicated to development 24/7, the likelihood of falling behind in competition increases.

Of course, worker protection is also an important value. However, a flexible approach that takes the semiconductor industry’s particular circumstances into account may be necessary; for example, exemptions could be allowed for R&D while compensation and rest guarantees are strengthened. There is also talk of attaching a supplementary opinion to the Democratic Party’s original bill stating that the National Assembly will work to reflect the realities of semiconductor R&D in working-hour rules, which leaves room for future improvement.

In fact, regulatory questions are not unique to Korea. The European Union constrains AI development through its AI Act, and discussions on AI safety regulation are active in the U.S. as well. In highly competitive technology fields, though, the key is finding the balance between regulation and innovation: too lenient and safety problems arise, too strict and competitiveness is lost.

Personally, I hope Korea does not miss the opportunity to expand the technology and manufacturing experience accumulated in memory semiconductors into the AI semiconductor field. The success of Google’s TPU shows how important an integrated approach combining software and hardware is. I wonder if Korean IT companies like Naver and Kakao could collaborate with semiconductor companies to develop a Korean-style AI chip.

Ultimately, taking these stories together, the most important takeaway is that global competition over power and semiconductors, the core infrastructure of the AI era, is intensifying. The U.S. is trying to solve the power problem with nuclear energy, and Google is challenging NVIDIA’s monopoly with its own chips. The strategy Korea adopts in this environment will likely determine its future AI competitiveness, and the balance between regulation and innovation, along with cooperation between government and the private sector, matters more than ever.

#Westinghouse #Alphabet #NVIDIA #Meta #SamsungElectronics #SKHynix #KoreaElectricPower


This article was written after reading a news article, with personal opinions and analysis added.

Disclaimer: This blog is not a news outlet, and the content is the author’s personal opinion. The responsibility for investment decisions lies with the investor, and no responsibility is taken for investment losses based on this article’s content.

