The Great Decoupling of 2025
Nvidia is no longer the undisputed king of the semiconductor hill. On Tuesday, December 9, 2025, the market is digesting a major geopolitical pivot: President Trump has signed off on exports of high-performance H200 chips to China, according to Bloomberg reports this morning. The news initially looks like a lifeline for Nvidia, but the 25 percent export surcharge and the maturation of domestic Chinese alternatives such as Huawei's Ascend 910C have turned it into a defensive play. Investors are looking elsewhere. The smart money has moved from the "brain" of AI to the "plumbing" that holds it together, and Broadcom (AVGO) now commands the premiums that Nvidia once claimed exclusively. The transition from AI training to AI inference has shifted the power balance from general-purpose GPUs to custom silicon.
The F.L. Putnam Tilt Toward Inference
Ellen Hazen, Chief Market Strategist at F.L. Putnam, signaled this transition months ago. Her thesis is simple: training an AI model is a one-time capital expense, but running that model, known as inference, is a recurring operational expense. Broadcom is the undisputed leader in the application-specific integrated circuits (ASICs) that handle inference with far greater energy efficiency than Nvidia's Blackwell architecture. While Nvidia has struggled with the law of large numbers in late 2025, Broadcom has leveraged its VMware integration to create a full-stack AI private cloud. This is why Broadcom has outperformed Nvidia by nearly 15 percentage points year to date as of this morning's opening bell.
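The capex-versus-opex asymmetry at the heart of that thesis can be made concrete with a back-of-envelope cost model. All of the dollar figures below are hypothetical assumptions chosen for illustration, not reported numbers; the 40 percent serving-cost reduction mirrors the power-efficiency claim discussed later in this piece.

```python
# Illustrative sketch of the training-vs-inference cost asymmetry.
# Every figure here is a hypothetical assumption, not a reported number.

def cumulative_cost(train_capex: float, inference_opex_per_month: float,
                    months: int) -> float:
    """One-time training capex plus recurring inference opex."""
    return train_capex + inference_opex_per_month * months

# Hypothetical model: $100M to train, $15M/month to serve queries on GPUs.
TRAIN_CAPEX = 100e6
OPEX_GPU = 15e6           # serving on general-purpose GPUs
OPEX_ASIC = 15e6 * 0.6    # assume custom ASICs cut serving cost by 40%

for months in (12, 24, 36):
    gpu = cumulative_cost(TRAIN_CAPEX, OPEX_GPU, months)
    asic = cumulative_cost(TRAIN_CAPEX, OPEX_ASIC, months)
    print(f"{months} mo: GPU ${gpu / 1e6:.0f}M vs ASIC ${asic / 1e6:.0f}M")
```

The point of the sketch is that the training bill is paid once, while the serving bill compounds every month the model is in production, so even a modest per-query efficiency edge dominates total cost over a multi-year horizon.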
Custom Silicon and the $21 Billion Anthropic Order
The technical mechanism driving Broadcom's dominance is the shift toward proprietary hardware. Hyperscalers like Google and Meta are no longer content to pay the "Nvidia tax" for off-the-shelf chips. Broadcom's recent fourth-quarter earnings report, as filed with the SEC, confirms that AI revenue soared 74 percent to $6.5 billion. A significant driver was the confirmation of a massive $21 billion order from Anthropic for the "Ironwood" TPU series. These are not general-purpose chips. They are custom designs tailored specifically to the Claude 3.5 and Claude 4 models, delivering a performance-per-watt advantage that Nvidia's H200 and Blackwell chips simply cannot match.
Breaking Down the Hardware Economics
Why does the enterprise prefer Broadcom right now? It comes down to networking and latency. As data centers scale to 100,000-GPU clusters, the bottleneck is no longer the compute power of a single chip; it is the speed at which data moves between chips. Broadcom's Tomahawk 5 and upcoming Tomahawk 6 Ethernet switches are the backbone of the modern AI data center. While Nvidia pushes its proprietary InfiniBand technology, the industry is moving toward open Ethernet, and Broadcom owns the Ethernet switching market. This creates a moat arguably harder to cross than Nvidia's software moat, CUDA. Once a data center is built on Broadcom's switching fabric, migrating to a competitor is a billion-dollar headache.
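To see why switch radix matters at 100,000-GPU scale, consider the standard rule of thumb for non-blocking folded-Clos (fat-tree) fabrics: a k-port switch supports roughly k²/2 hosts in two tiers and k³/4 in three. The port counts below assume 800GbE lanes (64 ports on Tomahawk 5's 51.2 Tb/s, 128 on Tomahawk 6's 102.4 Tb/s, per Broadcom's public specifications); treat this as a rough sizing sketch, not a network design guide.

```python
# Back-of-envelope sizing of a non-blocking folded-Clos (fat-tree) fabric.
# Rule of thumb: a k-port switch yields k^2/2 hosts in two tiers, k^3/4 in
# three. Port counts assume 800GbE and are approximations from public specs.

def max_hosts(radix: int, tiers: int) -> int:
    """Maximum endpoints in a non-blocking fat-tree of the given depth."""
    if tiers == 2:
        return radix ** 2 // 2
    if tiers == 3:
        return radix ** 3 // 4
    raise ValueError("sketch covers 2- or 3-tier fabrics only")

for name, radix in (("Tomahawk 5 (64x800G)", 64), ("Tomahawk 6 (128x800G)", 128)):
    print(f"{name}: 3-tier max hosts = {max_hosts(radix, 3):,}")
```

At radix 64, a three-tier fabric tops out at 65,536 endpoints, which is why a 100,000-GPU cluster pushes operators toward higher-radix silicon like Tomahawk 6 rather than deeper, higher-latency topologies.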
Investors should look at the fundamental metrics of these two giants. Nvidia is trading at a forward P/E that assumes perpetual triple-digit growth, a feat that is becoming impossible as the law of large numbers takes hold. Broadcom, conversely, offers a diversified revenue stream from enterprise software and traditional networking, making it a safer harbor amid the volatility of the current partial government shutdown.
Comparing the 2025 Semiconductor Leaders
The following data reflects the market close of yesterday, December 8, 2025. It highlights the divergence in valuation and efficiency among the top three AI hardware providers.
| Metric | Broadcom (AVGO) | Nvidia (NVDA) | Marvell (MRVL) |
|---|---|---|---|
| YTD Performance | +47.2% | +32.1% | +28.5% |
| AI Revenue Growth (YoY) | 74% | 62% | 41% |
| Forward P/E Ratio | 31.4 | 44.8 | 38.2 |
| Dividend Yield | 1.45% | 0.03% | 0.35% |
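Two quick derived metrics put the table's valuation gap in perspective: the forward earnings yield (the inverse of forward P/E) and a crude PEG-style ratio dividing forward P/E by AI revenue growth. This is an illustration of how the metrics relate, using only the figures from the table above; it is not investment analysis.

```python
# Derived valuation metrics from the comparison table above.
# Illustrative arithmetic only, not investment advice.

stocks = {
    "AVGO": {"fwd_pe": 31.4, "ai_growth_pct": 74},
    "NVDA": {"fwd_pe": 44.8, "ai_growth_pct": 62},
    "MRVL": {"fwd_pe": 38.2, "ai_growth_pct": 41},
}

for ticker, m in stocks.items():
    earnings_yield = 100 / m["fwd_pe"]           # percent of price earned back
    peg_like = m["fwd_pe"] / m["ai_growth_pct"]  # lower = cheaper per unit growth
    print(f"{ticker}: earnings yield {earnings_yield:.1f}%, "
          f"P/E-to-growth {peg_like:.2f}")
```

On this crude measure, Broadcom pairs the lowest multiple with the highest AI growth rate, which is the quantitative core of the "valuation gap" argument made throughout this piece.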
The Shift from Training to Inference
During the 2023 and 2024 hype cycles, the market cared about training: large language models required massive clusters of Nvidia H100s. But in the late-2025 economy, the focus has shifted to monetization, with companies like Microsoft and Salesforce now optimizing their cost per query. This is where Broadcom's custom XPUs shine. According to recent white papers reported on by Reuters, custom ASICs can run inference at 40 percent less power than general-purpose GPUs. In a world where energy constraints are the primary limit on AI growth, the more power-efficient chip wins the long game.
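The dollar impact of that 40 percent power claim is easy to estimate. The sketch below assumes a hypothetical 50 MW inference fleet and a wholesale electricity price of $0.08/kWh; both numbers are illustrative assumptions, not figures from the article or any filing.

```python
# Energy-cost arithmetic behind the 40%-less-power claim.
# Fleet size (50 MW) and electricity price ($0.08/kWh) are hypothetical.

HOURS_PER_YEAR = 24 * 365  # 8,760

def annual_power_cost(load_mw: float, usd_per_kwh: float) -> float:
    """Yearly electricity bill for a constant load."""
    return load_mw * 1000 * HOURS_PER_YEAR * usd_per_kwh

gpu_cost = annual_power_cost(50, 0.08)          # general-purpose GPU fleet
asic_cost = annual_power_cost(50 * 0.6, 0.08)   # same workload, 40% less power

print(f"GPU fleet:  ${gpu_cost / 1e6:.1f}M/yr")
print(f"ASIC fleet: ${asic_cost / 1e6:.1f}M/yr")
print(f"Savings:    ${(gpu_cost - asic_cost) / 1e6:.1f}M/yr")
```

Under these assumptions the efficiency gap is worth roughly $14 million per year on power alone, before counting the cooling and grid-interconnect headroom it frees up.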
Furthermore, Broadcom's acquisition of VMware has proven to be a masterstroke. By integrating AI networking at the virtualization layer, Broadcom has created a proprietary ecosystem that rivals Apple's in its stickiness. Enterprise customers are no longer just buying a chip; they are buying a software-defined data center optimized for the next generation of generative AI agents. This is the "tilt" F.L. Putnam identified: a move from the volatile, cyclical world of hardware sales to the stable, high-margin world of integrated infrastructure.
The next major milestone to watch will be the January 2026 CES announcements, where we expect the first hardware specifications for the Google TPU v6 and the Meta MTIA v3 to be leaked or officially announced. If Broadcom is confirmed as the lead designer for both, the current valuation gap between Broadcom and Nvidia will likely close entirely. Watch the 10-year Treasury yield closely; if rates continue to stabilize, the capital expenditure cycle for custom silicon will accelerate even further into the first quarter of the new year.