Broadcom's Road to $90 Billion Revenue: The Q1 Signal in AI's Noise
How Hock Tan's careful distinction between "partners" and "customers" reveals both caution and ambition in the race to power trillion-parameter AI models
In 2016, when Broadcom (then still Avago Technologies) closed its $37 billion acquisition of the original Broadcom Corporation, industry observers wondered what CEO Hock Tan's endgame might be. The semiconductor industry was in the midst of consolidation, with giants like Intel dominating the x86 CPU market and Qualcomm controlling much of the mobile space. Tan's acquisition strategy seemed focused on buying reliable cash flows from mature markets rather than pursuing cutting-edge innovation.
What wasn't clear then – but has become increasingly evident since – was that Tan was assembling a unique collection of semiconductor assets that would position Broadcom for an AI revolution few anticipated.
Fast forward to today, and Broadcom sits at a fascinating intersection of the AI revolution, with a radically different approach from both Nvidia's merchant model and the internal chip development efforts of hyperscalers like Google. The company's recent earnings calls reveal not just impressive financial performance – $4.1 billion in quarterly AI revenue, up 77% year-over-year – but a distinct philosophical approach to silicon in the AI era.
Beyond General Purpose
The triumph of the x86 architecture through the PC and server eras represented a victory for standardization – a single instruction set powering everything from laptops to data centers. Even early AI workloads ran on general-purpose GPUs, with Nvidia emerging as the dominant provider through its CUDA software ecosystem and increasingly specialized but still general-purpose accelerators.
Broadcom is betting on the pendulum swinging hard in the other direction, at least for the most demanding AI workloads.
This perspective, which Hock Tan laid out on the company's most recent earnings call, fundamentally challenges the conventional wisdom of the AI chip market. While Nvidia builds increasingly capable but still general-purpose GPUs that can run many different AI models, Broadcom is designing custom XPUs (accelerators) optimized for specific hyperscale customers and their particular frontier AI models.
The difference isn't just philosophical – it reflects a deeper understanding of how AI infrastructure scales. As models grow to trillions of parameters, the systems to train them require not just raw compute power but precisely optimized architectures that balance compute, memory, and networking in ways specific to each model architecture.
The Partner-Customer Distinction
Perhaps the most revealing aspect of Broadcom's recent communications is Tan's careful distinction between "partners" and "customers" in the AI space. The company currently has three hyperscale customers deploying its custom accelerators at scale, with four additional hyperscalers in "advanced development" for their own custom AI accelerators.
This distinction goes beyond mere terminology. It reveals Broadcom's commitment to depth over breadth in its strategy:
"We only provide very basic fundamental technology in semiconductors to enable these guys to use what we have and optimize it to their own particular models and algorithms that relate to those models. That's it. That's all we do."
It's a throwback to an earlier era of semiconductor design, when custom ASICs were common, but updated for the extreme scale and complexity of modern AI systems.
This approach creates deep competitive moats. Once a hyperscaler has co-developed a custom accelerator with Broadcom and integrated it into their AI infrastructure, switching costs become enormous. The relationship is no longer transactional but symbiotic – the hyperscaler's software and Broadcom's hardware evolve together over multi-year roadmaps.
The Networking Differentiator
The true genius of Broadcom's positioning becomes apparent when you consider the full stack of AI infrastructure.
As AI clusters grow larger, the networking fabric connecting accelerators becomes increasingly critical. A frontier model trained on a million-node cluster requires not just powerful individual nodes but a sophisticated networking architecture to minimize latency and maximize throughput during distributed training.
This is where Broadcom's long history in networking technology becomes a decisive advantage. As Hock Tan noted:
"We have doubled the radix capacity of the existing Tomahawk 5. And beyond this, to enable AI clusters to scale up on Ethernet towards one million XPUs, we have taped out our next-generation 100-terabit Tomahawk 6 switch, running 200G SerDes at 1.6-terabit bandwidth."
While competitors focus primarily on computing power, Broadcom understands that scaling AI to its full potential requires both custom computing and advanced networking. As clusters grow, the networking portion of AI infrastructure spending increases nonlinearly – from about 5-10% of AI content in today's clusters to potentially 15-20% or even up to 30% in million-node clusters.
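The nonlinear shift described above can be made concrete with a back-of-the-envelope sketch. The networking percentages come from the paragraph; the per-XPU cost and cluster sizes are hypothetical figures chosen purely for illustration.

```python
# Back-of-the-envelope: how networking's share of AI cluster spend
# grows with cluster size. The share percentages are from the article;
# the per-XPU cost and cluster sizes are assumed for illustration.
scenarios = [
    # (cluster size in XPUs, assumed networking share of AI content)
    (100_000, 0.075),   # today's clusters: ~5-10%, midpoint 7.5%
    (500_000, 0.175),   # larger clusters: ~15-20%, midpoint 17.5%
    (1_000_000, 0.30),  # million-XPU clusters: up to ~30%
]

COST_PER_XPU_USD = 10_000  # hypothetical all-in AI content cost per XPU

for xpus, net_share in scenarios:
    total = xpus * COST_PER_XPU_USD
    networking = total * net_share
    print(f"{xpus:>9,} XPUs: total ${total / 1e9:6.1f}B, "
          f"networking ${networking / 1e9:5.1f}B ({net_share:.0%})")
```

Even with a fixed per-XPU cost, the networking dollar spend grows much faster than cluster size because the share itself rises with scale, which is the dynamic the article attributes to million-node clusters.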
The Software Connection
The VMware acquisition, completed in late 2023, adds another dimension to Broadcom's AI strategy. While initially viewed as a financial engineering move typical of Tan's approach, the VMware business has been transformed into an AI enabler for enterprises.
By developing the "VMware Private AI Foundation" in collaboration with Nvidia, Broadcom is creating a path for enterprises to run AI workloads on-premises while virtualizing GPUs alongside traditional CPUs. This addresses growing concerns about data sovereignty and privacy in the AI era – concerns that are pushing more enterprises to consider keeping their most sensitive AI workloads in-house rather than in the public cloud.
The success of this integration has been remarkable, with VMware's operating margin reaching 70% (up from less than 30% pre-acquisition) and spending reduced from $2.4 billion per quarter to $1.2 billion. More importantly, it positions Broadcom to capture enterprise AI spending beyond its hyperscale focus.
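The margin arithmetic behind those figures is worth making explicit. The spend reduction ($2.4B to $1.2B per quarter) and the margin endpoints (under 30% to about 70%) are from the article; the quarterly revenue figures below are assumptions chosen only to be consistent with those reported margins.

```python
# Operating-margin arithmetic for the VMware turnaround. Spend figures
# are from the article; revenue figures are assumptions picked to be
# consistent with the reported sub-30% and ~70% margins.
def operating_margin(revenue_b: float, spend_b: float) -> float:
    """Operating margin as a fraction of revenue (inputs in $B/quarter)."""
    return (revenue_b - spend_b) / revenue_b

pre = operating_margin(3.4, 2.4)   # assumed pre-acquisition quarter
post = operating_margin(4.2, 1.2)  # assumed post-integration quarter
print(f"pre-acquisition margin: {pre:.0%}, post-integration: {post:.0%}")
```

Halving quarterly spend while revenue held or grew modestly is sufficient on its own to move the margin from roughly 30% to roughly 70%; no heroic revenue assumptions are needed.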
The Scale-Out Future
Broadcom's long-term vision revolves around the belief that hyperscalers will need to deploy clusters of 1 million AI accelerators by 2027. This isn't just aggressive forecasting – it reflects a deep understanding of where AI is headed.
Training ever-larger frontier models requires exponential increases in computing power. The transition from scale-up architectures (where a single server contains multiple accelerators) to scale-out architectures (where thousands or millions of accelerators work together across a network) creates exactly the intersection of computing and networking where Broadcom excels.
This vision translates to a serviceable addressable market of $60-90 billion from just three hyperscale customers by fiscal 2027. The conservative approach of not counting potential revenue from the four additional hyperscale partners in development gives Broadcom significant upside potential beyond even these ambitious projections.
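To see how steep the implied ramp is, compare the current run rate to that SAM. The $4.1B quarterly figure and the $60-90B fiscal-2027 SAM are from the article; the 2.5-year horizon and the assumption of full SAM capture are illustrative simplifications, not forecasts.

```python
# Implied growth from the reported $4.1B quarterly AI revenue
# (~$16.4B annualized) to the $60-90B fiscal-2027 SAM. The 2.5-year
# horizon and full-capture assumption are illustrative only.
quarterly_ai_revenue = 4.1e9
run_rate = quarterly_ai_revenue * 4   # ~$16.4B annualized
years_to_fy2027 = 2.5                 # assumed horizon

for sam in (60e9, 90e9):
    # Compound annual growth rate needed to grow run_rate into sam
    cagr = (sam / run_rate) ** (1 / years_to_fy2027) - 1
    print(f"To fully capture a ${sam / 1e9:.0f}B SAM: "
          f"~{cagr:.0%} annual growth required")
```

Even the low end of the range implies annual growth well above 60%, which helps explain why management keeps stressing the multi-year nature of the ramp rather than promising a smooth quarterly cadence.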
The Execution Challenge
Custom silicon development is inherently complex and resource-intensive. Supporting multiple custom accelerator designs simultaneously for different customers stretches engineering resources and creates potential for delays or technical missteps.
The company also faces concentration risk. With just three hyperscalers driving the bulk of AI revenue today, any change in deployment schedules or strategic direction by these customers could significantly impact Broadcom's growth trajectory.
There's also the question of whether the pendulum might eventually swing back toward standardization. If Nvidia or other competitors can develop sufficiently flexible architectures that approximate the performance of custom designs without the development overhead, Broadcom's approach could lose some of its appeal.
The Multi-Year Journey
What makes Broadcom's strategy particularly compelling is its alignment with the true pace of AI development. While public discourse around AI tends to focus on week-to-week or month-to-month developments, the infrastructure supporting these advances operates on multi-year development cycles.
As Hock Tan emphasized: "For each of them, this represents a multi-year, not a quarter-to-quarter journey."
This longer-term perspective allows Broadcom to make strategic investments in next-generation technologies like 2nm AI accelerators and 100 terabit networking switches, positioning the company for sustained leadership even as AI architectures evolve.
The subtly more measured tone in Broadcom's recent communications suggests the company is transitioning from the vision phase to the execution phase of its AI strategy. After laying out ambitious targets for 2027, management now emphasizes quarterly performance and careful distinction between existing customers and potential future revenue sources.
This tonal shift doesn't indicate reduced confidence in the AI opportunity, but rather a recognition that the path from today's $4.1 billion quarterly AI revenue to a $60-90 billion annual opportunity will include both exponential growth phases and periodic plateaus as customers digest deployments.
For companies building AI infrastructure, for investors trying to separate hype from sustainable advantage, and for anyone trying to understand how this technological revolution will unfold, Broadcom's approach offers a fascinating counterpoint to conventional wisdom. The company is betting that the future of AI doesn't just belong to general-purpose computing but to deeply customized, tightly integrated systems designed for specific model architectures.
If they're right, Broadcom's seemingly disparate collection of networking, optical, and custom silicon assets will prove to have been a prescient assembly of precisely the components needed for AI's next phase.
Whether this approach ultimately dominates the AI era remains to be seen, but Broadcom's journey from industry consolidator to AI enabler represents one of the most remarkable strategic pivots in semiconductor history.
Key Charts
Revenue by segment
Revenue by product and subscription
Valuation – Forward P/E
Valuation – Forward EV/EBIT
Disclaimer: The views in the post are for informational purposes only and should not be considered as investment advice. Please contact your RM or Kristal.AI for investment advice.
By
Kristal Investment Desk
March 20, 2025