Beyond Nvidia: Broadcom, ASML & the Custom AI Chip Race
For the last two years, the AI chip story has sounded like one-name karaoke: Nvidia, Nvidia, Nvidia. In fact, the company still deserves top billing. Nvidia reported $68.1 billion in revenue for the fourth quarter of fiscal 2026, including $62.3 billion from its data center business. For the full year, revenue reached $215.9 billion.
But by March 2026, the AI chip market looks much wider. Broadcom’s AI revenue reached $8.4 billion in its latest quarter, ASML posted record 2025 sales of €32.7 billion, and Meta has outlined four new generations of in-house AI chips. Nvidia remains central, but the stage is now crowded.
Modern AI infrastructure needs more than one type of chip. Training giant models is one job. Running them millions of times a day for search, chatbots, code tools, and recommendation systems is another. That second job, AI inference, is pushing companies to look for lower cost, lower power, and more specialized hardware.
Google says its TPUs are custom-designed AI accelerators that already power Gemini and major Google services. AWS says Trainium2 can deliver 30–40% better price-performance than GPU-based P5e and P5en instances for generative AI. Microsoft says Maia 200 is built to improve the economics of token generation. The AI chip race now looks less like a sprint and more like a very expensive traffic jam.
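To make the "30–40% better price-performance" style of claim concrete, here is a minimal sketch of the underlying arithmetic: tokens generated per dollar spent. All figures below are hypothetical placeholders chosen for illustration, not published pricing or benchmark numbers for Trainium2 or any GPU instance.

```python
# Illustrative only: what a "30-40% better price-performance" claim
# means in practice. Every number here is a hypothetical placeholder.

def price_performance(tokens_per_second: float, dollars_per_hour: float) -> float:
    """Tokens generated per dollar spent (higher is better)."""
    return tokens_per_second * 3600 / dollars_per_hour

# Hypothetical GPU-based instance vs. hypothetical custom-silicon instance.
gpu = price_performance(tokens_per_second=10_000, dollars_per_hour=40.0)
custom = price_performance(tokens_per_second=9_000, dollars_per_hour=26.0)

improvement = (custom - gpu) / gpu * 100
print(f"GPU:    {gpu:,.0f} tokens per dollar")
print(f"Custom: {custom:,.0f} tokens per dollar")
print(f"Improvement: {improvement:.0f}%")  # ~38% with these made-up inputs
```

Note that the custom chip in this sketch is actually slower per second; it wins on price-performance because it is cheaper per hour, which is exactly the trade-off inference buyers care about.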
Why Nvidia Still Sets The Pace
Nvidia still leads because it sells more than hardware. Its CUDA platform gives developers a software layer for using GPUs through common languages, libraries, and frameworks. Companies do not want to rebuild their software stack every time they change hardware. Nvidia also seems to understand where the market is going.
In May 2025, it unveiled NVLink Fusion, a semi-custom approach that lets partners build AI infrastructure around Nvidia’s interconnect technology. That suggests Nvidia wants to stay at the center even when customers start mixing in custom silicon. A fairly sensible strategy: if you cannot sell every chip, try to sell the roads between them.
In December 2025, Reuters reported that Google was working on “TorchTPU” with help from Meta to make TPUs feel more natural to PyTorch users and reduce switching costs away from Nvidia’s CUDA ecosystem. Competing chips can be fast, efficient, and clever, but if developers still need extra work to use them, Nvidia keeps a major advantage. In AI, hardware wins headlines, while software quietly collects rent.
Why Custom Silicon Is Suddenly Everywhere
Google says Ironwood is its first TPU designed specifically for inference and that it can scale to 9,216 chips in a superpod. Meta recently said that it plans to develop and deploy four new MTIA generations within the next two years, with an inference-first focus and a faster rhythm than normal chip cycles.
AWS is pushing Trainium2, and Microsoft is pushing Maia. Both are signs that cloud giants want more control over cost, supply, and performance for high-volume AI workloads. Training made Nvidia rich. Inference is inviting new contestants onto the field.
These chips are usually designed for narrower workloads than general-purpose GPUs, but that is exactly the point. Recommendation engines, ranking systems, and large-scale inference have patterns that can be optimized.
A hyperscaler with enough scale can justify building a chip for those patterns. A smaller company usually cannot. Which brings us to Broadcom, a company that has become very important precisely because many firms want custom silicon without becoming full semiconductor manufacturers overnight.
Broadcom Becomes The Builder Behind The Builders
Broadcom may be the clearest sign that the AI chip race has expanded beyond Nvidia. In its first quarter of fiscal 2026, Broadcom said AI revenue reached $8.4 billion, up 106% year over year, driven by custom AI accelerators and AI networking. Its filings show that this business is broader than a single chip: it includes XPUs, Ethernet switching and routing silicon, NICs, optical parts, and even racks and systems. The future of “in-house” silicon, it turns out, involves a surprising amount of outside help.
The customer list is where things get interesting. OpenAI and Broadcom announced a collaboration to deploy 10 gigawatts of OpenAI-designed custom AI accelerators. Reuters then reported that Broadcom sees a path to more than $100 billion in AI chip revenue by 2027, driven by strong demand for custom-designed AI chips and networking.
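A 10-gigawatt figure is easier to grasp as a chip count. Here is a back-of-the-envelope sketch; the per-accelerator power draw and facility overhead below are assumptions for illustration, not disclosed figures for OpenAI's or Broadcom's hardware.

```python
# Back-of-the-envelope scale of a 10-gigawatt deployment.
# The per-chip power and overhead factor are hypothetical assumptions.

total_power_watts = 10e9          # the announced 10 GW commitment
watts_per_accelerator = 1_500     # hypothetical all-in draw per accelerator
facility_overhead = 1.3           # hypothetical PUE-style datacenter overhead

accelerators = total_power_watts / (watts_per_accelerator * facility_overhead)
print(f"~{accelerators:,.0f} accelerators")  # roughly 5 million with these assumptions
```

Even with generous assumptions, the answer lands in the millions of chips, which is why a single customer agreement of this size moves a supplier's revenue outlook.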
There is a catch, because there is always a catch. Reuters reported in December 2025 that Broadcom warned a higher mix of lower-margin custom AI processors was putting pressure on profitability. Custom silicon can be great for customers who want lower long-term costs or tighter optimization. It is not always equally great for the supplier’s margins.
ASML Sells The Tools Behind The AI Boom
Then there is ASML, the Dutch company that keeps charging admission to the future. ASML makes the world’s only commercial EUV lithography tools, which are critical for advanced chip production.
ASML reported record 2025 net sales of €32.7 billion, and in the fourth quarter, it recognized revenue on two High-NA systems. On its own product page, ASML says the EXE High-NA EUV platform is meant for high-volume manufacturing in 2025–2026, starting at the 2 nm logic node and then moving into memory.
Any plan for faster GPUs or custom AI chips eventually hits manufacturing limits. Strong AI demand helped lift sales of advanced logic and memory tools in 2025. ASML’s High-NA machines have already processed more than 500,000 wafers and are close to mass production, although full rollout may still take a few years. The company is also expanding into advanced packaging and other new chipmaking tools.
AI hardware now depends on more than smaller transistors alone. Memory bandwidth, heat, power, packaging, and yield all matter too. The AI chip race is turning into a systems race, and ASML is well placed for that shift.
Why This Does Not End Nvidia
All of this does not mean Nvidia is about to be pushed aside.
Meta’s current roadmap says it will continue buying Nvidia and AMD chips even while developing MTIA. Google is trying to reduce the software friction around TPUs, which suggests the friction is still real. OpenAI is working with Broadcom on custom accelerators while also using chips from established suppliers. In practice, custom silicon works best when workloads are stable, scale is huge, and the owner can spend heavily for years. That is a real shift in AI infrastructure, but it is still a narrower club than the headlines sometimes imply.
Final Thoughts
So who wins the next phase of the AI chip race? Nvidia still looks like the platform leader. Broadcom is becoming the preferred builder for companies that want custom AI accelerators without turning themselves into full semiconductor houses. ASML remains the essential supplier behind the advanced-chip roadmap. The hyperscalers will keep pushing custom silicon for AI because their cloud bills are now large enough to inspire heroic levels of engineering.
The result is a more layered AI chip market: one part Nvidia, one part Broadcom, one part ASML, and one part custom silicon designed for very specific workloads. Slightly messier than the old story. Also, much more interesting.