The Patent Landscape: Who Is Filing What, and Why It Matters
The AI data center networking sector is experiencing explosive patent activity, with 2025 alone accounting for 43% of all AI networking patents filed between 2018 and 2026 — a concentration that reflects the urgency of the hyperscaler infrastructure buildout. Yet the raw filing numbers conceal a critical strategic divergence: Intel and NVIDIA explicitly brand their interconnect patents with “AI” keywords, while Broadcom and Marvell focus on underlying infrastructure technologies — Ethernet, SerDes, switching ASICs — that enable AI networking without AI-specific branding.
Among the 2022–2025 AI networking patent cohort, Intel leads with 47 patents (31%), followed by NVIDIA with 16 patents (11%). Broadcom’s 74 patents in Ethernet switching and SerDes and Marvell’s 80 network switching patents appear under infrastructure-specific queries rather than “AI chip” searches — a reflection of strategic positioning, not lower output. As noted by WIPO, patent classification choices signal how companies want their technology perceived by both standards bodies and potential licensees.
An important caveat: 2025–2026 filing counts are materially underestimated due to the standard 18-month patent publication delay. Actual competitive intensity in the most recent period is likely higher than the visible data suggests.
Broadcom’s Technology Roadmap: Bandwidth Doubling and CXL-over-Ethernet
Broadcom’s core strategy is to maintain Ethernet switching ASIC dominance by doubling bandwidth every two years and extending its lead into composable AI infrastructure through CXL-over-Ethernet innovation. The Tomahawk series has executed this cadence precisely: Tomahawk 3 at 12.8Tbps (2018), Tomahawk 4 at 25.6Tbps (2020), Tomahawk 5 at 51.2Tbps (March 2023), and Tomahawk 6 at 102.4Tbps (June 2025) — the world’s first 102.4Tbps switch, delivering 64×1.6TbE ports and enabling million-GPU clusters.
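The cadence is simple to sanity-check. A minimal sketch using only the launch figures quoted above (expressed in Gb/s):

```python
# Sanity-check of the Tomahawk cadence quoted above: switching bandwidth
# doubles every generation, and Tomahawk 6's capacity equals 64 x 1.6TbE.
# All figures come straight from the launch specs named in the text (Gb/s).
generations = {
    "Tomahawk 3 (2018)": 12_800,
    "Tomahawk 4 (2020)": 25_600,
    "Tomahawk 5 (2023)": 51_200,
    "Tomahawk 6 (2025)": 102_400,
}
caps = list(generations.values())
assert all(later == 2 * earlier for earlier, later in zip(caps, caps[1:]))
assert 64 * 1_600 == caps[-1]  # 64 ports x 1.6Tb/s -> 102.4Tbps
```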
Broadcom’s patent US12360937B2 (July 2025) encapsulates CXL protocol within Ethernet frames, preserving cache coherency semantics over standard Ethernet PHYs. This enables memory pooling and resource disaggregation across Ethernet fabrics without requiring dedicated CXL switches, maintaining sub-100ns memory access latency at rack scale.
The CXL-over-Ethernet patent represents a strategic wedge into composable infrastructure. By enabling CXL.mem, CXL.cache, and CXL.io over existing Ethernet PHYs, Broadcom allows hyperscalers to dynamically allocate memory, compute, and accelerator resources across Tomahawk 5/6 switching fabrics — without a forklift upgrade to dedicated CXL switching hardware. This is a significant architectural advantage given the installed base of Broadcom Ethernet switches in hyperscaler data centers.
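The framing idea itself is simple enough to sketch. Below is an illustration only: the EtherType value and the fixed 64-byte flit size are assumptions for demonstration, not details taken from US12360937B2.

```python
import struct

# Illustrative CXL-over-Ethernet framing: a CXL flit carried as the payload
# of a standard Ethernet II frame, so it can traverse ordinary Ethernet PHYs.
# The EtherType and the 64-byte flit size are assumptions for this sketch,
# not values from the patent.
CXL_ETHERTYPE = 0x88B5  # IEEE 802 "local experimental" EtherType, a placeholder
FLIT_BYTES = 64

def encapsulate_cxl_flit(dst_mac: bytes, src_mac: bytes, flit: bytes) -> bytes:
    """Wrap one fixed-size CXL flit in an Ethernet II frame."""
    assert len(dst_mac) == len(src_mac) == 6 and len(flit) == FLIT_BYTES
    header = struct.pack("!6s6sH", dst_mac, src_mac, CXL_ETHERTYPE)
    return header + flit

frame = encapsulate_cxl_flit(b"\x01" * 6, b"\x02" * 6, b"\x00" * FLIT_BYTES)
assert len(frame) == 14 + FLIT_BYTES        # 14-byte header + flit payload
assert frame[12:14] == bytes([0x88, 0xB5])  # EtherType field
```

The practical point of the sketch is that nothing below the flit requires CXL-aware hardware in the fabric: a standard Ethernet switch forwards the frame on its MAC header alone.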
“2025 represents 43% of all AI networking patents filed from 2018 to 2026 — and Broadcom’s Tomahawk 6 is the world’s first 102.4Tbps switch, enabling million-GPU clusters.”
Broadcom’s SerDes leadership reinforces this position. Patent US12401346B2 (August 2025) describes a DSP-based SerDes architecture supporting 50G/100G/200G/400G/800G PAM4 signaling with advanced equalization for long-reach optical links of 500 metres or more. This multi-rate capability is what allows Tomahawk 6 to deliver 64×1.6TbE ports with competitive power efficiency — a prerequisite for the 204.8Tbps switching capacity targeted by Broadcom’s projected Tomahawk 7 in 2027.
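The PAM4 signaling behind these rates is worth making concrete: four amplitude levels carry two bits per symbol, so per-lane bit rate doubles without raising the symbol rate. A minimal Gray-coded mapper, with equalisation, FEC, and the rest of the DSP chain omitted:

```python
# Minimal PAM4 mapping sketch: two bits per symbol across four amplitude
# levels. The Gray-coded level map below is the common convention, so
# adjacent levels differ by only one bit (limiting errors per symbol slip).
PAM4_LEVELS = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}

def pam4_encode(bits):
    """Map an even-length bit sequence to PAM4 amplitude levels."""
    assert len(bits) % 2 == 0
    return [PAM4_LEVELS[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

symbols = pam4_encode([1, 0, 0, 1, 1, 1, 0, 0])
assert symbols == [3, -1, 1, -3]
# Two bits per symbol: a 200Gb/s lane needs only a 100GBaud symbol rate.
assert len(symbols) == 8 // 2
```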
Broadcom’s custom AI ASIC business adds a second revenue pillar. Serving three hyperscalers — Google (TPU), Meta (MTIA), and ByteDance — this segment generated an estimated $12 billion in revenue in 2025. Custom AI ASICs command approximately 60% gross margins compared to roughly 40% for merchant switch ASICs, making this a structurally attractive business even at lower volumes than merchant silicon.
Explore Broadcom and Marvell’s full patent portfolios with AI-powered analysis in PatSnap Eureka.
Marvell’s Challenger Strategy: Custom ASICs, UALink, and NVLink Fusion
Marvell’s answer to Broadcom’s switching dominance is a three-pronged strategy: acquire switching capability through Innovium, win custom AI ASIC design-ins at hyperscalers, and simultaneously partner with both NVIDIA and NVIDIA’s rivals to become indispensable regardless of which interconnect standard prevails. The $1.1 billion acquisition of Innovium in August 2021 delivered the Teralynx switch ASIC portfolio — a direct competitor to Tomahawk — with Teralynx 9 targeting 51.2Tbps parity with Tomahawk 5 and future generations roadmapped to 102.4Tbps+.
Marvell’s Teralynx differentiation rests on three claims versus Broadcom: enhanced P4-based programmability for custom packet processing, advanced in-band network telemetry (INT) for AI workload optimisation, and claimed 20%+ lower power at equivalent bandwidth. These are meaningful advantages for hyperscalers building custom AI training fabrics where per-watt efficiency directly impacts total cost of ownership.
In May 2025, Marvell announced an NVLink Fusion partnership with NVIDIA enabling hyperscalers to build NVIDIA-compatible accelerators with NVLink PHY and switch logic integrated into custom silicon. Just one month later, in June 2025, Marvell announced a UALink scale-up solution for the open multi-vendor GPU interconnect consortium backed by AMD, Google, Intel, Meta, and Microsoft. Marvell is the only major silicon vendor positioned on both sides of the interconnect standards war.
On the custom AI ASIC front, Marvell’s platform-based design methodology — using 5nm/3nm process technology via TSMC, chiplet architecture, 112G/224G SerDes, and HBM3/HBM3E memory controllers — serves Amazon AWS (Trainium/Inferentia) and Microsoft Azure (Maia AI accelerator). Marvell’s estimated $1.5 billion in custom AI ASIC revenue in 2025 is a fraction of Broadcom’s $12 billion, but its 60% year-over-year growth rate from a smaller base indicates accelerating momentum.
Marvell’s 80 network switching patents include notable filings on PCIe/CXL integration: US11386027B2 integrates PCIe endpoint logic directly into an Ethernet switch for low-latency data transfers, while US11005778B1 enables lossless Ethernet via priority flow control — a prerequisite for reliable AI training all-reduce operations. The automotive Ethernet expertise embedded in patents like US11479263B2 also positions Marvell for autonomous driving AI accelerator markets that Broadcom does not currently address.
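The link between lossless Ethernet and all-reduce can be quantified. In a standard ring all-reduce, each of N GPUs transfers roughly 2*(N-1)/N times the gradient size per step, so a single dropped packet stalls every participant in the ring. A back-of-envelope traffic model:

```python
# Why lossless fabrics matter for all-reduce: a ring all-reduce moves
# about twice the gradient size per GPU per step, and the collective
# cannot complete until every hop succeeds.
def ring_allreduce_bytes(num_gpus: int, gradient_bytes: int) -> float:
    """Bytes each GPU sends (and receives) in one ring all-reduce."""
    return 2 * (num_gpus - 1) / num_gpus * gradient_bytes

# Example: 256 GPUs synchronising 10 GB of gradients per training step.
per_gpu = ring_allreduce_bytes(256, 10 * 10**9)
assert round(per_gpu / 10**9, 2) == 19.92  # ~2x the gradient size, per GPU
```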
Map Marvell’s full ASIC and interconnect patent strategy with PatSnap Eureka’s competitive intelligence tools.
Head-to-Head: Market Share, Revenue, and Route Differentiation
Broadcom holds approximately 80% of the Ethernet switching ASIC market in 2025 against Marvell’s roughly 10% — a gap that the Innovium acquisition has not yet materially closed in switching, though Marvell is gaining ground in custom silicon. Broadcom’s custom AI ASIC revenue is estimated at $12 billion versus Marvell’s $1.5 billion; Marvell’s 60% year-over-year growth outpaces Broadcom’s 40%, although Broadcom is compounding from a far larger base.
The route differentiation matrix reveals clear asymmetries. Broadcom leads on switching bandwidth (Tomahawk 6 first to 102.4Tbps), CXL-over-Ethernet innovation (US12360937B2), and DSP-based SerDes performance (US12401346B2). Marvell leads on custom ASIC platform agility, ecosystem openness (UALink and NVLink Fusion simultaneously), enhanced P4 programmability from Innovium heritage, and advanced in-band network telemetry. Both companies are active in PCIe/CXL standards bodies, representing a point of competitive parity.
The structural dynamic that most favours Broadcom is winner-take-most economics: hyperscalers standardise on one switching platform per generation, and late entrants face steep adoption barriers driven by software ecosystem lock-in, supply chain commitments, and operational familiarity. Marvell’s share gains in custom silicon are real, but its path to closing the Ethernet switching gap remains slow.
Intel and NVIDIA: Ecosystem Roles and Adjacent Competitive Pressure
Intel and NVIDIA are not direct competitors in merchant Ethernet switch ASICs, but their technology choices shape the arena in which Broadcom and Marvell compete. Intel’s 102 CXL/PCIe patents — and its role as inventor and primary contributor to the Compute Express Link specification — make it the standards kingmaker. Intel’s Infrastructure Processing Unit (IPU) is complementary to switch ASICs rather than competitive, and Intel’s Xeon CPU integration drives PCIe Gen5/Gen6 and CXL adoption across the data center, enabling the CXL-over-Ethernet innovations that Broadcom is now commercialising. As documented by the IEEE, CXL’s cache-coherent memory semantics represent a fundamental shift in how compute and memory resources are disaggregated at rack scale.
NVIDIA’s position is more directly disruptive. Its 1,065 NVLink and networking patents and NVLink’s 900GB/s bidirectional GPU-to-GPU bandwidth (versus roughly 100GB/s for a single 800GbE port) mean that within a 256-GPU pod, NVIDIA’s NVSwitch handles all-to-all GPU connectivity without touching a Broadcom or Marvell Ethernet switch. Broadcom’s Tomahawk 5/6 dominates pod-to-pod and rack-to-rack connectivity at 51.2Tbps+ — a complementary rather than competitive position at the cluster scale. Marvell’s NVLink Fusion partnership, however, enables hyperscalers to integrate NVLink into custom accelerators, potentially reducing NVIDIA’s vertical integration advantage over time. According to Next Platform, this represents the “compute engine independence wave” reshaping hyperscaler silicon strategy.
NVIDIA’s NVLink provides 900GB/s bidirectional GPU-to-GPU bandwidth and dominates intra-cluster communication within 256-GPU pods via NVSwitch, bypassing Ethernet switching entirely. Broadcom’s Tomahawk 5 and Tomahawk 6 dominate inter-cluster pod-to-pod and rack-to-rack connectivity at 51.2Tbps and 102.4Tbps respectively.
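One caution when reading these figures side by side: NVLink is quoted in gigabytes per second (bidirectional), while Ethernet ports are quoted in gigabits per second. Normalising the units makes the gap concrete:

```python
# Normalise the two bandwidth figures quoted above into the same units.
# Vendor marketing mixes gigaBYTES/s (NVLink, bidirectional) with
# gigaBITS/s (Ethernet, per port per direction).
nvlink_gbytes_bidir = 900                # NVLink, GB/s, bidirectional
eth_port_gbits = 800                     # one 800GbE port, Gb/s, per direction
eth_gbytes_unidir = eth_port_gbits / 8   # -> 100 GB/s in one direction
assert eth_gbytes_unidir == 100.0

# Even counting the Ethernet port in both directions (200 GB/s aggregate),
# NVLink's quoted figure is still 4.5x higher.
assert nvlink_gbytes_bidir / (2 * eth_gbytes_unidir) == 4.5
```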
The BlueField DPU further complicates the landscape: it integrates Ethernet switching, NVLink, and ARM processing cores into a single device, giving NVIDIA a foothold in the SmartNIC/DPU segment that Intel’s IPU also targets. Neither directly displaces Broadcom’s top-of-rack switching position, but both reduce the addressable market for pure-play switch ASICs over time.
R&D Outlook to 2026 and Beyond: White Spaces and Strategic Risks
The sector is converging toward CXL-enabled composable infrastructure, where memory, compute, and accelerator resources are dynamically allocated across Ethernet or CXL fabrics. Broadcom’s roadmap targets Tomahawk 7 at 204.8Tbps (128×1.6TbE) in 2027, enhanced CXL 3.0 features including memory pooling and peer-to-peer fabric management, and co-packaged optics (CPO) integration of silicon photonics directly into the switch ASIC to eliminate the electrical SerDes bottleneck. Marvell’s priorities include Teralynx 10 at 102.4Tbps parity with Tomahawk 6, UALink 2.0 specification development, custom AI ASIC platform v2 on 3nm with HBM4, and autonomous driving AI accelerator expansion leveraging its automotive Ethernet expertise.
CXL-native switch ASICs — dedicated CXL switching rather than CXL-over-Ethernet — remain underexplored. They would enable native CXL.mem and CXL.cache with sub-50ns latency, compared to CXL-over-Ethernet’s approximately 100ns. This requires CXL 3.0 fabric management expertise and tight integration with CPU and accelerator vendors. A new entrant or Intel could capture this white space for latency-critical memory pooling use cases.
Five risks could materially alter this competitive landscape. First, the 18-month patent publication delay means 2025–2026 competitive intensity is higher than currently visible data suggests. Second, if hyperscalers — Google, Meta, Microsoft — develop in-house switch ASICs, both Broadcom and Marvell face disintermediation risk. Third, the outcome of the open UALink versus proprietary NVLink standards contest will determine whether Marvell’s ecosystem strategy succeeds. Fourth, co-packaged optics commercialisation delays driven by yield, cost, and ecosystem readiness could slow bandwidth scaling beyond 102.4Tbps. Fifth, US export controls on advanced AI chips and networking ASICs limit the addressable market in China for both Broadcom and Marvell.
For R&D decision-makers, the data supports three planning assumptions: Broadcom remains the safe bet for Ethernet switching ASICs with an 80% market share and a demonstrated two-year technology lead; Marvell is the challenger to monitor for custom silicon share gains, particularly if UALink achieves multi-vendor GPU interoperability; and the CXL-native switch ASIC white space represents an uncontested opportunity for a new entrant or Intel. As WIPO’s global innovation tracking confirms, the semiconductor interconnect space is among the fastest-growing patent domains globally — and the pace is accelerating.