Nvidia Invests $2B in Marvell for AI Networking Tie-Up

Paul Jackson

March 31, 2026

Key Points

  • Nvidia (NVDA) invested $2 billion in Marvell Technology (MRVL) tied to an AI partnership.
  • The companies will collaborate on optical interconnects and silicon photonics for data center AI networking.
  • Marvell will provide custom chips compatible with NVLink Fusion, while Nvidia supplies CPUs, NICs, and interconnect technologies.

What Happened

Nvidia (NVDA) invested $2 billion in Marvell Technology (MRVL) as the two companies launch a partnership aimed at making it easier for customers to deploy custom AI chips alongside Nvidia’s networking gear and central processors.

In Tuesday's market reaction, Marvell shares rose about 7% and Nvidia shares gained about 2.7%.

Why Nvidia Is Leaning Into Custom Silicon

The backdrop is a widening range of AI buildouts inside the data center. Some large buyers are choosing custom processors rather than relying solely on Nvidia's high-end GPUs, and Nvidia's move signals it wants to stay central even as the compute mix becomes more diverse.

Together, the investment and partnership position Nvidia closer to workloads that may not be powered exclusively by its own processors, while keeping the broader system anchored to Nvidia's networking and platform technologies.

Key elements of the strategic logic include:

  • Keeping Nvidia embedded in AI infrastructure even when customers choose semi-custom silicon.
  • Using networking and interconnect standards as the “glue” that ties diverse AI systems together.
  • Targeting data center bottlenecks such as bandwidth and power efficiency.

What The Partnership Focuses On

The companies plan to work on advanced networking solutions for AI, with a focus on optical interconnects and silicon photonics—technology aimed at enabling high-speed, energy-efficient data transmission inside large AI systems.

The collaboration is designed around interoperability: helping custom chips designed with Marvell work smoothly with Nvidia’s networking and compute building blocks inside AI data centers.

The technical areas called out include:

  • Optical interconnects to move more data with lower energy use.
  • Silicon photonics to support high-speed, energy-efficient transmission.
  • Scaling AI systems where bandwidth and power are key constraints.

Who Provides What

The companies outlined complementary roles. Marvell will contribute custom chips and networking solutions that are compatible with Nvidia's NVLink Fusion. Nvidia will supply supporting technologies, including central processing units, network interface cards, and interconnects.

At a high level, the split looks like this:

  • Marvell: custom chips and networking solutions compatible with NVLink Fusion.
  • Nvidia: CPUs, NICs, and system interconnect technologies to support deployments.
  • Joint focus: networking solutions built for AI data center scale.

Demand Tailwinds And Marvell’s Longer-Term Goal

AI infrastructure spending remains a major demand driver for server and networking chips. Alphabet (GOOGL) and Meta (META) are expected to spend at least $630 billion to build AI infrastructure this year, supporting demand across compute, memory, and networking supply chains.

Separately, Marvell has said it expects revenue to grow nearly 40% and approach $15 billion in fiscal 2028—a target investors will continue to weigh against execution and the pace of data center buildouts.

What investors will watch next: how quickly solutions tied to NVLink Fusion show up in customer deployments, and whether optical interconnect initiatives translate into meaningful product momentum.

WSA Take

This is a clear signal that Nvidia is working to stay indispensable even as more customers experiment with custom AI processors. By pairing a $2 billion investment with a networking-focused partnership, Nvidia is reinforcing the idea that data center AI is as much about interconnects and system design as it is about raw compute. For U.S. investors, the near-term read-through is that both companies are leaning into the highest-growth part of AI infrastructure: high-bandwidth, power-aware networking. The key question is whether this cooperation turns into broad customer adoption that strengthens platform stickiness across the AI data center stack.



Disclaimer

WallStAccess is a financial media platform providing market commentary and analysis for informational and educational purposes only. This content does not constitute investment advice, a recommendation, or an offer to buy or sell any securities. Readers should conduct their own research or consult a licensed financial professional before making investment decisions.

