By Krishna Mallampati, Senior Director of Product Marketing, Data Center Switching, Marvell
Since its introduction in 2004, PCIe® has become the most popular interconnect for low-latency chip-to-chip connections. From its beginnings as a fan-out interconnect, PCIe has spread into AI and cloud servers, JBOF storage systems, automotive ADAS, industrial automation, PCs, and other platforms.
Scale-up AI servers, which can contain hundreds of processors spread across multiple racks, represent the next logical step for PCIe. Although far larger than today's single-chassis AI servers, scale-up systems demand the same thing from their interconnect fabrics: coherent, low-latency links that enable fast, secure communication between components. PCIe's status as a widely used standard that evolves to meet customer demands further puts it at the forefront for scale-up.
Let’s explore the PCIe scale-up usage model and how these architectures will evolve.
PCIe Scale-up Usage Model

By Jianping Jiang, Head of Product Marketing, CXL Switch, Marvell
The AI memory wall, the gap between the memory capacity and bandwidth that AI infrastructure demands and the amount that conventional memory architectures can deliver, is widening at an alarming pace.
And the consequences are getting increasingly ominous for data center operators and their customers: idle XPUs, underutilized equipment, longer processing times, higher costs, and ultimately a lower return on investment. Meanwhile, memory (already second only to GPUs in data center semiconductor spend1) continues to soar in price.
The Marvell® Structera™ S family of Compute Express Link (CXL) switches scales the memory wall by providing a pathway for adding terabytes of shareable memory to infrastructure and dynamically allocating bandwidth and capacity to boost utilization and application performance. CXL switches don't just add memory bandwidth and capacity; they enable data center operators to use memory more wisely, too.
Structera S is the successor to the groundbreaking Apollo line of CXL switches developed by XConn Technologies, now part of Marvell. Structera S 20256 for PCIe 5.0/CXL 2.0 (previously the XConn Apollo I) became the first commercially available CXL switch upon its release last year.
Marvell is expanding the family with Structera S 30260 for PCIe 6.0/CXL 3.x. Structera S 30260 supports 16 or 32 CPUs or GPUs across 260 lanes, with up to 48 TB of shared memory and 4 TB/s of cumulative bandwidth. Marvell is showcasing Structera S 30260 in a live demonstration this week at OFC 2026 and plans to sample to customers in 3Q 2026.
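As a rough sanity check, the 4 TB/s figure follows from the lane count. The sketch below assumes the raw PCIe 6.0 per-lane rate of roughly 8 GB/s in each direction and ignores protocol overhead; it is a back-of-the-envelope illustration, not a published throughput spec.

```python
# Back-of-the-envelope check on the Structera S 30260 figures quoted above.
# Assumes raw PCIe 6.0 signaling (64 GT/s PAM4, ~8 GB/s per lane per
# direction), before protocol overhead.

LANES = 260
GB_PER_S_PER_LANE = 8       # PCIe 6.0: ~8 GB/s each way per lane
SHARED_MEMORY_TB = 48
HOSTS = 32                  # the switch supports 16 or 32 CPUs/GPUs

cumulative_tb_s = LANES * GB_PER_S_PER_LANE * 2 / 1000  # both directions
print(f"~{cumulative_tb_s:.1f} TB/s cumulative")        # ~4.2 TB/s
print(f"{SHARED_MEMORY_TB / HOSTS:.1f} TB of pooled memory per host")  # 1.5 TB
```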
By Henry Chen, Senior Director, Optical DSP Marketing, Marvell
In AI infrastructure, every electron matters.
That is the underlying principle behind Marvell® Ara T, the industry's first 1.6T transmit-only (TRO) optical digital signal processor (DSP) for AI and cloud interconnects. Designed for high-bandwidth, mid-length links spanning 5 to 500 meters, Ara T can reduce optical module power consumption by more than 35%, delivering meaningful savings at scale.
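To put "meaningful savings at scale" in perspective, here is a simple what-if calculation. Only the 35% figure comes from this post; the per-module wattage and fleet size are hypothetical assumptions chosen for illustration.

```python
# Illustrative sketch of what a >35% module-power reduction means at fleet
# scale. The 25 W per-module figure and the 100,000-module fleet are
# hypothetical assumptions, not numbers from this post.

MODULE_POWER_W = 25.0       # assumed draw of a conventional 1.6T module
SAVINGS_FRACTION = 0.35     # "more than 35%" per the post
FLEET_MODULES = 100_000     # hypothetical AI cluster deployment

saved_per_module_w = MODULE_POWER_W * SAVINGS_FRACTION
fleet_savings_kw = saved_per_module_w * FLEET_MODULES / 1000

print(f"~{saved_per_module_w:.1f} W saved per module")        # ~8.8 W
print(f"~{fleet_savings_kw:,.0f} kW saved across the fleet")  # ~875 kW
```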
Ara T extends Marvell's leadership in 1.6T optics and interconnect technology and advances the company's strategy to raise infrastructure ROI and efficiency through optimized silicon.
Marvell will showcase Ara T at OFC 2026 in Los Angeles, March 17–19.
By Xi Wang, Senior Vice President and General Manager of the Connectivity Business Unit, Marvell
Marvell has become a founding member of the eXtra dense Pluggable Optics (XPO) Multi-Source Agreement (MSA), an industry initiative organized by Arista Networks to define a new optical transceiver form factor purpose-built for AI-scale infrastructure.
The XPO concept is designed to dramatically increase bandwidth density by enabling liquid cooling at the module level. XPO modules are substantially larger than the octal small form factor pluggable (OSFP) modules commonly deployed in today's data centers, but they deliver a step-function increase in performance. Each XPO module integrates 64 lanes operating at 200 Gbps, eight times the lane count of current pluggable modules, for a total of 12.8 Tbps of bandwidth per module.1
This leap in bandwidth is enabled in part by an integrated cold plate that can deliver up to 400W of cooling per module. The combination of larger modules, significantly higher lane counts, and liquid cooling delivers a four-fold increase in bandwidth density for switches across scale-up, scale-out, or scale-across network architectures.
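The arithmetic behind the per-module figure is straightforward, as the sketch below shows. The 8-lane OSFP baseline and the 2x faceplate-area ratio are illustrative assumptions used only to connect the 8x lane count to the quoted ~4x density gain; they are not values published by the MSA.

```python
# Quick check of the XPO bandwidth math quoted above.

XPO_LANES, OSFP_LANES = 64, 8   # OSFP baseline: assumed 8-lane pluggable
GBPS_PER_LANE = 200

xpo_tbps = XPO_LANES * GBPS_PER_LANE / 1000    # 12.8 Tbps per module
osfp_tbps = OSFP_LANES * GBPS_PER_LANE / 1000  # 1.6 Tbps per module
print(f"XPO: {xpo_tbps:.1f} Tbps per module ({xpo_tbps / osfp_tbps:.0f}x OSFP)")

# 8x the bandwidth in roughly 2x the faceplate area (an assumption) works
# out to the quoted four-fold gain in bandwidth density.
ASSUMED_AREA_RATIO = 2.0
print(f"Bandwidth density gain: ~{xpo_tbps / osfp_tbps / ASSUMED_AREA_RATIO:.0f}x")
```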
By Vienna Alexander, Marketing Content Professional, Marvell

At DesignCon 2026, Senior Staff Engineer Sonam Sadhukhan of Marvell was honored in the 40 Under 40 program and selected as one of the top six speakers to participate in a panel session.
The 40 Under 40 recognition, launched in 2024, provides an opportunity for innovative engineers to further their careers through exclusive access to DesignCon education, sessions, and high-level networking opportunities. These emerging leaders are chosen through an open nomination and review process.
Copyright © 2026 Marvell. All rights reserved.