By Jianping Jiang, Head of Product Marketing, CXL Switch, Marvell
The AI memory wall—the gap between the memory capacity and bandwidth that AI infrastructure demands and what conventional memory architectures can deliver—is widening at an alarming pace.
And the consequences are getting increasingly ominous for data center operators and their customers: idle XPUs, underutilized equipment, longer processing times, higher costs, and ultimately a lower return on investment. Meanwhile, memory—already second only to GPUs in data center semiconductor spend1—continues to soar in price.
The Marvell® Structera™ S family of Compute Express Link (CXL) switches scales the memory wall by providing a pathway for adding terabytes of shareable memory to infrastructure and dynamically allocating bandwidth and capacity to boost utilization and application performance. CXL switches don’t just boost memory bandwidth and capacity; they enable data center operators to use memory more wisely too.
Structera S is the successor to the groundbreaking Apollo line of CXL switches developed by XConn Technologies, now part of Marvell. Structera S 20256 for PCIe Gen 5.0/CXL 2.0 (previously the XConn Apollo I) became the first commercially available CXL switch upon its release last year.
Marvell is expanding the family with Structera S 30260 for PCIe 6.0/CXL 3.x. Structera S 30260 supports 16 or 32 CPUs or GPUs over 260 lanes, with up to 48TB of shared memory and 4TB/second of cumulative bandwidth. Marvell is showcasing Structera S 30260 in a live demonstration this week at OFC 2026 and plans to begin sampling to customers in 3Q 2026.
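As a rough sanity check (the per-lane figures below are standard PCIe 6.0 parameters, not numbers from this announcement), the 4TB/second cumulative bandwidth is consistent with 260 lanes of PCIe 6.0 signaling counted in both directions:

```python
# Back-of-the-envelope check of the Structera S 30260 headline bandwidth.
# Assumption (standard PCIe 6.0, not stated in the article): 64 GT/s per
# lane, which at 1 bit per transfer is ~8 GB/s raw per lane, per direction.
GT_PER_LANE = 64                       # GT/s, PCIe 6.0 signaling rate
BYTES_PER_TRANSFER = 1 / 8             # one bit per transfer, in bytes
gbytes_per_lane = GT_PER_LANE * BYTES_PER_TRANSFER   # ~8 GB/s per direction

lanes = 260
per_direction_tbps = lanes * gbytes_per_lane / 1000  # TB/s, one direction
cumulative_tbps = 2 * per_direction_tbps             # both directions

print(f"per direction: {per_direction_tbps:.2f} TB/s")
print(f"cumulative:    {cumulative_tbps:.2f} TB/s")
```

Counting raw throughput in both directions across all 260 lanes lands at roughly 4.2 TB/s, in line with the quoted 4TB/second cumulative figure once protocol overhead is accounted for.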
By Krishna Mallampati, Senior Director of Product Marketing, Data Center Switching, Marvell
Since its introduction in 2004, PCIe® has become the most popular interconnect for low-latency chip-to-chip connections. From its humble beginnings in fan-out interconnects, PCIe has been integrated into AI and cloud servers, JBOF storage systems, automotive ADAS systems, industrial automation, PCs, and other platforms.
Scale-up AI servers—which can contain hundreds of processors spread over multiple racks—represent the next logical step for PCIe. Although far larger than today’s single-chassis AI servers, scale-up servers demand the same thing from interconnect fabrics: coherent, low-latency links that enable fast, secure communication between components. PCIe’s status as a widely used standard that evolves to meet customer demands further puts it at the forefront for scale-up.
Let’s explore the PCIe scale-up usage model and how these architectures will evolve.
PCIe Scale-up Usage Model

By Xi Wang, Senior Vice President and General Manager of the Connectivity Business Unit, Marvell
Marvell has become a founding member of the eXtra dense Pluggable Optics (XPO) Multi-Source Agreement (MSA), an industry initiative organized by Arista Networks to define a new optical transceiver form factor purpose-built for AI-scale infrastructure.
The XPO concept is designed to dramatically increase bandwidth density by enabling liquid cooling at the module level. XPO modules are substantially larger than the octal small form factor pluggable (OSFP) modules commonly deployed in today’s data centers, but they deliver a step-function increase in performance. Each XPO module integrates 64 lanes operating at 200 Gbps, eight times more than current pluggable modules, for a total of 12.8 Tbps of bandwidth per module.1
This leap in bandwidth is enabled in part by an integrated cold plate that can deliver up to 400W of cooling per module. The combination of larger modules, significantly higher lane counts, and liquid cooling delivers a four-fold increase in bandwidth density for switches across scale-up, scale-out and scale-across network architectures.
By Henry Chen, Senior Director, Optical DSP Marketing, Marvell
In AI infrastructure, every electron matters.
That is the underlying principle behind Marvell® Ara T, the industry’s first 1.6T transmit-only (TRO) optical digital signal processor (DSP) for AI and cloud interconnects. Designed for high-bandwidth, mid-length links spanning 5 to 500 meters, Ara T can reduce optical module power consumption by more than 35%, delivering meaningful savings at scale.
Ara T extends Marvell’s leadership in 1.6T optics and interconnect technology and advances the company’s strategy to raise infrastructure ROI and efficiency through optimized silicon.
Marvell will showcase Ara T at OFC 2026 in Los Angeles, March 17–19.
By Michael Arsenault, Director of Product Marketing for AEC DSPs, Marvell
Rack connectivity is undergoing a historic transformation. Data center operators are demanding both scale-up and scale-out connectivity that can move more data across longer distances and between more systems, while delivering unprecedented levels of energy efficiency and reliability.
To help cable providers and their customers meet these challenges, Marvell has launched the Golden Cable initiative, designed to accelerate the development of active electrical cables (AECs). AECs are a rapidly growing class of high-bandwidth, enhanced copper interconnects used to link servers, switches, NICs and other assets in the same rack or across adjacent racks (about two to nine meters).
The Golden Cable initiative delivers a validated cable architecture tested across leading platforms and built on industry-leading software, reference designs, technical data, firmware and comprehensive support. Participants can combine these assets with their own technology to develop unique AECs powered by DSPs, optimized for specific customer requirements and use cases.
To further enhance performance and ensure broad compatibility, Golden Cable AECs are rigorously tested in the Marvell Cloud Interoperability Lab. Here, cables are validated across a wide range of customized configuration scenarios involving leading XPUs, CPUs, NICs, servers, switches, optical modules and other critical infrastructure components. This process enables Marvell and its partners to validate AEC firmware before cables reach end-customers, significantly accelerating customer qualification and deployment timelines. The result is greater confidence from the first plug-in.
The Golden Cable initiative is designed to rapidly scale and empower the cable partner ecosystem, enabling Marvell to meet accelerating market demand at true hyperscale speed. By operating in close alignment with key partners, Marvell is achieving many of the benefits of near‑vertical integration, while maintaining the flexibility and scalability of a partner‑driven model.