By Preet Virk, Senior Vice President and General Manager, Photonic Fabric Business Unit
Modern AI infrastructure is built around multi-rack systems where thousands to tens of thousands of accelerators operate as a single logical compute element. As agentic AI and Mixture of Experts (MoE) models accelerate AI adoption, they are driving unprecedented scale and communication demands across data center infrastructure. These systems are connected by scale-up and scale-out networks that must deliver high bandwidth, low latency and power efficiency. As these networks extend across racks, maintaining that performance becomes a primary challenge.
As AI systems grow in complexity and scale, the network becomes the backbone of the compute system. Large-scale clusters require massive XPU-to-XPU communication, driving an evolution beyond legacy protocols like PCIe® to encompass UALink™ (Ultra Accelerator Link), ESUN (Ethernet for Scale-Up Networking) and NVLink.
Meeting these requirements demands a new approach to connectivity. Marvell provides a comprehensive AI connectivity portfolio spanning scale-up, scale-out, scale-across and DCI (data center interconnect) network architectures. For scale-up networking, Marvell delivers copper and optical interconnects connecting XPUs, switches and memory. Within the rack, Marvell copper solutions provide low-latency, power-efficient short-reach connectivity, while Marvell optical interconnects enable high-performance scaling beyond the rack. This enables XPUs to operate as a more efficient, unified system as scale-up domains expand.
By Joseph Chon, Senior Director, Product Marketing, Data Center Interconnect, Marvell
MACsec is moving to the module in scale-across networks.
Media Access Control Security (MACsec) is a foundational technology for protecting data in motion. It encrypts and authenticates Ethernet traffic to guard against eavesdropping, denial-of-service attacks, intrusion and other security threats while also strengthening overall data integrity. Implemented in silicon, MACsec further establishes a robust root of trust for managing encryption keys and securing the boot process.
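To make the mechanics concrete, here is a minimal Python sketch of MACsec-style frame protection using the GCM-AES-128 cipher suite that IEEE 802.1AE specifies. It relies on the open-source `cryptography` package and is illustrative only: real MACsec runs in silicon, with SecTAG framing, MKA key agreement and replay windows that this sketch omits.

```python
# Minimal sketch of MACsec-style protection (GCM-AES-128, per IEEE 802.1AE).
# Illustrative only: real MACsec is implemented in hardware with SecTAG
# framing, MKA key agreement and replay-window enforcement.
import os
import struct

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)  # Secure Association Key (SAK)
aead = AESGCM(key)

sci = os.urandom(8)                  # Secure Channel Identifier (MAC address + port)
pn = 1                               # packet number, also the basis of replay protection
nonce = sci + struct.pack(">I", pn)  # 12-byte GCM IV = SCI || PN

eth_header = bytes(12) + b"\x88\xe5"  # placeholder dst/src MACs + MACsec EtherType 0x88E5
payload = b"data in motion across the scale-across network"

# The Ethernet header (and SecTAG, omitted here) is authenticated but not
# encrypted; the payload is both encrypted and authenticated. GCM appends a
# 16-byte tag, which plays the role of the MACsec ICV.
protected = aead.encrypt(nonce, payload, eth_header)

# Receiver side: any tampering with the header or payload raises InvalidTag,
# which is how integrity violations surface.
assert aead.decrypt(nonce, protected, eth_header) == payload
```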
What’s changing is where the silicon for delivering MACsec gets located.
To date, the MACsec circuitry for long-distance scale-across networks has typically been embedded in the switch ASIC, where die area is at an absolute premium. Embedding MACsec in the tight confines of the ASIC raises the cost of integrating the technology. It also makes infrastructure less flexible: some upgrades require taking the system offline, reducing overall capacity.
By Alua Suleimenova, Senior Sustainability Program Manager and Vienna Alexander, Marketing Content Professional, Marvell

This Earth Day, Marvell is proud to be recognized on USA Today’s America’s Climate Leaders 2026 list. This marks the second consecutive year Marvell has received this distinction, reflecting continued progress in reducing the company’s carbon footprint, increasing renewable energy procurement, and advancing toward sustainability targets.
This recognition builds on several other recent honors. Over the past year, Marvell has been acknowledged for its strong sustainability and ethics practices, including being named one of Ethisphere’s Most Ethical Companies, earning recognition as one of America’s Most Responsible Companies, and achieving CDP Sustainability Supplier Engagement Leader status. Together, these accolades underscore the commitment Marvell makes to responsible business practices across its operations, supply chain and products.
By Todd Rope, Vice President of Software Engineering at Marvell
Optical circuit switching (OCS) has become one of the fastest-growing segments in networking, with revenue expected to exceed $3.5 billion by 2029, more than 2x the 2025 level.¹ The unique architecture of OCS systems, however, also means that developers and data center operators need to ensure that these systems can seamlessly integrate into data infrastructure and interoperate with existing product lines.
Lumentum and Marvell took a significant step toward that goal with a live demonstration at OFC 2026 that combined the Lumentum R300 OCS system with different classes of modules powered by Marvell optical DSPs: inside-the-data-center modules powered by the Marvell® Ara 1.6T DSP (5m to 2km interconnects), coherent-lite modules built on the 1.6T Marvell Aquila DSP for campus-scale connections (2 to 20km), and long-range COLORZ® 800T ZR/ZR+ modules for 10 to 1,000km data center interconnects.
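To summarize the reach tiers in the demo, here is a small illustrative Python helper. The thresholds mirror the reach figures quoted above; the function itself is ours, not a Marvell or Lumentum API.

```python
# Hypothetical helper mapping link reach to the module classes shown in the
# OFC 2026 demo. Tiers reflect the reach figures quoted above; the function
# is illustrative, not part of any Marvell or Lumentum product.
def module_class(reach_km: float) -> str:
    if reach_km <= 2:
        return "Ara 1.6T module (inside the data center, 5m to 2km)"
    if reach_km <= 20:
        return "Aquila 1.6T coherent-lite module (campus, 2 to 20km)"
    if reach_km <= 1000:
        return "COLORZ 800T ZR/ZR+ module (DCI, 10 to 1,000km)"
    raise ValueError("beyond the reaches demonstrated")

print(module_class(0.1))   # intra-data-center link
print(module_class(15))    # campus link
print(module_class(600))   # long-haul data center interconnect
```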
Marvell RELIANT™, a new software platform for analyzing equipment performance and optimizing networks in real time, was also used in the demo to monitor data transmission, power consumption, bit error rate and other metrics. In the video, Michael DeMerchant, senior director of product line management at Lumentum, and I walk through more of what RELIANT can accomplish with OCS.
By Vienna Alexander, Marketing Content Professional, Marvell
At the Heterogeneous Composable and Disaggregated Systems (HCDS) Workshop, co-located with Architectural Support for Programming Languages and Operating Systems (ASPLOS), Senior Staff Engineer Jing Ding won Best Paper for her research on the Marvell® Photonic Fabric™ Technology Platform.
By characterizing KV cache retrieval efficiency, the paper reveals a critical mismatch between the capacity and bandwidth available across memory tiers and the demands of large-scale LLM inference. Across models from LLaMA3-8B to 405B on NVIDIA A100/H200 systems, retrieving the KV cache from host memory achieves up to a 100x speedup over GPU re-computation for contexts up to 4M tokens, yet host DRAM capacity cannot accommodate the KV cache demands of long-context, multi-tenant and multi-turn workloads.
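A back-of-envelope calculation shows why host DRAM runs out. Assuming the public LLaMA-3 configurations (32 decoder layers for 8B, 126 for 405B, both with 8 grouped-query KV heads of dimension 128) and fp16 storage, a 4M-token context implies roughly the sizes below; these figures are our illustration, not numbers from the paper.

```python
# Back-of-envelope KV cache sizing. Model shapes are the public LLaMA-3
# configs; fp16 storage assumed. Illustrative figures, not from the paper.
def kv_cache_bytes(tokens: int, layers: int,
                   kv_heads: int = 8, head_dim: int = 128,
                   bytes_per_elem: int = 2) -> int:
    # Factor of 2 covers both the key and the value tensors per layer.
    return 2 * layers * kv_heads * head_dim * bytes_per_elem * tokens

for name, layers in [("LLaMA3-8B", 32), ("LLaMA3-405B", 126)]:
    size = kv_cache_bytes(tokens=4_000_000, layers=layers)
    print(f"{name}: {size / 2**40:.2f} TiB of KV cache at 4M tokens")
# LLaMA3-8B:   ~0.48 TiB -- already heavy for multi-tenant host DRAM
# LLaMA3-405B: ~1.88 TiB -- far beyond a single server's DRAM, which is
# where a pod-scale shared-memory pool becomes attractive
```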
While CXL-enabled memory pooling could be applied here, it faces fundamental electrical-interconnect limitations: rack-scale distance constraints, switch contention under multi-host workloads, and power and thermal scaling challenges. By leveraging the Photonic Fabric™ optical interconnect technology platform to break those reach limitations, with CXL as the host communication protocol, Marvell enables a unique pod-scale memory-sharing appliance that allows up to 16 servers across multiple racks to dynamically share up to 32 TB of memory capacity.