Posts Tagged 'Optical DSPs'

  • March 17, 2026

    Structera S: Scaling the AI Memory Wall with CXL Switching

    By Jianping Jiang, Head of Product Marketing, CXL Switch, Marvell

    The AI memory wall—the gap between the memory capacity and bandwidth that AI infrastructure demands and what conventional memory architectures can deliver—is widening at an alarming pace.

    And the consequences are getting increasingly ominous for data center operators and their customers: idle XPUs, underutilized equipment, longer processing times, higher costs, and ultimately a lower return on investment. Meanwhile, memory—already second only to GPUs in datacenter semiconductor spend1—continues to soar in price.

    The Marvell® Structera™ S family of Compute Express Link (CXL) switches scales the memory wall by providing a pathway for adding terabytes of shareable memory to infrastructure and dynamically allocating bandwidth and capacity to boost utilization and application performance. CXL switches don’t just boost memory bandwidth and capacity; they enable data center operators to use them more wisely too.

    Structera S is the successor to the groundbreaking Apollo line of CXL switches developed by XConn Technologies, now part of Marvell. Structera S 20256 for PCIe Gen 5.0/CXL 2.0 (previously the XConn Apollo I) became the first commercially available CXL switch upon its release last year.

    Marvell is expanding the family with Structera S 30260 for PCIe 6.0/CXL 3.x. Structera S 30260 supports 16 or 32 CPUs or GPUs across 260 lanes, with up to 48TB of shared memory and 4TB/second of cumulative bandwidth. Marvell is showcasing Structera S 30260 in a live demonstration this week at OFC 2026 and plans to begin sampling to customers in 3Q 2026.
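    The headline bandwidth figure is consistent with PCIe 6.0 per-lane rates. A minimal back-of-envelope sketch, assuming 64 GT/s per lane and ignoring FLIT-mode encoding overhead (neither figure is stated in the post):

    ```python
    # Rough sanity check of the Structera S 30260 aggregate bandwidth claim.
    # Assumptions (not from the post): PCIe 6.0 signals at 64 GT/s per lane,
    # and protocol/encoding overhead is ignored for a ballpark estimate.

    GT_PER_LANE = 64   # gigatransfers/s per PCIe 6.0 lane (~64 Gb/s)
    LANES = 260        # lanes on the Structera S 30260

    gbytes_per_lane_dir = GT_PER_LANE / 8                  # ~8 GB/s per lane, per direction
    per_direction_tb = LANES * gbytes_per_lane_dir / 1000  # ~2.08 TB/s one way
    cumulative_tb = 2 * per_direction_tb                   # both directions combined

    print(f"per direction: {per_direction_tb:.2f} TB/s")
    print(f"cumulative:   {cumulative_tb:.2f} TB/s")  # ~4 TB/s, in line with the post
    ```

    The roughly 4TB/s total only emerges when both traffic directions are counted, which suggests the post's "cumulative bandwidth" figure is bidirectional.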

  • March 17, 2026

    The Next Step for PCIe: Scale-up Fabrics for AI

    By Krishna Mallampati, Senior Director of Product Marketing, Data Center Switching, Marvell

    Since its introduction in 2004, PCIe® has become the most popular interconnect for low-latency chip-to-chip connections. From its humble beginnings as a fan-out interconnect, PCIe has spread into AI and cloud servers, JBOF storage systems, automotive ADAS systems, industrial automation, PCs, and other platforms.

    Scale-up AI servers—which can contain hundreds of processors spread over multiple racks—represent the next logical step for PCIe. Although far larger than today’s single-chassis AI servers, scale-up servers demand the same thing from interconnect fabrics: coherent, low-latency links that enable fast, secure communication between components. PCIe’s status as a widely used standard that evolves to meet customer demands further puts it at the forefront for scale-up.

    Let’s explore the PCIe scale-up usage model and how these architectures will evolve.

    PCIe Scale-up Usage Model

  • March 16, 2026

    Marvell Joins XPO MSA To Accelerate Innovation in AI Optical Modules

    By Xi Wang, Senior Vice President and General Manager of the Connectivity Business Unit, Marvell

    Marvell has become a founding member of the eXtra dense Pluggable Optics (XPO) Multi-Source Agreement (MSA), an industry initiative organized by Arista Networks to define a new optical transceiver form factor purpose-built for AI-scale infrastructure.

    The XPO concept is designed to dramatically increase bandwidth density by enabling liquid cooling at the module level. XPO modules are substantially larger than the octal small form factor pluggable (OSFP) modules commonly deployed in today’s data centers, but they deliver a step-function increase in performance. Each XPO module integrates 64 lanes operating at 200 Gbps, eight times as many lanes as current pluggable modules, for a total of 12.8 Tbps of bandwidth per module.1

    This leap in bandwidth is enabled in part by an integrated cold plate that can deliver up to 400W of cooling per module. The combination of larger modules, significantly higher lane counts, and liquid cooling delivers a four-fold increase in bandwidth density for switches across scale-up, scale-out, and scale-across network architectures.
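    The per-module figure follows directly from the lane count and per-lane rate. A quick sketch (the 8-lane OSFP comparison point is an assumption based on common current modules, not a number stated in the post):

    ```python
    # Sanity-check the XPO module bandwidth arithmetic from the post.
    lanes = 64           # electrical lanes per XPO module
    gbps_per_lane = 200  # Gbps per lane

    module_tbps = lanes * gbps_per_lane / 1000
    print(module_tbps)   # 12.8 Tbps per module, as stated

    # Assumption: today's OSFP pluggables carry 8 lanes, which is where the
    # "eight times" lane-count comparison comes from.
    osfp_lanes = 8
    lane_multiple = lanes // osfp_lanes
    print(lane_multiple)
    ```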

  • December 11, 2024

    O-Band Optics: A New Market for Optimizing the Cloud

    By Michael Kanellos, Head of Influencer Relations, Marvell

    Data infrastructure needs more: more capacity, speed, efficiency, bandwidth and, ultimately, more data centers. The number of data centers owned by the top four cloud operators has grown by 73% since 20201, while total worldwide data center capacity is expected to double to 79 gigawatts (GW) in the near future2.

    Aquila, the industry’s first O-band coherent DSP, marks a new chapter in optical technology. O-band optics lower the power consumption and complexity of optical modules for links ranging from two to 20 kilometers. O-band modules offer longer reach than the PAM4-based optical modules used inside data centers and shorter reach than C-band and L-band coherent modules. They provide users with an optimized solution for the growing number of data center campuses emerging to manage the expected AI data traffic.

    Take a deep dive into our O-band technology with Xi Wang’s blog, O-Band Coherent, An Idea Whose Time is (Nearly) Here, originally published in March, below: 

    O-Band Coherent: An Idea Whose Time Is (Nearly) Here 
    By Xi Wang, Vice President of Product Marketing of Optical Connectivity, Marvell

    Over the last 20 years, data rates for optical technology have climbed 1000x while power per bit has declined by 100x, a stunning trajectory that in many ways paved the way for the cloud, mobile Internet and streaming media.

    AI represents the next inflection point in bandwidth demand. Servers powered by AI accelerators and GPUs have far greater bandwidth needs than typical cloud servers: seven high-end GPUs alone can max out a switch that ordinarily can handle 500 two-processor cloud servers. Just as important, demand for AI services, and for higher-value AI services such as medical imaging or predictive maintenance, will further drive the need for more bandwidth. The AI market alone is expected to reach $407 billion by 2027.

  • June 12, 2023

    AI and the Tectonic Shift Coming to Data Infrastructure

    By Michael Kanellos, Head of Influencer Relations, Marvell

    AI’s growth is unprecedented from any angle you look at it. The size of large training models is growing 10x per year. ChatGPT’s 173 million-plus users are turning to the website an estimated 60 million times a day (compared to zero the year before). And daily, people are coming up with new applications and use cases.

    As a result, cloud service providers and others will have to transform their infrastructures in similarly dramatic ways to keep up, said Chris Koopmans, Chief Operations Officer at Marvell, in conversation with Futurum’s Daniel Newman during the Six Five Summit on June 8, 2023.

    “We are at the beginning of at least a decade-long trend and a tectonic shift in how data centers are architected and how data centers are built,” he said.  

    The transformation is already underway. AI training, and a growing percentage of cloud-based inference, has already shifted from running on two-socket servers built around general-purpose processors to systems containing eight or more GPUs or TPUs optimized to solve a smaller set of problems more quickly and efficiently.
