Marvell Blogs

Latest Marvell Blog Articles

  • March 19, 2026

    Marvell RELIANT: Real-time Insight for Optimizing AI Fabrics

    By Todd Rope, Vice President of Software Engineering, Marvell, and Wesley Yeh, UI Architect, Marvell

    Interconnects have become one of the fastest-growing markets in AI as hyperscalers race to expand their infrastructure. Annual port shipments and revenue for high-speed cables (100G and above) are expected to grow by 42% and 39%, respectively, through 2030, reaching a cumulative 1.4 billion units, according to research firm LightCounting.1 That surpasses the unit and revenue growth rates for GPUs, CPUs, memory devices and other major components.2

    As data center interconnects extend in reach and increase in speed, their impact on total cost of ownership and return on investment will only intensify. Achieving optimal performance at scale requires live visibility into interconnect fabrics.

    Marvell RELIANT (Reliable Link Analytics and Intelligence) is a new software platform that improves network health through real-time intelligence across scale-up, scale-out, and scale-across networks. Tapping into live operational data at the silicon level, RELIANT enables data center operators to visualize and monitor interconnects, conduct diagnostics and proactively prevent link flaps and other problems that can lead to downtime.

  • March 18, 2026

    Marvell Earns World’s Most Ethical Companies Designation

    By Vienna Alexander, Marketing Content Professional, Marvell

    Marvell was named one of the World’s Most Ethical Companies® for 2026 by Ethisphere, a global leader in defining and advancing the standards of ethical business practices. This prestigious recognition honors Marvell’s commitment to business integrity through robust ethics, compliance, sustainability and governance programs.

    Marvell is one of only five honorees in the semiconductor industry. This year’s list recognized 138 honorees across 17 countries and 40 industries, including 19 first-time recipients.

  • March 17, 2026

    Structera S: Scaling the AI Memory Wall with CXL Switching

    By Jianping Jiang, Head of Product Marketing, CXL Switch, Marvell

    The AI memory wall—the widening gap between the memory capacity and bandwidth that AI infrastructure demands and the amount that conventional memory architectures can deliver—is growing at an alarming pace.

    And the consequences are getting increasingly ominous for data center operators and their customers: idle XPUs, underutilized equipment, longer processing times, higher costs, and ultimately a lower return on investment. Meanwhile, memory—already second only to GPUs in data center semiconductor spend1—continues to soar in price.

    The Marvell® Structera™ S family of Compute Express Link (CXL) switches scales the memory wall by providing a pathway for adding terabytes of shareable memory to infrastructure and dynamically allocating bandwidth and capacity to boost utilization and application performance. CXL switches don’t just expand memory bandwidth and capacity; they enable data center operators to use them more wisely too.

    Structera S is the successor to the groundbreaking Apollo line of CXL switches developed by XConn Technologies, now part of Marvell. Structera S 20256 for PCIe Gen 5.0/CXL 2.0 (previously the XConn Apollo I) became the first commercially available CXL switch upon its release last year.

    Marvell is expanding the family with Structera S 30260 for PCIe 6.0/CXL 3.x. Structera S 30260 supports 16 or 32 CPUs or GPUs over 260 lanes, with up to 48TB of shared memory and 4TB/second of cumulative bandwidth. Marvell is showcasing Structera S 30260 in a live demonstration this week at OFC 2026 and plans to begin sampling to customers in 3Q 2026.

  • March 17, 2026

    The Next Step for PCIe: Scale-up Fabrics for AI

    By Krishna Mallampati, Senior Director of Product Marketing, Data Center Switching, Marvell

    Since its introduction in 2004, PCIe® has become the most popular interconnect for low-latency chip-to-chip connections. From its humble beginnings in fan-out interconnects, PCIe has been integrated into AI and cloud servers, JBOF storage systems, automotive ADAS systems, industrial automation, PCs, and other platforms.

    Scale-up AI servers—which can contain hundreds of processors spread over multiple racks—represent the next logical step for PCIe. Although far larger than today’s single-chassis AI servers, scale-up servers demand the same thing from interconnect fabrics: coherent, low-latency links that enable fast, secure communication between components. PCIe’s status as a widely used standard that evolves to meet customer demands further puts it at the forefront for scale-up.

    Let’s explore the PCIe scale-up usage model and how these architectures will evolve.

    PCIe Scale-up Usage Model

  • March 17, 2026

    Marvell Honored for 1.6T Silicon Photonics Light Engine and ACC Linear Equalizers in Lightwave Innovation Reviews 2026

    By Vienna Alexander, Marketing Content Professional, Marvell

    In the 13th annual Lightwave Innovation Reviews, Marvell received awards for two of its optical connectivity products: its active copper cable (ACC) linear equalizers and its 1.6T silicon photonics light engine. Both products earned Outstanding honoree status with a score of 4.0 on the 5.0 scale, a rating defined as an excellent product whose technical features and performance provide clear, substantial benefits.

    A panel of independent judges evaluated optical communications and broadband designs to showcase the most innovative products, technologies, and programs with a significant impact on the industry.
