We’re Building the Future of Data Infrastructure

Latest Marvell Blog Articles

  • September 25, 2024

    Marvell COLORZ 800 Named Most Innovative Product at ECOC 2024

    By Michael Kanellos, Head of Influencer Relations, Marvell

    With AI computing and cloud data centers requiring unprecedented levels of performance and power, Marvell is leading the way with transformative optical interconnect solutions for accelerated infrastructure to meet the rising demand for network bandwidth.

    At the ECOC 2024 Exhibition Industry Awards event, Marvell received the Most Innovative Pluggable Transceiver/Co-Packaged Module Award for the Marvell® COLORZ® 800 family. Launched in 2020 for ECOC’s 25th anniversary, the ECOC Exhibition Industry Awards spotlight innovation in optical communications, transport, and photonic technologies. This recognition highlights the company’s innovations in ZR/ZR+ technology for accelerated infrastructure and demonstrates its critical role in driving cloud and AI workloads.

  • September 22, 2024

    Five Things to Know About the Future of Long Distance Optics

    By Michael Kanellos, Head of Influencer Relations, Marvell

    Coherent optical digital signal processors (DSPs) are the long-haul truckers of the communications world. The chips are essential ingredients in the 600+ subsea Internet cables that crisscross the oceans and in the extended geographic links weaving together telecommunications networks and clouds.

    One of the most critical trends in long-distance communications has been the shift from large, rack-scale transport boxes running embedded DSPs, often from the same vendor, to pluggable modules based on standardized form factors, running DSPs from silicon suppliers tuned to the power limits of those modules.

    With the advent of 800G ZR/ZR+ modules, the market arrives at another turning point. Here’s what you need to know. 


    It’s the Magic of Modularity

    PCs, smartphones, solar panels and other technologies that experienced rapid adoption had one thing in common: general agreement on the key ingredients. Because products were built around select components, accepted standards and modular form factors, an ecosystem of suppliers sprouted. For customers, that meant fewer shortages, lower prices and accelerated innovation.

    The same holds true of pluggable coherent modules. 100 Gbps coherent modules based on the ZR specification debuted in 2017. The modules could deliver data over distances of approximately 80 kilometers and consumed approximately 4.5 watts per 100G of data delivered. Microsoft became an early adopter, using the modules to build a mesh of metro data centers.

    Flash forward to 2020. Power per 100G dropped to 4 watts and distance exploded: 120-kilometer connections became possible with modules based on the ZR standard, and 400-kilometer connections with the ZR+ standard. (An organization called OIF maintains the ZR standard; ZR+ is controlled by OpenROADM. Module makers often produce both varieties. The main difference between the two is the amplifier: the DSPs, number of channels and form factors are the same.)
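    To make the "power per 100G" metric concrete, here is a minimal sketch that tabulates total module power for each generation described above. The figures are the illustrative numbers cited in this article, not exact product specifications, and the generation names are shorthand labels.

    ```python
    # Illustrative comparison of pluggable coherent module generations,
    # using the approximate figures cited in the article above.

    def module_power_watts(line_rate_gbps: int, watts_per_100g: float) -> float:
        """Total module power, given line rate and power per 100G of throughput."""
        return (line_rate_gbps / 100) * watts_per_100g

    generations = [
        # (label, line rate in Gbps, watts per 100G, approximate reach in km)
        ("100G ZR (2017)", 100, 4.5, 80),
        ("400ZR (2020)", 400, 4.0, 120),
        ("400ZR+ (2020)", 400, 4.0, 400),
    ]

    for label, rate, w_per_100g, reach_km in generations:
        total_w = module_power_watts(rate, w_per_100g)
        print(f"{label}: {total_w:.1f} W total, ~{reach_km} km reach")
    ```

    The takeaway: even as line rates quadrupled from 100G to 400G, power per 100G fell, so a 400ZR module draws on the order of 16 W total while fitting the power envelope of a standard pluggable form factor.
    
    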

    The market responded. 400ZR/ZR+ was adopted more rapidly than any other technology in optical history, according to Cignal AI principal analyst Scott Wilkinson.

    “It opened the floodgates to what you could do with coherent technology if you put it in the right form factor,” he said during a recent webinar.

  • September 18, 2024

    Remembering Sehat Sutardja, Marvell Co-founder

    By Michael Kanellos, Head of Influencer Relations, Marvell

    Marvell co-founder, Sehat Sutardja, was a visionary leader, brilliant engineer, and a cherished colleague and friend to many at Marvell.

    Sehat’s journey began in Jakarta, Indonesia where he would build Van de Graaf generators and other devices with spare parts from his parents’ auto parts store. By 13, he was already a certified radio repair technician, showcasing his innate talent and curiosity. This early interest led him to pursue higher education in the United States, where he earned his bachelor’s degree in electrical engineering from Iowa State University, followed by a master’s and PhD in electrical engineering from the University of California, Berkeley.

    Stephen Lewis, now a professor of electrical and computer engineering at UC Davis, described Sehat as a perfectionist in an article for IEEE Spectrum. As students, they were building analog-to-digital converters. The traditional way to make them involved using two capacitors, one twice the size of the other. “He figured out a way to do it with two identical capacitors, increasing the amplifier speed by increasing its feedback. We had a solution that worked, but he kept digging until he found a better way to do it.”

    In 1995, Sehat, his wife Weili Dai, and Sehat’s brother Pantas Sutardja founded Marvell Technology around a kitchen table. They chose the name Marvell because they wanted to build a company that could create ‘marvelous’ devices. The first product was a specialized read channel for hard drives that could be produced completely in silicon. Conventional wisdom held that the approach wouldn’t work, Sehat told students during a lecture at Berkeley in 2014. The device, however, reduced power consumption and production cost while elevating performance. Marvell soon became a trusted partner to many of the world’s leading technology companies.

    As an inventor and co-inventor, Sehat held over 440 patents. He was recognized as the Inventor of the Year by the Silicon Valley Intellectual Property Law Association and named a Fellow of IEEE. He also received the Indonesian Diaspora Lifetime Achievement Award for Global Pioneering and Innovation and frequently spoke at events such as the International Solid-State Circuits Conference about the future of semiconductor design and computing.

    Beyond his professional accomplishments, Sehat was known for his humility, kindness, and generosity. He was a mentor to many, always willing to share his knowledge and insights. The Marvell team is grateful for his contributions and the legacy he leaves behind through his co-founding of our company.

  • July 11, 2024

    Bringing Payments to the Cloud with FIPS Certified LiquidSecurity® HSMs

    By Bill Hagerstrand, Director, Security Business, Marvell

    Payment-specific Hardware Security Modules (HSMs)—dedicated server appliances that perform the security functions for credit card transactions and the like—have been around for decades, and little has changed with regard to form factors, custom APIs, and “old-school” physical user interfaces via Key Loading Devices (KLDs) and smart cards. Payment-specific HSMs represent 40% of the overall HSM TAM (Total Available Market), according to ABI Research.

    The first HSM was built for the financial market back in the early 1970s. Since then, however, HSMs have become the de facto standard for General-Purpose (GP) use cases like database encryption and PKI. This growth has made GP applications 60% of the overall HSM TAM. Unlike Payment HSMs, where most deployments are 1U server form factors, GP HSMs have migrated to 1U, PCIe card, USB, and now semiconductor chip form factors to meet much broader use cases.

    HSM vendors that offer both Payment and GP HSMs have typically opted to split their fleets: Payment-specific HSMs that are PCI PTS HSM certified for payments, and GP HSMs that are NIST FIPS 140-2/3 certified. Consider a financial institution that is government-mandated to deploy a fleet of Payment HSMs for processing payment transactions, but that also has a database of Personally Identifiable Information (PII) that must be encrypted to meet the General Data Protection Regulation (GDPR) or the California Consumer Privacy Act (CCPA). It would also need to deploy a separate fleet of GP HSMs—two sets of hardware, two sets of software, and two operational teams to manage them. The associated CapEx/OpEx spending is accordingly significant.

    For Cloud Service Providers (CSPs), the hurdle was insurmountable, forcing many to deploy dedicated bare-metal 1U servers to offer payment services in the cloud. The same restrictions once imposed on financial institutions were now making their way to CSPs. This deployment model also runs contrary to why CSPs have succeeded in the past: offering competitively priced services on demand over shared resources.

  • June 18, 2024

    Custom Compute in the AI Era

    This article is the final installment in a series based on talks delivered at Accelerated Infrastructure for the AI Era, a one-day symposium held by Marvell in April 2024.

    AI demands are pushing the limits of semiconductor technology, and hyperscale operators are at the forefront of adoption—they develop and deploy leading-edge technology that increases compute capacity. These large operators seek to optimize performance while simultaneously lowering total cost of ownership (TCO). With billions of dollars on the line, many have turned to custom silicon to meet their TCO and compute performance objectives.

    But building a custom compute solution is no small matter. Doing so requires a large IP portfolio, significant R&D scale and decades of experience to create the mix of ingredients that make up custom AI silicon. Today, Marvell is partnering with hyperscale operators to deliver custom compute silicon that’s enabling their AI growth trajectories.

    Why are hyperscale operators turning to custom compute?

    Hyperscale operators have always been focused on maximizing both performance and efficiency, but new demands from AI applications have amplified the pressure. According to Raghib Hussain, president of products and technologies at Marvell, “Every hyperscaler is focused on optimizing every aspect of their platform because the order of magnitude of impact is much, much higher than before. They are not only achieving the highest performance, but also saving billions of dollars.”

    With multiple business models in the cloud, including internal apps, infrastructure-as-a-service (IaaS), and software-as-a-service (SaaS)—the latter of which is the fastest-growing market thanks to generative AI—hyperscale operators are constantly seeking ways to improve their total cost of ownership. Custom compute allows them to do just that. Operators are first adopting custom compute platforms for their mass-scale internal applications, such as search and their own SaaS applications. Next up for greater custom adoption will be third-party SaaS and IaaS, where the operator offers their own custom compute as an alternative to merchant options.

    Progression of custom silicon adoption in hyperscale data centers.
