
June 2nd, 2021

Breaking Digital Logjams with NVMe

By Ian Sagan, Marvell Field Applications Engineer

and Jacqueline Nguyen, Marvell Field Marketing Manager

and Nick De Maria, Marvell Field Applications Engineer

Have you ever been stuck in bumper-to-bumper traffic? Frustrated by long checkout lines at the grocery store? Trapped at the back of a crowded plane while late for a connecting flight?

Such bottlenecks waste time, energy and money. And while today’s digital logjams might seem invisible or abstract by comparison, they are just as costly, multiplied by zettabytes of data struggling through billions of devices – a staggering volume of data that is only continuing to grow.

Fortunately, emerging Non-Volatile Memory Express technology (NVMe) can clear many of these digital logjams almost instantaneously, empowering system administrators to deliver quantum leaps in efficiency, resulting in lower latency and better performance. To the end user this means avoiding the dreaded spinning icon and getting an immediate response.

What is NVMe®?

NVMe is a purpose-built protocol for NVMe SSDs (solid state drives based on NAND Flash storage media). This set of industry-standard technical specifications, developed by a non-profit consortium called NVM Express, defines how host software communicates with flash storage across a PCI Express® bus. As noted in the recent Marvell Whitepaper on NVMe-over-Fabrics (NVMe-oF), these specs comprise:

  • NVMe, an efficient and manageable command set, with a faster queuing mechanism that scales across multi-core CPUs.
  • NVMe Management Interface (NVMe-MI), which is a command set and architecture that can use a Baseboard Management Controller to discover, monitor, and update NVMe devices.
  • NVMe-oF, the technology that extends NVMe beyond the server’s PCIe lanes and out into the network/fabric for greater scalability. Examples include Fibre Channel and Ethernet (RDMA and TCP) transports.

Why use NVMe?

The use of NVMe can radically lower latency and improve the speed of data retrieval and storage, both of which are critical in this era of burgeoning data. Specifically, NVMe:

  • Is more streamlined, with fewer command sets and fewer clock cycles per IO, making it faster and more efficient than legacy storage protocols such as SCSI, SAS and SATA
  • Is designed to deliver higher bandwidth and lower latency storage access
  • Offers more command queues and deeper command queues than the legacy protocols
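
The queueing advantage above can be made concrete with a quick back-of-the-envelope comparison. The sketch below uses the commonly cited protocol maximums – a single 32-command queue for legacy AHCI/SATA versus up to 64K queues of 64K commands each for NVMe – purely for illustration:

```python
# Rough comparison of command-queue capacity: legacy AHCI/SATA vs. NVMe.
# Figures are the commonly cited protocol maximums, for illustration only.

def total_outstanding_commands(num_queues: int, queue_depth: int) -> int:
    """Maximum commands a host can keep in flight at once."""
    return num_queues * queue_depth

ahci = total_outstanding_commands(num_queues=1, queue_depth=32)
nvme = total_outstanding_commands(num_queues=65_535, queue_depth=65_536)

print(f"AHCI/SATA: {ahci:>13,} outstanding commands")
print(f"NVMe:      {nvme:>13,} outstanding commands")
print(f"NVMe supports roughly {nvme // ahci:,}x more in-flight commands")
```

That gap in outstanding commands is what lets NVMe keep every core of a multi-core CPU issuing IO in parallel rather than contending for one shallow queue.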

Can I afford NVMe?

Is NVMe, as an emerging standard, too expensive for many system administrators? No. While it’s true that breakthrough technology often costs more at the outset, costs fall as demand rises and as administrators gain deeper insights into the total cost of ownership in both client and enterprise applications.

Total cost of ownership is critical, because even as IT budgets continue to shrink, the needs of today’s workforce continue to grow. So administrators need to consider:

  • When is it better to patch a datacenter than undertake a full refresh?
  • What are the tradeoffs between familiarity, accessibility, and breakthrough performance?
  • What is the transaction cost of changing to new protocols?

The bottom line is that all Flash Storage arrays are becoming more mainstream; more cost-effective midrange storage is becoming widely available; and NVMe adoption for server and storage manufacturers is becoming the new standard. Today, NVMe is commonly used as a caching tier for accelerating applications’ access to data.

And as more and more administrators embrace NVMe-oF technology, including Fibre Channel (FC-NVMe) and Ethernet (NVMe/RoCE or NVMe/TCP), the advantages for users multiply. Ultimately, we all want our employees and customers to be happy, and NVMe helps achieve that.

What components do I need to implement NVMe?

For administrators considering a shift to NVMe, two major categories of solutions exist:

  1. Hyper Converged Infrastructure (HCI) – These include solutions like VMware vSAN, Microsoft AzureStack HCI, Nutanix and others. The major components needed for success are:
    • A modern server with enough PCIe interfaces to host local NVMe drives for caching and/or capacity
    • Marvell® FastLinQ® 10/25GbE NICs with Universal RDMA (RoCEv2 and iWARP) for high-speed intercluster network connectivity
  2. External Block Based Storage (All Flash Array) – This solution delivers disaggregated storage where multiple applications access pools of NVMe for application acceleration
    • A modern server, typically virtualized and hosting multiple applications that need access to high speed storage
    • Marvell QLogic® FC HBAs with concurrent FC-SCSI and FC-NVMe capabilities – all Enhanced 16GFC and 32GFC HBAs (or)
    • Marvell FastLinQ 10/25GbE NICs with NVMe/RDMA or NVMe/TCP capabilities
    • Supported Storage Array with All-Flash NVMe, several available in the market
    • Latest operating systems with support for NVMe over Fabrics, such as Linux and VMware ESXi 7.0 and later

Why administrators should embrace NVMe today

Storage evolved from HDDs to SSDs, and NVMe now drives further optimization of latency, performance and CPU utilization, improving overall application responsiveness. This will help drive another storage shift over the next five years. So with data demands growing exponentially and user expectations rising too, there is no better time to future-proof your storage than now. After all, who likes waiting in line?

For early adopters, NVMe and NVMe-oF deliver immediate benefits – dramatic savings in time, energy and total cost of ownership – paying dividends for years to come.


April 29th, 2021

Back to the Future – Automotive Networks Running at Speeds of 10Gbps

By Amir Bar-Niv, VP of Marketing, Automotive Business Unit, Marvell

In the classic 1980s “Back to the Future” movie trilogy, Doc Brown – inventor of the DeLorean time machine – declares that “your future is whatever you make it, so make it a good one.” At Marvell, engineers are doing just that by accelerating automotive Ethernet capabilities: Earlier this week, Marvell announced the latest addition to its automotive products portfolio – the 88Q4346 802.3ch-based multi-gig automotive Ethernet PHY.

This technology addresses three emerging automotive trends requiring multi-gig Ethernet speeds, including:

  1. The increasing integration of high-resolution cameras and sensors
  2. Growing utilization of powerful 5G networks
  3. The rise of Zonal Architecture in car design

1. Increasing integration of high-resolution cameras and sensors

The main applications for a high-speed network in a car are cameras and displays with uncompressed video streams. Cameras are now upgrading from resolutions of 720p to 1080p, and will move to 4K and even 8K within a few years. In addition, pixel size (color depth) has increased to 16 and 24 bits per pixel, with refresh rates moving to 60 frames per second. The result? The bandwidth required to carry this high-resolution video grows correspondingly. While 1Gbps used to be sufficient, there are now many use cases that require data rates up to 10Gbps and beyond (see Figure 1).
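
The arithmetic behind those data rates is simple enough to sketch. The resolutions and parameters below mirror the examples in the text (24-bit color, 60 frames per second) and show why an uncompressed 4K stream alone overwhelms a 1Gbps link:

```python
# Back-of-the-envelope bandwidth for an uncompressed camera/display stream.
# Parameters mirror the article's examples: 24-bit color depth, 60 fps.

def raw_bandwidth_gbps(width: int, height: int, bits_per_pixel: int, fps: int) -> float:
    """Uncompressed video bandwidth in gigabits per second."""
    return width * height * bits_per_pixel * fps / 1e9

resolutions = {"720p": (1280, 720), "1080p": (1920, 1080), "4K": (3840, 2160)}
for name, (w, h) in resolutions.items():
    print(f"{name:>5}: {raw_bandwidth_gbps(w, h, 24, 60):5.2f} Gbps uncompressed")
```

A single uncompressed 4K stream at these settings comes to roughly 12 Gbps – already past 10Gbps before any other traffic shares the link.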

Infotainment and high-resolution displays are also driving the need for high-speed video over the in-vehicle backbone. At last year’s CES in Las Vegas (remember those big trade shows, before Covid?) the Byton M-Byte’s jumbo, 48-inch interior screen grabbed headlines. But the revolution in car displays is really just starting. Dashboard displays integrating touchscreens, controls and ambient lighting in complex 3D forms will be common features in future cars. Holograms and changeable surface textures are new technologies that automotive designers are integrating into display screens.

Technologies such as Apple CarPlay and Android Auto, as well as Google-based infotainment suites, effectively transform these displays into extensions of our smartphones and smart homes, including support for video-conferencing that combines high-res displays and cameras.

Similarly, side mirrors – while still useful – are seeing their roles supplanted as well, as cameras integrating images of surrounding traffic now cover a much greater field of view. Soon, augmented reality technologies, based on high-resolution OLED displays and fish-eye cameras, will also help warn us of potential trouble. Some OEMs are now planning up to 12 screens – a virtual megaplex inside the vehicle.

The upshot? The in-vehicle network (IVN) will need to support unprecedented bandwidth.

In addition to the cameras and displays, the car sensors, like radar and lidar, also rely on higher data speeds. Today, existing high-bandwidth sensor modules integrate a pre-processing IC for initial data analysis and object detection. The processed data is transmitted to the SoC/GPU over low-speed interfaces like CAN (up to 10Mbps) or 100M Ethernet links.

However, with the latest generations of high-compute power SoCs/GPUs handling the growing sensor data, the new trend is to eliminate completely the pre-processing units in the sensor module (see Figure 2). This shift will help to reduce:

  1. The cost of sensor modules
  2. The heat dissipation that pre-processing units create in the sensor, which degrades sensor performance
  3. The latency created by the pre-processing algorithms, improving detection and response time

To support this trend, these sensors are already upgrading from 100Mbps to 1Gbps Ethernet links, and within a few years are expected to require multi-gigabit rates. 

2. Growing Utilization of Powerful 5G Networks

While the COVID-19 pandemic and resulting quarantine dramatically reduced the time Americans spent commuting by car, as well as the number of cars sold, a new generation of vehicles promises to transform the automotive landscape in coming years. Not just more efficient electric vehicles, but more autonomous, connected vehicles empowered by the rollout of powerful 5G networks.

What are the advantages of 5G over current networks (like 4G)? To start with, 5G is fast. Real fast. From a peak speed perspective, 5G is 100x faster than 4G. This means almost instantaneous downloading of HD maps, movies, or over-the-air (OTA) applications and software updates.
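
That 100x figure is easy to translate into wait times. The sketch below uses hypothetical round numbers – a 100 Mbps peak for 4G, a 10 Gbps peak for 5G, and a 2 GB over-the-air update – chosen only to illustrate the scale of the difference:

```python
# Illustrative download times behind the "100x faster" claim.
# Assumed peak rates (hypothetical round numbers): ~100 Mbps for 4G, ~10 Gbps for 5G.

def download_seconds(size_gigabytes: float, rate_gbps: float) -> float:
    """Time to download a file of the given size at the given link rate."""
    return size_gigabytes * 8 / rate_gbps  # 8 bits per byte

ota_update_gb = 2.0  # hypothetical over-the-air software update size
print(f"4G: {download_seconds(ota_update_gb, 0.1):6.1f} s")
print(f"5G: {download_seconds(ota_update_gb, 10.0):6.1f} s")
```

At these assumed peaks, the same update that ties up a 4G link for minutes completes over 5G in a couple of seconds – effectively the “almost instantaneous” experience described above.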

Latency in 5G networks is also extremely low, enabling direct vehicle-to-everything (V2X) communication – with other vehicles, pedestrians, infrastructure, and the cloud (Figure 3). Until now, vehicles had to communicate via a distant server. With 5G, vehicles can communicate directly with each other, exchanging key information about accidents, road conditions and other alerts. This 5G network is critical to autonomous transportation, which is designed to be safer, more efficient, and more sustainable. It will also offer passengers a rich experience in the car and better ways to use their time while on the road. This includes using interactive applications, watching movies, playing games or preparing for meetings as easily as they do at home.

To support 5G’s high speeds and low latency, automotive 5G modems and backbone networks within the vehicle will need to support speeds of 10Gbps and higher – speeds that Marvell’s multi-gig Ethernet PHY already enables. In addition, this technology enables OEMs to leverage Ethernet’s high quality-of-service features to meet the demands of emerging vehicle-to-vehicle communication and related infrastructure.

3. The Rise of Zonal Architecture

In many vehicles, the wiring that connects the electronic and electric components is so complex and extensive that, were it laid end to end, all those cables would stretch over a mile. This cable spaghetti is also very heavy. Complexity and weight both add significant costs – in manufacturing labor and in energy consumption while driving.

Despite these drawbacks, such spaghetti is still the norm rather than the exception. That’s because many of today’s new vehicles are still built around the concept of Domain Architecture, where extensive cable harnesses connect independent domains such as Body, Telematics, ADAS, Infotainment and Powertrain directly to components that are spread all over the car. Figure 4 offers a simplified diagram of a typical harness system.

Shedding the constraints of traditional wired harnesses, the most advanced vehicle designers are shifting toward Zonal Architecture, which leverages Ethernet as the backbone protocol between the car’s different zones. To support this concept, each zone of the car includes a Zonal Gateway – a switch that utilizes Ethernet to convert and aggregate the different domains and protocols, while preserving each protocol’s desired performance and quality of service (QoS). As Figure 5 illustrates, Zonal Architecture dramatically streamlines automotive wiring requirements, reducing cable, labor and energy costs.

The shift to Zonal Architecture becomes even more powerful when augmented with in-vehicle centralized computing and storage – an approach that is just emerging in the automotive industry. With a backbone that can leverage the speed and the benefits of Ethernet (switching, QoS, security, virtualization) and the growing computing power of the latest CPUs, many OEMs are now gravitating toward a central processing architecture (see Figure 6).

Along with the central processing comes the need for central storage, to store the most relevant data in a way that is secure, fast, easy and reliable to access.

To be able to aggregate the high bandwidth data of each zone over the zonal architecture, the Ethernet backbone needs to support data speeds of 10Gbps and higher.
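
A quick aggregation sketch shows why. The per-device rates below are hypothetical, chosen to mirror the camera, radar and lidar examples earlier in the article; the point is only that one zone’s uplink adds up fast:

```python
# Sketch: why a zonal Ethernet backbone needs multi-gig uplinks.
# Per-device rates (Gbps) are hypothetical, mirroring the article's sensor examples.

zone_front_left = {
    "4K camera (compressed)": 1.0,
    "radar (raw data)": 1.0,
    "lidar (raw data)": 2.5,
    "body/control traffic": 0.01,
}

backbone_gbps = sum(zone_front_left.values())
print(f"Aggregate uplink for one zone: {backbone_gbps:.2f} Gbps")
# Several zones share the backbone, so 10Gbps and above quickly becomes necessary.
```

With several such zones feeding the same backbone – and sensors shedding their pre-processing units and sending raw data, as described above – a 10Gbps-plus backbone stops being a luxury.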

Marvell track record with Multi-gig Ethernet

Marvell was the first company to introduce a 10G Automotive Ethernet PHY, the AQV107, in 2018, supporting 2.5G, 5G and 10Gbps speeds over automotive cables. Next, Marvell supported the development of a new IEEE standard for multi-gigabit automotive Ethernet PHYs (IEEE 802.3ch), and is now sampling to customers the 88Q4346, an IEEE-compliant 10GBASE-T1 PHY.

In addition, the new line of automotive Ethernet switches that Marvell introduced in 2020 included multi-gigabit ports. The 88Q5072 is a highly integrated 11-port managed secure switch that supports 2.5Gbps and 5Gbps ports, and the 88Q6113 supports two ports of 10Gbps.

What’s next? In 2019 Marvell initiated a call-for-interest (CFI) in IEEE for a new standard for Automotive Ethernet PHY at rates “beyond 10G”. As a result, a new IEEE group (802.3cy) is currently developing a standard for 25G, 50G and 100Gbps automotive PHY.

In summary, the decade ahead will be transformative for the automobile industry, as OEMs leverage remarkable innovations in Ethernet to integrate more high-resolution cameras and sensors, tap the power of 5G networks, and implement Zonal Architecture. And with Marvell’s help, the future may arrive faster than anyone, even Doc Brown, ever expected.

April 2nd, 2021

Marvell Enables O-RAN to Help 5G Fulfill its True Potential

By Marvell, PR Team

At the most recent FierceWireless 5G Blitz Week, some of the world’s leading 5G innovators met via webinar to discuss the potential of O-RAN and challenges of the ongoing 5G rollout. In a keynote, EVP and General Manager of Marvell’s Processors Business Group Raj Singh explored the accelerating shift to O-RAN, which is an emerging open-source architecture for Radio Access Networks that enables customers to create better 5G applications by mixing and matching RAN technology from different vendors.

O-RAN architectures are compelling because they increase competition among vendors, reduce costs, and offer customers greater flexibility to combine RAN elements according to their application’s specific use cases. However, in addition to their obvious benefits, O-RAN solutions also raise operator concerns including potential challenges with integration, legacy support, interoperability and security – issues that Marvell and other companies in the Open RAN Policy Coalition are addressing through shared standards, proven solutions and innovative approaches.

As Raj noted, open RAN doesn’t change what is being processed, but where. Marvell’s O-RAN solutions address Radio Units, Distributed Units and Centralized Units, with reference hardware designs and protocol stack software implementations. He added that while the transition from RAN to O-RAN may take years, the market need is urgent, because general-purpose processors are sub-optimal for L1 functions.

In a separate roundtable discussion on O-RAN, Raj joined panelists from Rethink Technology Research, Vodafone, Facebook Connectivity, and Analog Devices to explore ways in which network operators, technology influencers and system integrators are working to expedite the availability of O-RAN in the 5G marketplace.

Panelists noted that once the O-RAN transition is complete, it will offer customers greater supply chain flexibility and diversity, better performance, more collaboration, bigger economies of scale, lower capital expenditures, and also drive further innovation.

That said, work remains in translating this vision into reality, because customers are unwilling to sacrifice familiar features, standards and security during the transition. “You can’t just say ‘oh, it’s Open-RAN, and it doesn’t have to do this, or doesn’t have to do that,’” Raj said. “You have to have the same capacity, reliability, and feature parity that exists in networks today.”

These discussions with industry leaders demonstrate the significant inroads being made in advancing the O-RAN architecture.


January 29th, 2021

Full Steam Ahead! Marvell Ethernet Device Bridge Receives Avnu Certification

By Amir Bar-Niv, VP of Marketing, Automotive Business Unit, Marvell

and John Bergen, Sr. Product Marketing Manager, Automotive Business Unit, Marvell

In the early decades of American railroad construction, competing companies laid their tracks at different widths. Such inconsistent standards drove inefficiencies, preventing the easy exchange of rolling stock from one railroad to the next, and impeding the infrastructure from coalescing into a unified national network. Only in the 1860s, when a national standard emerged – 4 feet, 8-1/2 inches – did railroads begin delivering their true, networked potential.

Some one hundred-and-sixty years later, as Marvell and its competitors race to reinvent the world’s transportation networks, universal design standards are more important than ever. Recently, Marvell’s 88Q5050 Ethernet Device Bridge became the first of its type in the automotive industry to receive Avnu certification, meeting exacting new technical standards that facilitate the exchange of information between diverse in-car networks – networks that enable today’s data-dependent vehicles to operate smoothly, safely and reliably.

Avnu, the industry alliance focused on promoting an interoperable ecosystem based on IEEE 802.1 standards for Time Sensitive Networking (TSN), issues product certifications based on testing by approved third-party labs. In this case, the 88Q5050 was tested and approved by the University of New Hampshire InterOperability Laboratory, known as UNH-IOL.

The TSN standards defined by IEEE provide a “tool box” of specifications designed to meet the networking requirements of today’s Automotive, Industrial, and Professional A/V industries. Designed to enable low-latency, time-aware networking with guaranteed delivery of time-critical data across today’s Ethernet networks, the TSN specifications encompass 17 approved standards to date, with more in development.

Issues of precise timing and low latency are critical in many applications, but especially so in today’s motor vehicles, whose drivers rely on instant and reliable feedback from cameras, blind-spot indicators, lidar, radar and other safety systems. Marvell’s product line, of which the 88Q5050 is just one device, provides automotive companies and suppliers with the means – and confidence – to know that critical data will not be competing for bandwidth, and that information will flow quickly, consistently and accurately, without interference from potentially malicious actors.

Figure 1 shows a typical automotive architecture with cameras, lidar, and radar enabled as AVB/TSN talkers, with AVB/TSN switches (such as the AVB enabled 88Q5050) used to provide guaranteed delivery of low latency, time critical sensor data to the Automotive CPUs, acting as AVB listeners, for processing.

Figure 1: Automotive Architecture

Figure 2: AVB Topology

The 88Q5050 is an 8-port Ethernet switch offering 4 fixed IEEE 100BASE-T1 ports, plus a configurable selection of 4 additional ports chosen from 1x IEEE 100BASE-T1 port, 1x IEEE 100BASE-TX port, 2x MII/RMII/RGMII ports, 1 GMII port, and 1 SGMII port. The switch offers local and remote management capabilities for easy access and configuration of the device.

The switch also employs advanced hardware security features, designed into the silicon, to prevent hacks or compromises of data streamed within the vehicle – such as through an unguarded tire pressure sensor, or any other unexpected vulnerability.

This advanced switch – an end-to-end solution – employs deep packet inspection techniques and Trusted Boot functionality to deliver the industry’s most secure automotive Ethernet switch. To further enhance security, it supports both blacklisting and whitelisting addresses on all its Ethernet ports. Guaranteed to perform at or beyond its specifications, it will open up entirely new avenues to the future, and it has already secured design wins with several leading automotive OEMs.
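
The blacklist/whitelist behavior can be illustrated with a small software sketch. The addresses and policy names below are hypothetical, and the real 88Q5050 applies these rules in hardware at line rate rather than in software – this only shows the filtering logic conceptually:

```python
# Conceptual sketch of per-source MAC filtering on an automotive Ethernet switch.
# Addresses and policies are hypothetical; the real device does this in hardware.

WHITELIST = {"aa:bb:cc:00:00:01", "aa:bb:cc:00:00:02"}  # known-good ECUs
BLACKLIST = {"de:ad:be:ef:00:01"}                        # known-bad source

def forward_frame(src_mac: str, mode: str) -> bool:
    """Return True if a frame from src_mac should be forwarded."""
    if mode == "whitelist":
        return src_mac in WHITELIST          # drop everything not explicitly trusted
    if mode == "blacklist":
        return src_mac not in BLACKLIST      # drop only known-bad sources
    raise ValueError(f"unknown filtering mode: {mode}")

print(forward_frame("aa:bb:cc:00:00:01", "whitelist"))  # True
print(forward_frame("de:ad:be:ef:00:01", "blacklist"))  # False
```

Whitelisting gives the stronger guarantee – an unknown device, like a compromised tire pressure sensor, is dropped by default – while blacklisting blocks only sources already known to be malicious.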

Figure 3: 88Q5050 Block Diagram

High universal standards, however, don’t just emerge out of thin air. They are the result of committed partnership, as stakeholders seek the efficiencies and benefits that flow from standardization, quality and reliability. In establishing America’s unified railroad gauge, leaders eventually decided to go with the existing European standard, because American engineers wanted the flexibility to import British locomotives, known for their power and reliability. But where did that European standard itself originate? After all, 4 feet 8-1/2 inches seems like an idiosyncratic, if not arbitrary, measurement.

Some historians have traced its measurement to the standard width of Roman roads and bridges, built in ancient times for chariots – vehicles whose size was limited by what an average horse could pull. [1] Like a spiderweb across Europe, the Middle East and North Africa, this Roman network comprised an estimated 50,000 miles of paved roads, connecting an empire and changing history.

Today, Marvell’s commitment to invest and advance the frontiers of automotive silicon, the standards we champion, and the networks they enable, just might change the future.

January 14th, 2021

What’s Next in System Integration and Packaging? New Approaches to Networking and Cloud Data Center Chip Design

By Wolfgang Sauter, Customer Solutions Architect - Packaging, Marvell

The continued evolution of 5G wireless infrastructure and high-performance networking is driving the semiconductor industry to unprecedented technological innovations, signaling the end of traditional scaling on Single-Chip Module (SCM) packaging. With the move to 5nm process technology and beyond, 50T Switches, 112G SerDes and other silicon design thresholds, it seems that we may have finally met the end of the road for Moore’s Law.1 The remarkable and stringent requirements coming down the pipe for next-generation wireless, compute and networking products have all created the need for more innovative approaches. So what comes next to keep up with these challenges? Novel partitioning concepts and integration at the package level are becoming game-changing strategies to address the many challenges facing these application spaces.

During the past two years, leaders in the industry have started to embrace these new approaches to modular design, partitioning and package integration. In this paper, we will look at what is driving the main application spaces and how packaging plays into next-generation system architectures, especially as it relates to networking and cloud data center chip design.

What’s Driving Main Application Spaces?

First, let’s take a look at different application spaces and how package integration is critical to enabling next-generation product solutions.

In the wireless application space, the market can be further subdivided into handheld and infrastructure devices. Handheld devices are driven by ultimate density, memory and RF integration to support power and performance requirements, while achieving reasonable consumer price points. Wireless infrastructure products in support of 5G will drive antenna arrays with RF integration and, on the baseband side, require a modular approach to enable scalable products that meet power, thermal and cost requirements in a small area.

In the data center, next-generation products will need next-node performance and power efficiency to keep up with demand. Key drivers here are the insatiable need for memory bandwidth and the switch to scalable compute systems with high chip-to-chip bandwidth.

Wired networking products already need more silicon area than can fit in a reticle, along with more bandwidth between chips and off-module. This pushes design toward larger package sizes with lower loss, as well as a huge amount of power coupled with high-bandwidth memory (HBM) integration.

The overarching trend then is to integrate more function (and therefore more silicon) into any given product. This task is especially difficult when many of the different functions don’t necessarily want to reside on the same chip. This includes: IO function, analog and RF content, and DRAM technologies. SoCs simply can’t fit all the content needed into one chip. In addition, IP schedules versus technology readiness aren’t always aligned. For instance, processors for compute applications may be better suited to move to the next node, whereas interface IP, such as SerDes, may not be ready for that next node until perhaps a year later.

How does the package play into this?

All of these requirements mean that we as semiconductor solution providers must now get “more than Moore” out of the package: we need to get more data and more functionality out of the package, while driving more cost out.

As suitable packaging solutions become increasingly complex and expensive, the need to focus on optimized architectures becomes imperative. The result is a balancing act between the cost, area and complexity of the chip versus the package. Spending more on the package may be a wise call if it helps to significantly reduce chip cost (e.g. splitting a large chip into two halves). But the opposite may be true when the package complexity starts overwhelming the product cost, which can now frequently be seen on complex 2.5D products with HBM integration. Therefore, the industry is starting to embrace new packaging and architectural concepts such as modular packages, chiplet design with chip-to-chip interfaces, or KGD integrated packages. An example of this was the announcement of the AMD Epyc 2 Rome chiplet design, which marries its 7nm Zen 2 cores with a 14nm I/O die. As articulated in the introductory review by Anton Shilov of AnandTech at the time of its announcement, “Separating CPU chiplets from the I/O die has its advantages because it enables AMD to make the CPU chiplets smaller as physical interfaces (such as DRAM and Infinity Fabric) do not scale that well with shrinks of process technology. Therefore, instead of making CPU chiplets bigger and more expensive to manufacture, AMD decided to incorporate DRAM and some other I/O into a separate chip.”

These new approaches are revolutionizing chip design as we know it. As the industry moves toward modularity, interface IP and package technology must be co-optimized. Interface requirements must be optimized for low power and high efficiency, while enabling a path to communicate with chips from other suppliers. These new packaging and systems designs must also be compatible with industry specs. The package requirements must enable lower loss in the package while also enabling higher data bandwidth (i.e. a larger package, or alternative data transfer through cables, CPO, etc.).

What’s Next for Data Center Packaging and Design?

This is the first in a two-part series about the challenges and exciting breakthroughs happening in systems integration and packaging as the industry moves beyond the traditional Moore’s Law model. In the next segment we will discuss how packaging and deep package expertise are beginning to share center stage with architecture design to create a new sweet spot for integration and next-generation modular design. We will also focus on how these new chip offerings will unleash opportunities specifically in the data center including acceleration, smartNICs, process, security and storage offload. As we embark on this new era of chip design, we will see how next-generation ASICs will help meet the expanding demands of wired networking and Cloud Data Center chip design to power the data center all the way to the network edge.

# # #

1 Moore’s Law, an observation first articulated by Gordon Moore in 1965 and revised in 1975, projected that the number of transistors in integrated circuit chips would double every two years.