Latest Articles

December 7th, 2021

Optical Technologies for 5G Access Networks

By Matt Bolig, Director, Product Marketing, Networking Interconnect, Marvell

There’s been a lot written about 5G wireless networks in recent years. It’s easy to see why; 5G technology supports game-changing applications like autonomous driving and smart city infrastructure. Infrastructure investment to bring this new reality to fruition will take many years and hundreds of billions of dollars globally, as Figure 1 below illustrates.

Figure 1: Cumulative Global 5G RAN Capex in $B (source: Dell’Oro, July 2021)

When considering where capital is invested in 5G, one underappreciated aspect is just how much wired infrastructure is required to move massive amounts of data through these wireless networks. 

Read More

December 6th, 2021

Marvell and Ingrasys Collaborate to Power Ceph Cluster with EBOF in Data Centers

By Khurram Malik, Senior Manager, Technical Marketing, Marvell

A massive amount of data is being generated at the edge, in the data center and in the cloud, driving scale-out Software-Defined Storage (SDS), which, in turn, is enabling the industry to modernize data centers for large-scale deployments. Ceph is an open-source, distributed object storage and massively scalable SDS platform, contributed to by a wide range of major high-performance computing (HPC) and storage vendors. Ceph BlueStore back-end storage removes the Ceph cluster performance bottleneck by allowing users to store objects directly on raw block devices and bypass the file system layer, which is especially critical in boosting the adoption of NVMe SSDs in the Ceph cluster. A Ceph cluster with EBOF provides a scalable, high-performance and cost-optimized solution and is a perfect fit for many HPC applications. Traditional data storage technology leverages special-purpose compute, networking, and storage hardware to optimize performance and requires proprietary software for management and administration. As a result, IT organizations struggle to scale out, and deploying petabyte- or exabyte-scale storage becomes infeasible from a CAPEX and OPEX perspective.
Ingrasys (a subsidiary of Foxconn) is collaborating with Marvell to introduce an Ethernet Bunch of Flash (EBOF) storage solution that truly enables scale-out architecture for data center deployments. The EBOF architecture disaggregates storage from compute, provides limitless scalability and better utilization of NVMe SSDs, and deploys single-ported NVMe SSDs in a high-availability, enclosure-level configuration with no single point of failure.

Power Ceph Cluster with EBOF in Data Centers

Ceph is deployed on commodity hardware and built on multi-petabyte storage clusters. It is highly flexible due to its distributed nature. Using EBOF in a Ceph cluster allows storage capacity to scale up and scale out at an optimized cost and facilitates high-bandwidth utilization of the SSDs. A typical rack-level Ceph solution includes a networking switch for client and cluster connectivity; a minimum of three monitor nodes per cluster for high availability and resiliency; and Object Storage Daemon (OSD) hosts for data storage, replication, and data recovery operations. Ceph traditionally recommends a minimum of three replicas, distributed so that copies of the data are stored on different storage nodes, but this results in lower usable capacity and consumes more bandwidth. Another challenge is that data redundancy and replication are compute-intensive and add significant latency. To overcome these challenges, Ingrasys has introduced a more efficient Ceph cluster rack developed with management software – Ingrasys Composable Disaggregate Infrastructure (CDI) Director.
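To make the client-to-cluster interaction concrete, here is a minimal sketch using the python-rados bindings. It assumes a reachable Ceph cluster, a standard /etc/ceph/ceph.conf with admin credentials, and an existing replicated pool named "mypool" (these names are illustrative, not taken from the Ingrasys/Marvell rack described above). Replication across OSDs, whether they sit on local NVMe drives or on an EBOF enclosure, is handled transparently by the cluster.

```python
# Minimal sketch: store and retrieve an object in a Ceph pool via librados.
# Assumes a running Ceph cluster, /etc/ceph/ceph.conf, a client keyring,
# and an existing replicated pool named "mypool" (illustrative names).
import rados

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()

try:
    ioctx = cluster.open_ioctx("mypool")   # I/O context bound to the pool
    try:
        # The write completes once the pool's replication policy
        # (e.g., three replicas on distinct OSD hosts) is satisfied.
        ioctx.write_full("sensor-readings-0001", b"payload bytes")
        data = ioctx.read("sensor-readings-0001")
        print("read back %d bytes" % len(data))
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```

The application only names a pool and an object; placement of the replicas onto OSD hosts, and therefore onto the underlying flash, is decided by the cluster itself.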

Read More

November 17th, 2021

Still the One: Why Fibre Channel Will Remain the Gold Standard for Storage Connectivity

By Todd Owens, Technical Marketing Manager, Marvell

For the past two decades, Fibre Channel has been the gold standard protocol in Storage Area Networking (SAN) and a mainstay in the data center for mission-critical workloads, providing high-availability connectivity between servers, storage arrays and backup devices. If you’re new to this market, you may have wondered whether the technology’s origin has some kind of British backstory. Actually, the spelling of “Fibre” simply reflects the fact that the protocol supports not only optical fiber but also copper cabling, though the latter is limited to much shorter distances.

During this same period, servers matured into multicore, high-performance machines with significant amounts of virtualization. Storage arrays have moved away from rotating disks to flash and NVMe storage devices that deliver higher performance at much lower latencies. New storage solutions based on hyperconverged infrastructure have come to market to allow applications to move out of the data center and closer to the edge of the network. Ethernet networks have gone from 10Mbps to 100Gbps and beyond. Given these changes, one would assume that Fibre Channel’s best days are in the past.

The reality is that Fibre Channel technology remains the gold standard for server-to-storage connectivity because it has not stood still and continues to evolve to meet the demands of today’s most advanced compute and storage environments. There are several reasons Fibre Channel is still favored over other protocols, such as Ethernet or InfiniBand, for server-to-storage connectivity.

Read More

November 9th, 2021

Network Visibility of 5G Radio Access Networks, Part 2

By Gidi Navon, Senior Principal Architect, Marvell

In part one of this blog, we discussed the ways the Radio Access Network (RAN) is dramatically changing with the introduction of 5G networks and the growing importance of network visibility for mobile network operators. In part two of this blog, we’ll delve into resource monitoring and Open RAN monitoring, and further explain how Marvell’s Prestera® switches equipped with TrackIQ visibility tools can ensure the smooth operation of the network for operators.

Resource monitoring

Monitoring latency is a critical way to identify network problems that cause latency to increase. However, by the time measured latency is high, it is already too late: the radio network has already started to degrade. The fronthaul network, in particular, is sensitive to even a small increase in latency. Therefore, mobile operators need to ensure the fronthaul segment operates below the point of congestion, thus achieving extremely low latencies.

Visibility tools for Radio Access Networks need to measure port utilization to make sure links never become congested. More precisely, they need to make sure the rate of the high-priority queues carrying latency-sensitive traffic (such as eCPRI user-plane data) stays well below the resources allocated to that traffic class.

A common mistake is measuring rates over long intervals. Imagine a traffic scenario over a 100GbE link, as shown in Figure 1, with quiet intervals and busy intervals. Checking the rate over intervals of seconds will only reveal an average port utilization of 25%, giving the false impression that the network has ample headroom, while missing the peak rate entirely. The peak rate, which is close to 100%, can easily lead to egress queue congestion, resulting in buffer buildup and higher latencies.
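As a rough illustration of why the measurement window matters, here is a small, self-contained Python sketch; the traffic pattern (a 25% duty cycle of line-rate bursts) and the window sizes are hypothetical, chosen only to mirror the scenario above. It generates a per-millisecond byte count on a 100GbE link and compares the utilization reported by coarse one-second windows with the peaks revealed by fine one-millisecond windows.

```python
# Hypothetical illustration: average vs. peak utilization on a 100GbE link,
# depending on the measurement window (traffic pattern is synthetic).
LINK_BPS = 100e9          # 100GbE line rate in bits per second
MS_PER_S = 1000

def bursty_traffic_ms(seconds, duty_cycle=0.25):
    """Bits transferred per millisecond: line-rate bursts 25% of the time."""
    samples = []
    for ms in range(seconds * MS_PER_S):
        busy = (ms % MS_PER_S) < duty_cycle * MS_PER_S   # first 250 ms of each second
        samples.append(LINK_BPS / MS_PER_S if busy else 0.0)
    return samples

def utilization(samples_ms, window_ms):
    """Utilization per window, as a fraction of line rate."""
    utils = []
    for start in range(0, len(samples_ms), window_ms):
        window = samples_ms[start:start + window_ms]
        capacity = LINK_BPS * len(window) / MS_PER_S
        utils.append(sum(window) / capacity)
    return utils

traffic = bursty_traffic_ms(seconds=10)
coarse = utilization(traffic, window_ms=1000)   # 1-second windows
fine = utilization(traffic, window_ms=1)        # 1-millisecond windows

print("1 s windows:  avg %.0f%%, peak %.0f%%" % (100 * sum(coarse) / len(coarse), 100 * max(coarse)))
print("1 ms windows: avg %.0f%%, peak %.0f%%" % (100 * sum(fine) / len(fine), 100 * max(fine)))
# Both averages come out around 25%, but only the 1 ms windows expose the
# ~100% peaks that can cause egress queue buildup and added latency.
```

In a real deployment the per-interval byte counts would come from switch telemetry rather than a synthetic generator, but the effect of the window size on what the operator sees is the same.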

Read More

October 20th, 2021

Low Power DSP-Based Transceivers for Data Center Optical Fiber Communications

By Radha Nagarajan, SVP and CTO, Optical and Copper Connectivity Business Group, Marvell

As the volume of global data continues to grow exponentially, data center operators often confront a frustrating challenge: how to process a rising tsunami of terabytes within the limits of their facility’s electrical power supply – a constraint imposed by the physical capacity of the cables that bring electric power from the grid into their data center.

Fortunately, recent innovations in optical transmission technology – specifically, in the design of optical transceivers – have yielded tremendous gains in energy efficiency, which frees up electric power for more valuable computational work.

Recently, at the invitation of the Institute of Electrical and Electronics Engineers, my Marvell colleagues Ilya Lyubomirsky, Oscar Agazzi and I published a paper detailing these technological breakthroughs, titled “Low Power DSP-based Transceivers for Data Center Optical Fiber Communications.”

Read More