By Todd Owens, Field Marketing Director, Marvell
For the past two decades, Fibre Channel has been the gold standard protocol in Storage Area Networking (SAN) and has been a mainstay in the data center for mission-critical workloads, providing high-availability connectivity between servers, storage arrays and backup devices. If you’re new to this market, you may have wondered if the technology’s origin has some kind of British backstory. Actually, the spelling of “Fibre” simply reflects the fact that the protocol supports not only optical fiber but also copper cabling, though the latter is limited to much shorter distances.
During this same period, servers matured into multicore, high-performance machines with significant amounts of virtualization. Storage arrays have moved away from rotating disks to flash and NVMe storage devices that deliver higher performance at much lower latencies. New storage solutions based on hyperconverged infrastructure have come to market to allow applications to move out of the data center and closer to the edge of the network. Ethernet networks have gone from 10Mbps to 100Gbps and beyond. Given these changes, one would assume that Fibre Channel’s best days are in the past.
The reality is that Fibre Channel technology remains the gold standard for server-to-storage connectivity because it has not stood still; it continues to evolve to meet the demands of today’s most advanced compute and storage environments. There are several reasons Fibre Channel is still favored over other protocols like Ethernet or InfiniBand for server-to-storage connectivity.
By Gidi Navon, Senior Principal Architect, Marvell
In part one of this blog, we discussed the ways the Radio Access Network (RAN) is dramatically changing with the introduction of 5G networks and the growing importance of network visibility for mobile network operators. In part two of this blog, we’ll delve into resource monitoring and Open RAN monitoring, and further explain how Marvell’s Prestera® switches equipped with TrackIQ visibility tools can ensure the smooth operation of the network for operators.
Resource monitoring
Monitoring latency is critical for identifying network problems, but by the time measured latency is high, it is already too late: the radio network has already started to degrade. The fronthaul network, in particular, is sensitive to even a small increase in latency. Therefore, mobile operators need to keep the fronthaul segment below the point of congestion, achieving extremely low latencies.
Visibility tools for Radio Access Networks need to measure the utilization of ports, making sure links never get congested. More precisely, they need to make sure the rate of the high-priority queues carrying the latency-sensitive traffic (such as eCPRI user plane data) stays well below the resources allocated to that traffic class.
A common mistake is measuring rates over long intervals. Imagine a traffic scenario over a 100GbE link, as shown in Figure 1, with quiet intervals and busy intervals. Checking the rate over intervals of several seconds reveals only the average port utilization of 25%, giving the false impression that the network has high margins, while missing the peak rate entirely. The peak rate, which is close to 100%, can easily lead to egress queue congestion, resulting in buffer buildup and higher latencies.
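To make the measurement-interval pitfall concrete, here is a minimal sketch in Python. It is purely illustrative and is not the TrackIQ API; the burst pattern, window sizes, and function names are assumptions chosen to resemble the scenario in Figure 1 (short bursts near line rate averaging out to 25% utilization).

```python
# Hypothetical illustration (not Marvell's TrackIQ API): compare long- and
# short-window utilization measurements on a bursty traffic pattern.

LINK_CAPACITY_GBPS = 100  # 100GbE link

# Simulated per-millisecond throughput: a 1 ms burst near line rate followed
# by 3 ms of near-idle traffic, repeated for one second (~25% average load).
samples_gbps = ([98.0] + [0.7] * 3) * 250

def utilization(samples, window):
    """Average utilization (%) over consecutive windows of `window` samples."""
    return [
        sum(samples[i:i + window]) / window / LINK_CAPACITY_GBPS * 100
        for i in range(0, len(samples), window)
    ]

# A one-second window reports only the ~25% average, hiding the bursts.
long_view = utilization(samples_gbps, 1000)

# A one-millisecond window exposes peaks near 100% that can congest the
# high-priority egress queue.
short_view = utilization(samples_gbps, 1)

print(f"1 s window:  max {max(long_view):.1f}% utilization")   # ~25%
print(f"1 ms window: max {max(short_view):.1f}% utilization")  # ~98%
```

The same traffic thus looks comfortably under-provisioned or nearly saturated depending solely on the measurement window, which is why peak-rate visibility at fine time granularity matters for fronthaul links.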
Copyright © 2026 Marvell, All rights reserved.