
Archive for the ‘Data Center’ Category

August 31st, 2020

Arm Processors in the Data Center

By Raghib Hussain, Chief Strategy Officer and Executive Vice President, Networking and Processors Group

Last week, Marvell announced a change in our strategy for ThunderX, our Arm-based server-class processor product line. I’d like to take the opportunity to put some more context around that announcement, and our future plans in the data center market.

ThunderX is a product line that we started at Cavium, prior to our merger with Marvell in 2018. At Cavium, we had built many generations of successful processors for infrastructure applications, including our Nitrox security processor and OCTEON infrastructure processor. These processors have been deployed in the world’s most demanding data-plane applications such as firewalls, routers, SSL-acceleration, cellular base stations, and Smart NICs. Today, OCTEON is the most scalable and widely deployed multicore processor in the market.

As co-founder of Cavium, I had a strong belief that Arm-based processors also had a role to play in next generation data centers. One size simply doesn’t fit all anymore, so we started the ThunderX product line for the server market. It was a bold move, and we knew it would take significant time and investment to come to fruition. In fact, we have spent six years now building multiple generations of products, developing the ecosystem, the software, and working with customers to qualify systems for production deployment in large data centers. ThunderX2 was the industry’s first Arm-based processor capable of powering dual socket servers that could go toe-to-toe with x86-based solutions, and clearly established the performance credentials for Arm in the server market. We moved the bar higher yet again with ThunderX3, as we discussed at Hot Chips 32.

Today, we see strong ecosystem support and a significant opportunity for Arm-based processors in the data center. But the real market opportunity for server-class Arm processors is in customized solutions, optimized for the use cases at hyperscale data center operators. This should be no surprise, as the power of the Arm architecture has always been in its ability to be integrated into highly optimized designs tailored for specific use cases, and we see hyperscale datacenter applications as no different.

Our rich IP portfolio and decades of processor expertise with Nitrox, OCTEON, and ThunderX, combined with our new custom ASIC capability and investment in the latest TSMC 5nm process node, put Marvell in a unique position to address this market opportunity. So to us, this market-driven change just makes sense. We look forward to partnering with our customers and helping to deliver highly optimized solutions tailored to their unique needs.

August 27th, 2020

How to Reap the Benefits of NVMe over Fabric in 2020

By Todd Owens, Technical Marketing Manager, Marvell

As native Non-volatile Memory Express (NVMe®) shared-storage arrays continue enhancing our ability to store and access more information faster across a much bigger network, customers of all sizes – enterprise, mid-market and SMBs – confront a common question: what is required to take advantage of this quantum leap forward in speed and capacity?

Of course, NVMe technology itself is not new, and is commonly found in laptops, servers and enterprise storage arrays. NVMe provides an efficient command set that is specific to memory-based storage, is designed to run over PCIe 3.0 or PCIe 4.0 bus architectures for increased performance, and — with 64,000 command queues and 64,000 commands per queue — offers far greater scalability than other storage protocols.
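To put that scalability claim in perspective, here is a quick back-of-the-envelope comparison. The NVMe figures are the ones cited above; the AHCI/SATA figures (a single queue, 32 commands deep) are the commonly cited limits for that interface and are assumptions here, not drawn from this article.

```python
# queue_depth_compare.py -- back-of-the-envelope scalability comparison
# NVMe figures come from the article; AHCI/SATA figures (1 queue x 32 commands)
# are commonly cited limits and are assumed for illustration.
nvme_queues, nvme_cmds_per_queue = 64_000, 64_000
ahci_queues, ahci_cmds_per_queue = 1, 32

nvme_outstanding = nvme_queues * nvme_cmds_per_queue   # 4,096,000,000 commands in flight
ahci_outstanding = ahci_queues * ahci_cmds_per_queue   # 32 commands in flight

print(f"NVMe:      {nvme_outstanding:,} outstanding commands")
print(f"AHCI/SATA: {ahci_outstanding:,} outstanding commands")
print(f"Ratio:     {nvme_outstanding // ahci_outstanding:,}x")
```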


Unfortunately, most of the NVMe in use today is held captive in the system in which it is installed. While there are a few storage vendors offering NVMe arrays on the market today, the vast majority of enterprise datacenter and mid-market customers are still using traditional storage area networks, running SCSI protocol over either Fibre Channel or Ethernet Storage Area Networks (SAN).

The newest storage networks, however, will be enabled by what we call NVMe over Fabric (NVMe-oF) networks. As with SCSI today, NVMe-oF will offer users a choice of transport protocols. Today, there are three standard protocols that are likely to make significant headway in the marketplace. These include:

  • NVMe over Fibre Channel (FC-NVMe)
  • NVMe over RoCE RDMA (NVMe/RoCE)
  • NVMe over TCP (NVMe/TCP)

If NVMe over Fabrics is to achieve its true potential, however, three major elements need to align. First, users will need an NVMe-capable storage network infrastructure in place. Second, all of the major operating system (O/S) vendors will need to provide support for NVMe-oF. Third, customers will need disk array systems that feature native NVMe. Let’s look at each of these in order.

  1. NVMe Storage Network Infrastructure

In addition to Marvell, several leading network and SAN connectivity vendors support one or more varieties of NVMe-oF infrastructure today. This storage network infrastructure (also called the storage fabric) is made up of two main components: the host adapter that provides server connectivity to the storage fabric, and the switch infrastructure that provides all the traffic routing, monitoring and congestion management.

For FC-NVMe, today’s enhanced 16Gb Fibre Channel (FC) host bus adapters (HBA) and 32Gb FC HBAs already support FC-NVMe. This includes the Marvell® QLogic® 2690 series Enhanced 16GFC, 2740 series 32GFC and 2770 Series Enhanced 32GFC HBAs.

On the Fibre Channel switch side, no significant changes are needed to transition from SCSI-based connectivity to NVMe technology, as the FC switch is agnostic about the payload data. The job of the FC switch is simply to route FC frames from point to point and deliver them, in order, with the lowest possible latency. That means any 16GFC or greater FC switch is fully FC-NVMe compatible.

A key decision regarding FC-NVMe infrastructure, however, is whether or not to support both legacy SCSI and next-generation NVMe protocols simultaneously. When customers eventually deploy new NVMe-based storage arrays (and many will over the next three years), they are not going to simply discard their existing SCSI-based systems. In most cases, customers will want individual ports on individual server HBAs that can communicate using both SCSI and NVMe, concurrently. Fortunately, Marvell’s QLogic 16GFC/32GFC portfolio does support concurrent SCSI and NVMe, all with the same firmware and a single driver. This use of a single driver greatly reduces complexity compared to alternative solutions, which typically require two (one for FC running SCSI and another for FC-NVMe).

If we look at Ethernet, which is the other popular transport protocol for storage networks, there is one option for NVMe-oF connectivity today and a second option on the horizon. Currently, customers can already deploy NVMe/RoCE infrastructure to support NVMe connectivity to shared storage. This requires RoCE RDMA-enabled Ethernet adapters in the host, and Ethernet switching that is configured to support a lossless Ethernet environment. There are a variety of 10/25/50/100GbE network adapters on the market today that support RoCE RDMA, including the Marvell FastLinQ® 41000 Series and the 45000 Series adapters. 

On the switching side, most 10/25/100GbE switches that have shipped in the past 2-3 years support data center bridging (DCB) and priority flow control (PFC), and can provide the lossless Ethernet environment needed for a low-latency, high-performance NVMe/RoCE fabric.

While customers may have to reconfigure their networks to enable these features and set up the lossless fabric, these features will likely be supported in any newer Ethernet switch or director. One point of caution: with lossless Ethernet networks, scalability is typically limited to only 1 or 2 hops. For high scalability environments, consider alternative approaches to the NVMe storage fabric.

One such alternative is NVMe/TCP. This is a relatively new protocol (NVM Express Group ratification in late 2018), and as such is not widely available today. However, the advantage of NVMe/TCP is that it runs on today’s TCP stack, leveraging TCP’s congestion control mechanisms. That means there’s no need for a tuned environment (like that required with NVMe/RoCE), and NVMe/TCP can scale right along with your network. Think of NVMe/TCP in the same way as you do iSCSI today. Like iSCSI, NVMe/TCP will provide good performance, work with existing infrastructure, and be highly scalable. For those customers seeking the best mix of performance and ease of implementation, NVMe/TCP will be the best bet.

Because there is limited operating system (O/S) support for NVMe/TCP (more on this below), I/O vendors are not currently shipping firmware and drivers that support NVMe/TCP. But a few, like Marvell, have adapters that, from a hardware standpoint, are NVMe/TCP-ready; all that will be required is a firmware update in the future to enable the functionality. Notably, Marvell will support NVMe over TCP with full hardware offload on its FastLinQ adapters in the future. This will enable our NVMe/TCP adapters to deliver high performance and low latency that rivals NVMe/RoCE implementations.
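As a concrete illustration of that iSCSI-like simplicity, the minimal sketch below shows how an NVMe/TCP namespace might be attached from a Linux host whose kernel and nvme-cli build already include NVMe/TCP support (which, per the O/S discussion below, is still limited today). The target address, port and NQN are hypothetical placeholders, not values from this article.

```python
# attach_nvme_tcp.py -- minimal sketch: attach an NVMe/TCP namespace with nvme-cli
# Assumes a Linux host with NVMe/TCP kernel support and the nvme-cli utility installed.
import subprocess

TARGET_ADDR = "192.168.10.50"                      # hypothetical storage target IP
TARGET_PORT = "4420"                               # default NVMe-oF service port
TARGET_NQN = "nqn.2020-08.com.example:nvme-pool1"  # hypothetical subsystem NQN

def run(cmd):
    """Run a command, echo it, and raise if it fails."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Load the NVMe/TCP host transport module.
run(["modprobe", "nvme-tcp"])

# Connect to the remote subsystem over plain TCP -- no lossless-Ethernet tuning required.
run(["nvme", "connect", "-t", "tcp",
     "-a", TARGET_ADDR, "-s", TARGET_PORT, "-n", TARGET_NQN])

# The remote namespaces now appear as local /dev/nvmeXnY block devices.
run(["nvme", "list"])
```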

  2. Operating System Support

While it’s great that there is already infrastructure to support NVMe-oF implementations, that’s only the first part of the equation. Next comes O/S support. When it comes to support for NVMe-oF, the major O/S vendors are all in different places – here is where things stand as of August 2020. The major Linux distributions from RHEL and SUSE support both FC-NVMe and NVMe/RoCE and have limited support for NVMe/TCP. VMware, beginning with ESXi 7.0, supports both FC-NVMe and NVMe/RoCE but does not yet support NVMe/TCP. Microsoft Windows Server currently uses the SMB Direct network protocol and offers no support for any NVMe-oF technology today.

With VMware ESXi 7.0, be aware of a couple of caveats: VMware does not currently support FC-NVMe or NVMe/RoCE in vSAN or with vVols implementations. However, support for these configurations, along with support for NVMe/TCP, is expected in future releases.

  3. Storage Array Support

A few storage array vendors have released mid-range and enterprise-class storage arrays that are NVMe-native. NetApp sells arrays that support both NVMe/RoCE and FC-NVMe, and they are available today. Pure Storage offers NVMe arrays that support NVMe/RoCE, with plans to support FC-NVMe and NVMe/TCP in the future. In late 2019, Dell EMC introduced its PowerMax line of flash storage that supports FC-NVMe. This year and next, other storage vendors will be bringing arrays to market that support both NVMe/RoCE and FC-NVMe. We expect storage arrays that support NVMe/TCP to become available in the same time frame.

Future-proof your investments by anticipating NVMe-oF tomorrow

Altogether, we are not too far away from having all the elements in place to make NVMe-oF a reality in the data center. If you expect the servers you are deploying today to operate for the next five years, there is no doubt they will need to connect to NVMe-native storage during that time. So plan ahead.

The key from an I/O and infrastructure perspective is to make sure you are laying the groundwork today to be able to implement NVMe-oF tomorrow. Whether that’s Fibre Channel or Ethernet, customers should be deploying I/O technology that supports NVMe-oF today. Specifically, that means deploying 16GFC enhanced or 32GFC HBAs and switching infrastructure for Fibre Channel SAN connectivity. This includes the Marvell QLogic 2690, 2740 or 2770-series Fibre Channel HBAs. For Ethernet, this includes Marvell’s FastLinQ 41000/45000 series Ethernet adapter technology.

These advances represent a big leap forward and will deliver great benefits to customers. The sooner we build industry consensus around the leading protocols, the faster these benefits can be realized.

For more information on Marvell Fibre Channel and Ethernet technology, go to www.marvell.com. For technology specific to our OEM customer servers and storage, go to www.marvell.com/hpe or www.marvell.com/dell.

August 20th, 2020

Navigating Product Name Changes for Marvell Ethernet Adapters at HPE

By Todd Owens, Technical Marketing Manager, Marvell

Hewlett Packard Enterprise (HPE) recently updated its product naming protocol for the Ethernet adapters in its HPE ProLiant and HPE Apollo servers. Its new approach is to include the ASIC vendor’s name in the HPE adapter’s product name. This commonsense approach eliminates the need for model number decoder rings on the part of Channel Partners and the HPE Field team and provides everyone with more visibility and clarity. This change also aligns more closely with the approach HPE has been taking with its “Open” adapters on HPE ProLiant Gen10 Plus servers. All of this is good news for everyone in the server sales ecosystem, including the end user. The products’ core SKU numbers remain the same, too, which is also good.

For HPE Ethernet adapters for HPE ProLiant Gen10 Plus and HPE Apollo Gen10 Plus servers, the name changes were fairly basic. Under this new naming protocol, HPE moved the name of the adapter’s manufacturer to the front and added “for HPE” to the end. For example, what was previously named “HPE Ethernet 10/25Gb 2-port SFP28 QL41232HLCU Adapter” is now “Marvell QL41232HLCU Ethernet 10/25Gb 2-port SFP28 Adapter for HPE”. The model number, QL41232HLCU, did not change.

The table below shows the new naming for the HPE adapters using Marvell FastLinQ I/O technology and makes it very easy to match up ASIC technology, connection type and form factor across the different products.

HPE SKU | ORIGINAL HPE MODEL | NEW SKU DESCRIPTION

867707-B21 | 521T | HPE Ethernet 10Gb 2-port BASE-T QL41401-A2G Adapter
P08446-B21 | 524SFP+ | HPE Ethernet 10Gb 2-port SFP+ QL41401-A2G Adapter
652503-B21 | 530SFP+ | HPE Ethernet 10Gb 2-port SFP+ 57810S Adapter
656596-B21 | 530T | HPE Ethernet 10Gb 2-port BASE-T 57810S Adapter
700759-B21 | 533FLR-T | HPE FlexFabric 10Gb 2-port FLR-T 57810S Adapter
700751-B21 | 534FLR-SFP+ | HPE FlexFabric 10Gb 2-port FLR-SFP+ 57810S Adapter
764302-B21 | 536FLR-T | HPE FlexFabric 10Gb 4-port FLR-T 57840S Adapter
867328-B21 | 621SFP28 | HPE Ethernet 10/25Gb 2-port SFP28 QL41401-A2G Adapter
867334-B21 | 622FLR-SFP28 | HPE Ethernet 10/25Gb 2-port FLR-SFP28 QL41401-A2G CNA


Inevitably, there are a few challenges with the new approach, especially for the adapters used in Gen10 servers. The first is that the firmware in the adapters is not changing. So, when a customer boots up the server, the old model information, such as 524SFP+, will be displayed on the system management screens. The same applies to information passed from the adapter to other management software, such as HPE Network Orchestrator. However, in HPE’s configuration tools – One Config Advanced (OCA) – only the new names and model numbers appear, with no mention of the original numbers. This could create confusion when you’re configuring a system and it boots up, displaying a different model number than the one you are actually using.

Additionally, it is going to take some time for operating system vendors like VMware and Microsoft to update their hardware compatibility listings. Today, you can go to the VMware Compatibility Guide (VCG) and search on a 621SFP28 with no problem. But search on a QL41401 or QL41401-A2G, and you will come up empty. HPE is also working on updating its QuickSpec documents with the new naming, and that will take some time as well.

So, while the model number decoder rings are no longer required, you will need easy-to-access cross-references to match the new names to the old models. To support you on this, we have updated all our key collateral for HPE-specific Marvell® FastLinQ® Ethernet adapters on the Marvell HPE Microsite. These documents were updated to include not only the new product names that HPE has implemented, but the original model number references as well.

Here are some links to the updated collateral:

Why Marvell FastLinQ for HPE? First, we are a strategic supplier to HPE for I/O technology. In fact, HPE Synergy I/O is based on Marvell FastLinQ technology. Value-add features like storage offload for iSCSI and FCoE and network partitioning are key to enabling HPE to deliver composable network connectivity on their flagship blade solutions.

In addition to storage offload, Marvell provides HPE with unique features such as Universal RDMA and SmartAN® technology. Universal RDMA provides the HPE customer with the ability to run either RoCE RDMA or iWARP RDMA protocols on a single adapter. So, as their needs for implementing RDMA protocols change, there is no need to change adapters. SmartAN technology automatically configures the adapter ports for the proper 10GbE or 25GbE bandwidth, and – based on the type of switch the adapter is connected to and the physical cabling connection – adjusts the forward error correction settings. FastLinQ adapters also support a variety of other offloads including SR-IOV, DPDK and tunneling. This minimizes the impact I/O traffic management has on the host CPU, freeing up CPU resources to do more important work.

Our team of I/O experts stands ready to help you differentiate your solutions based on industry leading I/O technology and features for HPE servers. If you need help selecting the right I/O technology for your HPE customer, contact our field sales and application engineering experts using the Contacts link on our Marvell HPE Microsite.

May 1st, 2019

Revolutionizing Data Center Architectures for the New Era in Connected Intelligence

By George Hervey, Principal Architect, Marvell

Though established mega-scale cloud data center architectures have adequately supported global data demands for many years, a fundamental change is taking place. Emerging 5G, industrial automation, smart cities and autonomous cars are driving the need for data to be directly accessible at the network edge. New data center architectures are needed to support these requirements, including reduced power consumption, low latency and smaller footprints, as well as composable infrastructure.

Composability disaggregates data storage resources to provide a more flexible and efficient platform for meeting data center requirements. It does, of course, need cutting-edge switch solutions to support it. Capable of running at 12.8Tbps, the Marvell® Prestera® CX 8500 Ethernet switch portfolio has two key innovations that are set to redefine data center architectures: Forwarding Architecture using Slices of Terabit Ethernet Routers (FASTER) technology and Storage Aware Flow Engine (SAFE) technology.

With FASTER and SAFE technologies, the Marvell Prestera CX 8500 family can reduce overall network costs by more than 50%; lower power, space and latency; and determine exactly where congestion issues are occurring by providing complete per flow visibility.

View the video below to learn more about how Marvell Prestera CX 8500 devices represent a revolutionary approach to data center architectures.

 

 

August 3rd, 2018

IOPS and Latency

By admin

Shared storage performance has a significant impact on overall system performance. That’s why system administrators try to understand it and plan accordingly. Shared storage subsystems have three components: the storage system software (host), the storage network (switches and HBAs) and the storage array.

Storage performance can be measured at all three levels and aggregated to arrive at the subsystem performance. This can get quite complicated. Fortunately, storage performance can effectively be represented using two simple metrics: Input/Output operations per Second (IOPS) and latency. Knowing these two values for a target workload, a user can optimize the performance of a storage system.

Let’s understand what these key factors are and how to use them to optimize storage performance.

What is IOPS?
IOPS is a standard unit of measurement for the maximum number of reads and writes to a storage device in a given unit of time (typically one second). IOPS represents the number of transactions that can be performed, not the number of bytes transferred. To calculate throughput, multiply the IOPS number by the block size used in the I/O.
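As a quick worked example of that throughput calculation (the IOPS and block-size figures below are illustrative, not measurements from this article):

```python
# throughput_from_iops.py -- sketch of the throughput = IOPS x block size calculation
def throughput_mbps(iops: int, block_size_bytes: int) -> float:
    """Return throughput in MB/s for a given IOPS rate and I/O block size."""
    return iops * block_size_bytes / 1_000_000

# Illustrative figures: 200,000 IOPS at a 4 KB block size ...
print(throughput_mbps(200_000, 4_096))    # ~819 MB/s
# ... versus the same device driven with 64 KB blocks at 25,000 IOPS.
print(throughput_mbps(25_000, 65_536))    # ~1638 MB/s
```

The same IOPS number can therefore correspond to very different throughput depending on block size, which is why benchmarks must state both.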

IOPS is a neutral measure of performance and can be used in benchmarks where two systems are compared using the same block sizes and read/write mix.

What is Latency?
Latency is the total time from when an operation is requested until the requestor receives a response. Latency includes the time spent in all subsystems, and is a good indicator of congestion in the system.
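As a simple host-level illustration of latency, the sketch below times a single 4 KB read and reports the elapsed time. The device path is a hypothetical placeholder (any readable file works), and a real benchmark would issue many I/Os and examine the latency distribution rather than a single sample.

```python
# read_latency.py -- minimal sketch: time one 4 KB read and report its latency
import os
import time

DEVICE_PATH = "/dev/nvme0n1"   # hypothetical block device; any readable file works
BLOCK_SIZE = 4096

fd = os.open(DEVICE_PATH, os.O_RDONLY)
try:
    start = time.perf_counter()
    os.pread(fd, BLOCK_SIZE, 0)          # read one block from offset 0
    latency_us = (time.perf_counter() - start) * 1_000_000
    # Note: a page-cache hit will be far faster than a true device-level measurement.
    print(f"read latency: {latency_us:.1f} microseconds")
finally:
    os.close(fd)
```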


Find more about Marvell’s QLogic Fibre Channel adapter technology at:

https://www.marvell.com/fibre-channel-adapters-and-controllers/qlogic-fibre-channel-adapters/

April 2nd, 2018

Understanding Today’s Network Telemetry Requirements

By Tal Mizrahi, Feature Definition Architect, Marvell

There have, in recent years, been fundamental changes to the way in which networks are implemented, as data demands have necessitated a wider breadth of functionality and elevated degrees of operational performance. Accompanying all this is a greater need for accurate measurement of such performance benchmarks in real time, plus in-depth analysis in order to identify and subsequently resolve any underlying issues before they escalate.

The rapidly accelerating speeds and rising levels of complexity exhibited by today’s data networks mean that monitoring activities of this kind are becoming increasingly difficult to execute. Consequently, more sophisticated and inherently flexible telemetry mechanisms are now being mandated, particularly for data center and enterprise networks.

A broad spectrum of options is available when looking to extract telemetry material, whether that be passive monitoring, active measurement, or a hybrid approach. An increasingly common practice is the piggy-backing of telemetry information onto the data packets that are passing through the network. This tactic is being utilized within both in-situ OAM (IOAM) and in-band network telemetry (INT), as well as in an alternate marking performance measurement (AM-PM) context.
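To make the piggy-backing idea concrete, here is a highly simplified sketch of per-hop metadata being appended to a packet in the spirit of INT/IOAM. The field selection (switch ID, ingress/egress timestamps, queue occupancy) is typical of such schemes, but this is an illustrative model only, not the actual INT or IOAM wire format.

```python
# int_sketch.py -- simplified illustration of piggy-backing per-hop telemetry onto a packet
import struct
import time

# One hop's metadata: switch ID, ingress timestamp (ns), egress timestamp (ns), queue depth.
HOP_FMT = "!IQQI"   # network byte order: uint32, uint64, uint64, uint32

def add_hop_metadata(packet: bytes, switch_id: int, ingress_ns: int,
                     egress_ns: int, queue_depth: int) -> bytes:
    """Append this hop's telemetry record to the packet (illustrative, not a real header stack)."""
    return packet + struct.pack(HOP_FMT, switch_id, ingress_ns, egress_ns, queue_depth)

def extract_hops(packet: bytes, original_len: int):
    """At the collection point, peel off the per-hop records appended along the path."""
    record_size = struct.calcsize(HOP_FMT)
    telemetry = packet[original_len:]
    return [struct.unpack(HOP_FMT, telemetry[i:i + record_size])
            for i in range(0, len(telemetry), record_size)]

# Example: a payload traverses two hypothetical switches that each stamp their metadata.
payload = b"application data"
now = time.time_ns()
pkt = add_hop_metadata(payload, switch_id=1, ingress_ns=now, egress_ns=now + 1200, queue_depth=7)
pkt = add_hop_metadata(pkt, switch_id=2, ingress_ns=now + 5000, egress_ns=now + 6100, queue_depth=3)
print(extract_hops(pkt, len(payload)))
```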

At Marvell, our approach is to provide a diverse and versatile toolset through which a wide variety of telemetry approaches can be implemented, rather than being confined to a specific measurement protocol. To learn more about this subject, including longstanding passive and active measurement protocols, and the latest hybrid-based telemetry methodologies, please view the video below and download our white paper.

WHITE PAPER, Network Telemetry Solutions for Data Center and Enterprise Networks

January 11th, 2018

Storing the World’s Data

By the Marvell PR Team

Storage is the foundation for a data-centric world, but how tomorrow’s data will be stored is the subject of much debate. What is clear is that data growth will continue to rise significantly. According to a report compiled by IDC titled ‘Data Age 2025’, the amount of data created will grow at an almost exponential rate. This amount is predicted to surpass 163 Zettabytes by the middle of the next decade (almost 8 times what it is today, and nearly 100 times what it was back in 2010). Increasing use of cloud-based services, the widespread roll-out of Internet of Things (IoT) nodes, virtual/augmented reality applications, autonomous vehicles, machine learning and the whole ‘Big Data’ phenomenon will all play a part in the new data-driven era that lies ahead.

Further down the line, the building of smart cities will lead to an additional ramp-up in data levels, with highly sophisticated infrastructure being deployed in order to alleviate traffic congestion, make utilities more efficient, and improve the environment, to name just a few examples. A very large proportion of the data of the future will need to be accessed in real time. This will have implications for the technology utilized and also for where the stored data is situated within the network. Additionally, there are serious security considerations that need to be factored in, too.

So that data centers and commercial enterprises can keep overhead under control and make operations as efficient as possible, they will look to follow a tiered storage approach, using the most appropriate storage media so as to lower the related costs. Decisions on the media utilized will be based on how frequently the stored data needs to be accessed and the acceptable degree of latency. This will require the use of numerous different technologies to make it fully economically viable – with cost and performance being important factors.

There are now a wide variety of different storage media options out there. In some cases these are long established, while in others they are still in the process of emerging. Hard disk drives (HDDs) are being replaced by solid state drives (SSDs) in certain applications, and with the migration from SATA to NVMe in the SSD space, NVMe is enabling the full performance capabilities of SSD technology. HDD capacities are continuing to increase substantially and their overall cost effectiveness also adds to their appeal. The immense data storage requirements generated by the cloud mean that HDDs are witnessing considerable traction in this space.

There are other forms of memory on the horizon that will help to address the challenges that increasing storage demands will set. These range from higher capacity 3D stacked flash to completely new technologies, such as phase-change with its rapid write times and extensive operational lifespan. The advent of NVMe over fabrics (NVMf) based interfaces offers the prospect of high bandwidth, ultra-low latency SSD data storage that is at the same time extremely scalable.

Marvell was quick to recognize the ever-growing importance of data storage, has continued to make this sector a major focus moving forward, and has established itself as the industry’s leading supplier of both HDD controllers and merchant SSD controllers.

Within a period of only 18 months after its release, Marvell shipped over 50 million of its 88SS1074 SATA SSD controllers with NANDEdge™ error-correction technology. Thanks to its award-winning 88NV11xx series of small form factor DRAM-less SSD controllers (based on a 28nm CMOS semiconductor process), the company is able to offer the market high-performance NVMe memory controller solutions that are optimized for incorporation into compact, streamlined handheld computing equipment, such as tablet PCs and ultrabooks. These controllers are capable of supporting read speeds of 1600MB/s, while drawing only minimal power from the available battery reserves. Marvell also offers solutions like its 88SS1092 NVMe SSD controller, designed for new compute models that enable the data center to share storage data to further maximize cost and performance efficiencies.

The unprecedented growth in data means that more storage will be required. Emerging applications and innovative technologies will drive new ways of increasing storage capacity, improving latency and ensuring security. Marvell is in a position to offer the industry a wide range of technologies to support data storage requirements, addressing both SSD and HDD implementations and covering all accompanying interface types, from SAS and SATA through to PCIe and NVMe.

Check out www.marvell.com to learn more about how Marvell is storing the world’s data.