We’re Building the Future of Data Infrastructure

Archive for the 'Ethernet Adapters and Controllers' Category

  • June 13, 2023

    FC-NVMe Goes Mainstream for Next-Generation Block Storage from HPE

    By Todd Owens, Field Marketing Director, Marvell

    While Fibre Channel (FC) has been around for a couple of decades now, the Fibre Channel industry continues to develop the technology in ways that keep it in the forefront of the data center for shared storage connectivity. Always a reliable technology, continued innovations in performance, security and manageability have made Fibre Channel I/O the go-to connectivity option for business-critical applications that leverage the most advanced shared storage arrays.

    A recent development that highlights the progress and significance of Fibre Channel is Hewlett Packard Enterprise’s (HPE) announcement of the latest offering in its storage-as-a-service lineup with 32Gb Fibre Channel connectivity. HPE GreenLake for Block Storage MP, powered by HPE Alletra Storage MP hardware, features a next-generation platform connected to the storage area network (SAN) using either traditional SCSI-based FC or NVMe over FC connectivity. This innovative solution not only provides customers with highly scalable capabilities but also delivers cloud-like management, allowing HPE customers to consume block storage any way they desire – own and manage, outsource management, or consume on demand.

    HPE GreenLake for Block Storage powered by Alletra Storage MP

    At launch, HPE is providing FC connectivity from this storage system to the host servers, supporting both FC-SCSI and native FC-NVMe. HPE plans to provide additional connectivity options in the future, but the fact that it prioritized FC connectivity speaks volumes about the customer demand for mature, reliable, and low-latency FC technology.

  • June 02, 2021

    Breaking Digital Logjams with NVMe

    By Ian Sagan, Marvell Field Applications Engineer and Jacqueline Nguyen, Marvell Field Marketing Manager and Nick De Maria, Marvell Field Applications Engineer

    Have you ever been stuck in bumper-to-bumper traffic? Frustrated by long checkout lines at the grocery store? Trapped at the back of a crowded plane while late for a connecting flight?

    Such bottlenecks waste time, energy and money. And while today’s digital logjams might seem invisible or abstract by comparison, they are just as costly, multiplied by zettabytes of data struggling through billions of devices – a staggering volume of data that is only continuing to grow.

    Fortunately, emerging Non-Volatile Memory Express (NVMe) technology can clear many of these digital logjams almost instantaneously, empowering system administrators to deliver quantum leaps in efficiency through lower latency and better performance. To the end user, this means avoiding the dreaded spinning icon and getting an immediate response.

  • January 29, 2021

    Full Steam Ahead! Marvell Ethernet Device Bridge Receives Avnu Certification

    By Amir Bar-Niv, VP of Marketing, Automotive Business Unit, Marvell and John Bergen, Sr. Product Marketing Manager, Automotive Business Unit, Marvell

    In the early decades of American railroad construction, competing companies laid their tracks at different widths. Such inconsistent standards drove inefficiencies, preventing the easy exchange of rolling stock from one railroad to the next, and impeding the infrastructure from coalescing into a unified national network. Only in the 1860s, when a national standard emerged – 4 feet, 8-1/2 inches – did railroads begin delivering their true, networked potential.

    Some one hundred-and-sixty years later, as Marvell and its competitors race to reinvent the world’s transportation networks, universal design standards are more important than ever. Recently, Marvell’s 88Q5050 Ethernet Device Bridge became the first of its type in the automotive industry to receive Avnu certification, meeting exacting new technical standards that facilitate the exchange of information between diverse in-car networks, which enable today’s data-dependent vehicles to operate smoothly, safely and reliably.

  • August 27, 2020

    How to Reap the Benefits of NVMe over Fabric in 2020

    By Todd Owens, Field Marketing Director, Marvell

    As native Non-Volatile Memory Express (NVMe®) shared-storage arrays continue enhancing our ability to store and access more information faster across a much bigger network, customers of all sizes – enterprise, mid-market and SMBs – confront a common question: what is required to take advantage of this quantum leap forward in speed and capacity?

    Of course, NVMe technology itself is not new, and is commonly found in laptops, servers and enterprise storage arrays. NVMe provides an efficient command set that is specific to memory-based storage, delivers increased performance over PCIe 3.0 or PCIe 4.0 bus architectures, and – offering 64,000 command queues with 64,000 commands per queue – scales far beyond other storage protocols.
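
    To put those queue numbers in perspective, here is a minimal arithmetic sketch comparing NVMe's command capacity against the single 32-command queue of legacy AHCI/SATA (the AHCI figures are a commonly cited reference point, not something stated above):

      # Sketch: compare theoretical outstanding-command capacity of NVMe vs. AHCI/SATA.
      # The AHCI figures (1 queue x 32 commands) are an assumed comparison point.
      NVME_QUEUES = 64_000          # command queues per NVMe controller (per the post)
      NVME_CMDS_PER_QUEUE = 64_000  # commands per queue (per the post)

      AHCI_QUEUES = 1               # single queue in legacy AHCI/SATA (assumption)
      AHCI_CMDS_PER_QUEUE = 32

      nvme_capacity = NVME_QUEUES * NVME_CMDS_PER_QUEUE
      ahci_capacity = AHCI_QUEUES * AHCI_CMDS_PER_QUEUE

      print(f"NVMe outstanding commands: {nvme_capacity:,}")
      print(f"AHCI outstanding commands: {ahci_capacity:,}")
      print(f"Ratio: {nvme_capacity // ahci_capacity:,}x")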

  • August 19, 2020

    Navigating Product Name Changes for Marvell Ethernet Adapters at HPE

    By Todd Owens, Field Marketing Director, Marvell

    Hewlett Packard Enterprise (HPE) recently updated its product naming protocol for the Ethernet adapters in its HPE ProLiant and HPE Apollo servers. Its new approach is to include the ASIC vendor’s name in the HPE adapter’s product name. This commonsense approach eliminates the need for model-number decoder rings on the part of Channel Partners and the HPE field team and gives everyone more visibility and clarity. This change also aligns more closely with the approach HPE has been taking with its “Open” adapters on HPE ProLiant Gen10 Plus servers. All of this is good news for everyone in the server sales ecosystem, including the end user. The products’ core SKU numbers remain the same, too, which is also good.

  • July 28, 2020

    Living on the Network Edge: Security

    By Alik Fishman, Director of Product Management, Marvell

    In our series Living on the Network Edge, we have looked at the trends driving Intelligence, Performance and Telemetry to the network edge. In this installment, let’s look at the changing role of network security and the ways integrating security capabilities in network access can assist in effectively streamlining policy enforcement, protection, and remediation across the infrastructure.

    Cybersecurity threats are now a daily struggle for businesses, which are experiencing a huge increase in hacked and breached data from sources increasingly common in the workplace, like mobile and IoT devices. Not only is the number of security breaches going up, they are also increasing in severity and duration, with the average lifecycle from breach to containment lasting nearly a year and presenting expensive operational challenges. With the digital transformation and emerging technology landscape (remote access, cloud-native models, proliferation of IoT devices, etc.) dramatically impacting networking architectures and operations, new security risks are being introduced. To address this, enterprise infrastructure is on the verge of a remarkable change, elevating network intelligence, performance, visibility and security.

  • May 13, 2019

    FastLinQ® NICs + RedHat SDN

    By Nishant Lodha, Director of Product Marketing – Emerging Technologies, Marvell

    A bit of validation once in a while is good for all of us - that’s pretty true whether you are the one providing it or, conversely, the one receiving it.  Most of the time it seems to be me that is giving out validation rather than getting it.  Like the other day when my wife tried on a new dress and asked me, “How do I look?”  Now, of course, we all know there is only one way to answer a question like that - if you want to avoid sleeping on the couch, at least.

    Recently, the Marvell team received some well-deserved validation for its efforts.  The FastLinQ 45000/41000 series of high-performance Ethernet Network Interface Controllers (NICs) that we supply to the industry, which supports 10/25/50/100GbE operation, is now fully qualified by Red Hat for Fast Data Path (FDP) 19.B.

    Figure 1: The FastLinQ 45000 and 41000 Ethernet Adapter Series from Marvell

    Red Hat FDP is employed in an extensive array of the products found within the Red Hat portfolio - such as the Red Hat OpenStack Platform (RHOSP), as well as the Red Hat OpenShift Container Platform and Red Hat Virtualization (RHV).  FDP qualification means that FastLinQ can now address a far broader scope of open-source Software Defined Networking (SDN) use cases - including Open vSwitch (OVS), Open vSwitch with the Data Plane Development Kit (OVS-DPDK), Single Root Input/Output Virtualization (SR-IOV) and Network Functions Virtualization (NFV). The engineers at Marvell worked closely with our counterparts at Red Hat on this project in order to ensure that the FastLinQ feature set would operate in conjunction with the FDP production channel. This involved many hours of complex, in-depth testing.  By being FDP 19.B qualified, Marvell FastLinQ Ethernet Adapters can enable seamless SDN deployments with RHOSP 14, RHEL 8.0, RHV 4.3 and OpenShift 3.11.

    Widely recognized as the data networking ‘Swiss Army Knife,’ our FastLinQ 45000/41000 Ethernet adapters benefit from a highly flexible, programmable architecture. This architecture is capable of delivering performance levels of up to 68 million small packets per second and 240 SR-IOV virtual functions, and it supports tunneling while maintaining stateless offloads. As a result, customers have the hardware they need to seamlessly implement and manage even the most challenging network workloads in what is becoming an increasingly virtualized landscape. Supporting Universal RDMA (concurrent RoCE, RoCEv2 and iWARP operation), unlike most competing NICs, they offer a highly scalable and flexible solution.  Learn more here.

    SDN powered by FastLinQ NIC packet processing engine 

    Validation feels good. Thank you to the Red Hat and Marvell teams!

  • February 20, 2019

    NVMe/TCP - Simplicity is the Key to Innovation

    By Nishant Lodha, Director of Product Marketing – Emerging Technologies, Marvell

    Whether it is the aesthetics of the iPhone or a work of art like Monet’s ‘Water Lilies’, simplicity is often a very attractive trait. I hear this resonate in everyday examples from my own life - with my boss at work, whose mantra is “make it simple”, and my wife of 15 years telling my teenage daughter “beauty lies in simplicity”. For the record, both of these statements generally fall upon deaf ears.

    The Non-Volatile Memory Express (NVMe) technology that is now driving the progression of data storage is another place where the value of simplicity is starting to be recognized, in particular with the advent of the NVMe-over-Fabrics (NVMe-oF) topology that is just about to start seeing deployment. The simplest and most trusted of Ethernet fabrics, namely Transmission Control Protocol (TCP), has now been confirmed as an approved NVMe-oF standard by the NVMe Group[1].

    Figure 1: All the NVMe fabrics currently available

    Just to give a bit of background information here, NVMe basically enables the efficient utilization of flash-based Solid State Drives (SSDs) by accessing them over a high-speed interface, like PCIe, and using a streamlined command set that is specifically designed for flash implementations. By definition, NVMe is limited to the confines of a single server, which presents a challenge when looking to scale out NVMe and access it from any element within the data center. This is where NVMe-oF comes in. All Flash Arrays (AFAs), Just a Bunch of Flash (JBOF) or Fabric-Attached Bunch of Flash (FBOF) and Software Defined Storage (SDS) architectures will each be able to incorporate a front end that has NVMe-oF connectivity as its foundation. As a result, the effectiveness with which servers, clients and applications are able to access external storage resources will be significantly enhanced.

    A series of ‘fabrics’ have now emerged for scaling out NVMe. The first of these was Ethernet Remote Direct Memory Access (RDMA) - in both its RDMA over Converged Ethernet (RoCE) and Internet Wide-Area RDMA Protocol (iWARP) derivatives. It was followed soon after by NVMe-over-Fibre Channel (FC-NVMe), and then by fabrics based on FCoE, InfiniBand and OmniPath.

    But with so many fabric options already out there, why is it necessary to come up with another one? Do we really need NVMe-over-TCP (NVMe/TCP) too? Well, RDMA-based NVMe fabrics (whether RoCE or iWARP) are supposed to deliver the extremely low latency that NVMe requires via a myriad of different technologies - like zero copy and kernel bypass - driven by specialized Network Interface Controllers (NICs). However, there are several factors which hamper this, and these need to be taken into account.

    • Firstly, most of the earlier fabrics (like RoCE/iWARP) have no existing install base for storage networking to speak of (Fibre Channel is the only notable exception to this). For example, of the 12 million 10GbE+ NIC ports currently in operation within enterprise data centers, less than 5% have any RDMA capability (according to my quick back-of-the-envelope calculations).
    • The most popular RDMA protocol (RoCE) mandates a lossless network on which to run (and this in turn requires highly skilled network engineers that command higher salaries). Even then, this protocol is prone to congestion problems, adding to further frustration.
    • Finally, and perhaps most telling, the two RDMA protocols (RoCE and iWARP) are mutually incompatible.

    Unlike any other NVMe fabric, the pervasiveness of TCP is huge - it is absolutely everywhere. TCP/IP is the fundamental foundation of the Internet, and every single Ethernet NIC/network out there supports the TCP protocol. With TCP, availability and reliability are just not issues that need to be worried about. Extending the scale of NVMe over a TCP fabric seems like the logical thing to do.
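
    Because NVMe/TCP needs nothing more exotic than a working TCP/IP stack, even a plain socket can confirm that a target's NVMe/TCP port is reachable from a standard NIC. The sketch below assumes a hypothetical target address and the commonly used NVMe-oF port of 4420; it illustrates TCP's ubiquity rather than implementing an NVMe host:

      # Sketch: a plain TCP socket is enough to check that an NVMe/TCP target's
      # port is reachable from any standard NIC. Address and port are assumptions.
      import socket

      TARGET_ADDR = "192.168.10.50"   # hypothetical NVMe/TCP target
      TARGET_PORT = 4420              # commonly used NVMe-oF service port (assumption)

      def port_reachable(addr: str, port: int, timeout: float = 2.0) -> bool:
          """Return True if a TCP connection to the target can be established."""
          try:
              with socket.create_connection((addr, port), timeout=timeout):
                  return True
          except OSError:
              return False

      state = "reachable" if port_reachable(TARGET_ADDR, TARGET_PORT) else "unreachable"
      print(f"NVMe/TCP target {TARGET_ADDR}:{TARGET_PORT} is {state}")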

    NVMe/TCP is fast (especially when using Marvell FastLinQ 10/25/50/100GbE NICs, as they have a built-in full offload for NVMe/TCP), it leverages existing infrastructure and it keeps things inherently simple. That is a beautiful prospect for any technologist, and it is also attractive to company CIOs worried about budgets.

    So, once again, simplicity wins in the long run! 

    [1] https://nvmexpress.org/welcome-nvme-tcp-to-the-nvme-of-family-of-transports/

  • October 18, 2018

    Looking to Converge? HPE Launches Next Gen Marvell FastLinQ CNAs

    By Todd Owens, Field Marketing Director, Marvell

    Converging network and storage I/O onto a single wire can drive significant cost reductions in the small to mid-size data center by reducing the number of connections required. Fewer adapter ports mean fewer cables, optics and switch ports consumed, all of which reduce OPEX in the data center. Customers can take advantage of converged I/O by deploying Converged Network Adapters (CNAs) that provide not only networking connectivity but also storage offloads for iSCSI and FCoE.

    HPE recently introduced two new CNAs based on Marvell® FastLinQ® 41000 Series technology. The HPE StoreFabric CN1200R 10GBASE-T Converged Network Adapter and HPE StoreFabric CN1300R 10/25Gb Converged Network Adapter are the latest additions to HPE’s CNA portfolio. These are the only HPE StoreFabric CNAs to also support Remote Direct Memory Access (RDMA) technology (concurrently with storage offloads).

    As we all know, the amount of data being generated continues to increase and that data needs to be stored somewhere. Recently, we are seeing an increase in the number of iSCSI connected storage devices in mid-market, branch and campus environments. iSCSI is great for these environments because it is easy to deploy, it can run on standard Ethernet, and there are a variety of new iSCSI storage offerings available, like Nimble and MSA all flash storage arrays (AFAs).

    One challenge with iSCSI is the load it puts on the server CPU for storage traffic processing when using software initiators – a common approach to storage connectivity. To combat this, storage administrators can turn to CNAs with full iSCSI protocol offload. Offloading transfers the burden of processing the storage I/O from the CPU to the adapter.

    Figure 1: Benefits of Adapter Offloads

    As Figure 1 shows, Marvell-driven testing demonstrates that hardware offload in FastLinQ 10/25GbE adapters can reduce CPU utilization by as much as 50% compared to an Ethernet NIC with software initiators. This means less burden on the CPU, allowing you to add more virtual machines per server and potentially reducing the number of physical servers required. A small item like an intelligent I/O adapter from Marvell can provide significant TCO savings.
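
    As a rough illustration of what that offload means for headroom, the sketch below applies the 50% figure to a hypothetical server profile (the baseline utilization and the share of CPU spent on software-initiator iSCSI are assumptions, not measured values):

      # Sketch: rough headroom math for full iSCSI offload, using the post's
      # "up to 50% lower CPU utilization" figure. Baseline numbers are hypothetical.
      baseline_cpu_util = 0.70       # hypothetical overall CPU utilization
      storage_io_share = 0.20        # hypothetical fraction of CPU spent on software iSCSI
      offload_reduction = 0.50       # per the post: offload can cut that I/O load by ~50%

      freed_cpu = storage_io_share * offload_reduction
      new_cpu_util = baseline_cpu_util - freed_cpu

      print(f"CPU utilization before offload: {baseline_cpu_util:.0%}")
      print(f"CPU utilization after offload:  {new_cpu_util:.0%}")
      print(f"Headroom gained for additional VMs: {freed_cpu:.0%}")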

    Another challenge has been the latency associated with Ethernet connectivity. This can now be addressed with RDMA technology. iWARP, RDMA over Converged Ethernet (RoCE) and iSCSI Extensions for RDMA (iSER) all allow I/O transactions to be performed directly between the adapter and host memory, bypassing the O/S kernel. This speeds transactions and reduces overall I/O latency. The result is better performance and faster applications.

    The new HPE StoreFabric CNAs are ideal devices for converging network and iSCSI storage traffic for HPE ProLiant and Apollo customers. The HPE StoreFabric CN1300R 10/25GbE CNA provides plenty of bandwidth that can be allocated to both network and storage traffic. In addition, with support for Universal RDMA (both iWARP and RoCE) as well as iSER, this adapter provides significantly lower latency than standard network adapters for both network and storage traffic.

    The HPE StoreFabric CN1300R also supports a technology Marvell calls SmartAN™, which allows the adapter to automatically configure itself when transitioning between 10GbE and 25GbE networks. This is key because at 25GbE speeds, Forward Error Correction (FEC) can be required, depending on the cabling used. To make things more complex, there are two different types of FEC that can be implemented. To eliminate all this complexity, SmartAN automatically configures the adapter to match the FEC, cabling and switch settings for either 10GbE or 25GbE connections, with no user intervention required.

    When budget is the key concern, the HPE StoreFabric CN1200R is the perfect choice. Supporting 10GBASE-T connectivity, this adapter connects to existing CAT6A copper cabling using RJ-45 connections. This eliminates the need for more expensive DAC cables or optical transceivers. The StoreFabric CN1200R also supports RDMA protocols (iWARP, RoCE and iSER) for lower overall latency.

    Why converge though? It’s all about a tradeoff between cost and performance. If we do the math to compare the cost of deploying separate LAN and storage networks versus a converged network, we can see that converging I/O greatly reduces the complexity of the infrastructure and can reduce acquisition costs by half. There are additional long-term cost savings as well, associated with managing one network versus two.

    Figure 2: Eight Server Network Infrastructure Comparison

    In this pricing scenario, we are looking at eight servers connecting to separate LAN and SAN environments versus connecting to a single converged environment, as shown in Figure 2.

    Table 1: LAN/SAN versus Converged Infrastructure Price Comparison

    The converged environment price is 55% lower than the separate network approach. The downside is the potential storage performance impact of moving from a Fibre Channel SAN in the separate network environment to a converged iSCSI environment. The iSCSI performance can be increased by implementing a lossless Ethernet environment using Data Center Bridging and Priority Flow Control along with RoCE RDMA. This does add significant networking complexity but will improve the iSCSI performance by reducing the number of interrupts for storage traffic.
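
    The sketch below shows the shape of that math for the eight-server scenario. The port counts and unit prices are hypothetical placeholders, not HPE list prices, so the computed savings will differ from the 55% figure above:

      # Sketch: per-server math behind a LAN+SAN vs. converged comparison.
      # Port counts and unit prices are hypothetical placeholders, not HPE pricing.
      SERVERS = 8

      def infrastructure_cost(ports_per_server: int, port_cost: float,
                              cable_cost: float, switch_port_cost: float) -> float:
          """Total cost for adapter ports, cables and switch ports across all servers."""
          ports = SERVERS * ports_per_server
          return ports * (port_cost + cable_cost + switch_port_cost)

      # Separate networks: 2 Ethernet NIC ports + 2 FC HBA ports per server (assumed)
      separate = (infrastructure_cost(2, 150, 50, 300)      # LAN side (hypothetical $)
                  + infrastructure_cost(2, 400, 80, 600))   # FC SAN side (hypothetical $)

      # Converged: 2 CNA ports per server carrying both LAN and iSCSI traffic (assumed)
      converged = infrastructure_cost(2, 250, 60, 350)

      savings = 1 - converged / separate
      print(f"Separate LAN + SAN: ${separate:,.0f}")
      print(f"Converged:          ${converged:,.0f}")
      print(f"Savings:            {savings:.0%}")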

    One additional scenario for these new adapters is in Hyper-Converged Infrastructure (HCI) implementations. With HCI, software-defined storage is used, meaning storage within the servers is shared across the network. Common implementations include Microsoft Windows Storage Spaces Direct (S2D) and VMware vSAN Ready Node deployments. Both the HPE StoreFabric CN1200R BASE-T and CN1300R 10/25GbE CNAs are certified for use in either of these HCI implementations.

    Figure 3: FastLinQ Technology Certified for Microsoft WSSD and VMware vSAN Ready Node

    In summary, the new CNAs from the HPE StoreFabric group provide high-performance, low-cost connectivity for converged environments. With support for 10Gb and 25Gb Ethernet bandwidths, iWARP and RoCE RDMA, and the ability to automatically negotiate changes between 10GbE and 25GbE connections with SmartAN™ technology, these are the ideal I/O connectivity options for small to mid-size server and storage networks. To get the most out of your server investments, choose Marvell FastLinQ Ethernet I/O technology, which is engineered from the start with performance, total cost of ownership, flexibility and scalability in mind.

    For more information on converged networking, contact one of our HPE experts in the field to talk through your requirements. Just use the HPE Contact Information link on our HPE Microsite at www.marvell.com/hpe.

  • August 03, 2018

    Infrastructure Powerhouse: Marvell and Cavium become one!

    By Todd Owens, Field Marketing Director, Marvell

    Marvell and Cavium

    Marvell’s acquisition of Cavium closed on July 6th, 2018, and the integration is well under way. Cavium is now a wholly owned subsidiary of Marvell. Our combined mission as Marvell is to develop and deliver semiconductor solutions that process, move, store and secure the world’s data faster and more reliably than anyone else. The combination of the two companies makes for an infrastructure powerhouse, serving a variety of customers in the Cloud/Data Center, Enterprise/Campus, Service Provider, SMB/SOHO, Industrial and Automotive industries.

    infrastructure powerhouse

    For our business with HPE, the first thing you need to know is that it is business as usual. The folks you engaged with on the I/O and processor technology we provided to HPE before the acquisition are the same people you engage with now.  Marvell is a leading provider of storage technologies, including ultra-fast read channels, high-performance processors and transceivers that are found in the vast majority of hard disk drive (HDD) and solid-state drive (SSD) modules used in HPE ProLiant and HPE Storage products today.

    Our industry leading QLogic® 8/16/32Gb Fibre Channel and FastLinQ® 10/20/25/50Gb Ethernet I/O technology will continue to provide connectivity for HPE Server and Storage solutions. The focus for these products will continue to be the intelligent I/O of choice for HPE, with the performance, flexibility, and reliability we are known for.

       

    Marvell’s Portfolio of FastLinQ Ethernet and QLogic Fibre Channel I/O Adapters 

    We will continue to provide ThunderX2® Arm® processor technology for high-performance computing servers like the HPE Apollo 70. We will also continue to provide the Ethernet networking technology that is embedded into HPE Servers and Storage today, and the Marvell ASIC technology used for the iLO 5 baseboard management controller (BMC) in all HPE ProLiant and HPE Synergy Gen10 servers.

    iLO 5 for HPE ProLiant Gen10 is deployed on Marvell SoCs 

    That sounds great, but what’s going to change over time?

    The combined company now has a much broader portfolio of technology to help HPE deliver best-in-class solutions at the edge, in the network and in the data center. 

    Marvell has industry-leading switching technology from 1GbE to 100GbE and beyond. This enables us to deliver connectivity from the IoT edge, to the data center and the cloud. Our Intelligent NIC technology provides compression, encryption and more to enable customers to analyze network traffic faster and more intelligently than ever before. Our security solutions and enhanced SoC and Processor capabilities will help our HPE design-in team collaborate with HPE to innovate next-generation server and storage solutions.

    Down the road, you’ll see a shift in our branding and where you access information over time as well. While our product-specific brands, like ThunderX2 for Arm, QLogic for Fibre Channel and FastLinQ for Ethernet, will remain, many things will transition from Cavium to Marvell. Our web-based resources will start to change, as will our email addresses. For example, you can now access our HPE Microsite at www.marvell.com/hpe. Soon, you’ll be able to contact us at hpesolutions@marvell.com as well. The collateral you leverage today will be updated over time. In fact, this has already started with updates to our HPE-specific Line Card, our HPE Ethernet Quick Reference Guide, our Fibre Channel Quick Reference Guides and our presentation materials. Updates will continue over the next few months.

    In summary, we are bigger and better. We are one team that is more focused than ever to help HPE, their partners and customers thrive with world-class technology we can bring to bear. If you want to learn more, engage with us today. Our field contact details are here. We are all excited for this new beginning to make “I/O and Infrastructure Matter!” each and every day.

  • May 02, 2018

    Cavium FastLinQ Ethernet Adapters Available for HPE Cloudline Servers

    By Todd Owens, Field Marketing Director, Marvell

    Are you considering deploying HPE Cloudline servers in your hyper-scale environment? If you are, be aware that HPE now offers select Cavium™ FastLinQ® 10GbE and 10/25GbE Adapters as options for HPE Cloudline CL2100, CL2200 and CL3150 Gen10 Servers. The adapters supported on the HPE Cloudline servers are shown in Table 1 below.

    Table 1: Cavium FastLinQ 10GbE and 10/25GbE Adapters for HPE Cloudline Servers 

    As today’s hyper-scale environments grow, Ethernet I/O needs go well beyond basic L2 NIC connectivity. Faster processors, increases in scale, high-performance NVMe and SSD storage, and the need for better performance and lower latency have started to shift some of the performance bottlenecks from servers and storage to the network itself. That means architects of these environments need to rethink connectivity options.

    While HPE already has some good I/O offerings for Cloudline from other vendors, having Cavium FastLinQ adapters in the portfolio increases the I/O features and capabilities available. Advanced features from Cavium like Universal RDMA, SmartAN™, DPDK, NPAR and SR-IOV allow architects to design more flexible and scalable hyper-scale environments.

    Cavium’s advanced feature set provides offload technologies that shift the burden of managing the I/O from the O/S and CPU to the adapter itself. Some of the benefits of offloading I/O tasks include:

    • Lower CPU utilization, freeing up resources for applications or greater VM scalability
    • Accelerated processing of small-packet I/O with DPDK
    • Time savings from automating adapter connectivity between 10GbE and 25GbE
    • Reduced latency through direct memory access for I/O transactions, increasing performance
    • Network isolation and QoS at the VM level, improving VM application performance
    • Reduced TCO through heterogeneous management

    Cavium FastLinQ Adapter and HPE Cloudline Gen10 Server

    To deliver these benefits, customers can take advantage of some or all the advanced features in the Cavium FastLinQ Ethernet adapters for HPE Cloudline. Here’s a list of some of the technologies available in these adapters.

    Table 2: Advanced Features in Cavium FastLinQ Adapters for HPE Cloudline
    * Source: Demartek findings

    Network Partitioning (NPAR) virtualizes the physical port into eight virtual functions on the PCIe bus. This makes a dual port adapter appear to the host O/S as if it were eight individual NICs. Furthermore, the bandwidth of each virtual function can be fine-tuned in increments of 500Mbps, providing full Quality of Service on each connection. SR-IOV is an additional virtualization offload these adapters support that moves management of VM to VM traffic from the host hypervisor to the adapter. This frees up CPU resources and reduces VM to VM latency. 
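
    As a simple illustration of how such a partitioning plan might be sanity-checked, the sketch below validates a hypothetical NPAR bandwidth plan for one 10GbE port against the 500Mbps-increment rule described above:

      # Sketch: validating an NPAR bandwidth plan for one 10GbE physical port.
      # Partition names and allocations are hypothetical; the rules modeled are the
      # 500 Mbps tuning increments and no oversubscription of the port.
      PORT_BANDWIDTH_MBPS = 10_000
      INCREMENT_MBPS = 500

      # hypothetical plan: four active partitions on this port
      plan = {"management": 500, "vm_traffic": 4_000, "iscsi": 4_000, "vmotion": 1_500}

      def validate_npar_plan(plan: dict) -> None:
          total = sum(plan.values())
          for name, mbps in plan.items():
              if mbps % INCREMENT_MBPS != 0:
                  raise ValueError(f"{name}: {mbps} Mbps is not a 500 Mbps increment")
          if total > PORT_BANDWIDTH_MBPS:
              raise ValueError(f"plan oversubscribes the port: {total} Mbps")
          print(f"Plan OK: {total} Mbps of {PORT_BANDWIDTH_MBPS} Mbps allocated")

      validate_npar_plan(plan)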

    Remote Direct Memory Access (RDMA) is an offload that routes I/O traffic directly from the adapter to the host memory. This bypasses the O/S kernel and can improve performance by reducing latency. The Cavium adapters support what is called Universal RDMA, which is the ability to support both RoCEv2 and iWARP protocols concurrently. This provides network administrators more flexibility and choice for low latency solutions built with HPE Cloudline servers. 

    SmartAN is a Cavium technology available on the 10/25GbE adapters that addresses issues related to bandwidth matching and the need for Forward Error Correction (FEC) when switching between 10GbE and 25GbE connections. For 25GbE connections, either Reed Solomon FEC (RS-FEC) or Fire Code FEC (FC-FEC) is required to correct bit errors that occur at higher bandwidths. For the details behind SmartAN technology, you can refer to the Marvell technology brief here.
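
    The sketch below models the kind of decision SmartAN automates - choosing a FEC mode from link speed and cable type. It is an illustration of the problem space only; the cable-to-FEC mapping is an assumption, not Marvell's actual negotiation algorithm:

      # Sketch: simplified model of speed/cable-driven FEC selection. The mapping
      # below is assumed for illustration and is not the SmartAN implementation.
      def select_fec(speed_gbe: int, cable: str) -> str:
          if speed_gbe <= 10:
              return "none"            # 10GbE links typically run without FEC
          if cable in ("optical", "dac_short"):
              return "FC-FEC"          # lighter Fire Code FEC (assumed adequate here)
          return "RS-FEC"              # Reed Solomon FEC for lossier media (assumed)

      for speed, cable in [(10, "dac_short"), (25, "dac_short"), (25, "dac_long"), (25, "optical")]:
          print(f"{speed}GbE over {cable:9s} -> FEC: {select_fec(speed, cable)}")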

    Support for Data Plane Development Kit (DPDK) offloads accelerates the processing of small-packet I/O transmissions. This is especially important for applications in Telco NFV and high-frequency trading environments.
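
    To see why small-packet workloads need this kind of acceleration, the sketch below computes the theoretical packets-per-second a link must sustain at line rate for 64-byte frames, using standard Ethernet per-frame overhead:

      # Sketch: theoretical line-rate packet rates for 64-byte frames.
      # Per-frame overhead (preamble/SFD + inter-frame gap) uses standard Ethernet figures.
      FRAME_BYTES = 64
      OVERHEAD_BYTES = 8 + 12   # preamble/SFD + inter-frame gap

      def line_rate_pps(link_gbps: float) -> float:
          bits_per_frame = (FRAME_BYTES + OVERHEAD_BYTES) * 8
          return link_gbps * 1e9 / bits_per_frame

      for speed in (10, 25):
          print(f"{speed}GbE: {line_rate_pps(speed) / 1e6:.2f} Mpps of 64-byte frames")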

    For simplified management, Cavium provides a suite of utilities that allow for configuration and monitoring of the adapters and work across all the popular O/S environments, including Microsoft Windows Server, VMware and Linux. Cavium’s unified management suite includes QCC GUI, CLI and vCenter plug-ins, as well as PowerShell cmdlets for scripting configuration commands across multiple servers. Cavium’s unified management utilities can be downloaded from www.cavium.com.

    Each of the Cavium adapters shown in Table 1 supports all of the capabilities noted above and is available in stand-up PCIe or OCP 2.0 form factors for use in HPE Cloudline Gen10 Servers. One question you may have is how these adapters compare to other offerings for Cloudline and those offered in HPE ProLiant servers. For that, we can look at the comparison chart in Table 3.

       

    Table 3: Comparison of I/O Features by Ethernet Supplier 

    Given that Cloudline is targeted for hyper-scale service provider customers with large and complex networks, the Cavium FastLinQ Ethernet adapters for HPE Cloudline offer administrators much more capability and flexibility than other I/O offerings. If you are considering HPE Cloudline servers, then you should also consider Cavium FastLinQ as your I/O of choice.

  • February 20, 2018

    If You're Not Using Intelligent I/O Adapters, You Should Be!

    By Todd Owens, Field Marketing Director, Marvell

    Like a kid in a candy store, choose I/O wisely. 

    Remember as a child, a quick stop at the convenience store, standing in front of the candy aisle with your parents saying, “Hurry and pick one.” But with so many choices, the decision was often confusing. With time running out, you’d usually just grab the name-brand candy you were familiar with. But what were you missing out on? Perhaps now you realize there were more delicious or healthy offerings you could have chosen.

    I use this as an analogy to discuss the choice of I/O technology for use in server configurations. There are lots of choices, and it takes time to understand all the differences. As a result, system architects in many cases just fall back to the legacy name-brand adapter they have become familiar with. Is this the best option for their client though? Not always. Here are some reasons why.

    Some of today’s Ethernet adapters provide added capabilities that I refer to as “Intelligent I/O”. These adapters utilize a variety of offload technology and other capabilities to take on tasks associated with I/O processing that are typically done in software by the CPU when using a basic “standard” Ethernet adapter. Intelligent offloads include things like SR-IOV, RDMA, iSCSI, FCoE or DPDK. Each of these offloads the work to the adapter and, in many cases, bypasses the O/S kernel, speeding up I/O transactions and increasing performance.

    As servers become more powerful and get packed with more virtual machines, running more applications, CPU utilizations of 70-80% are now commonplace. By using adapters with intelligent offloads, CPU utilization for I/O transactions can be reduced significantly, giving server administrators more CPU headroom. This means more CPU resources for applications or to increase the VM density per server.

    Another reason is to mitigate the performance impact of the Spectre and Meltdown fixes now required for x86 server processors. The side-channel vulnerabilities known as Spectre and Meltdown required kernel-level patches. These patches can significantly reduce CPU performance. For example, Red Hat reported the impact could be as much as a 19% performance degradation. That’s a big performance hit.

    Storage offloads and offloads like SR-IOV, RDMA and DPDK all bypass the O/S kernel. Because they bypass the kernel, the performance impacts of the Spectre and Meltdown fixes are bypassed as well. This means I/O transactions with intelligent I/O adapters are not impacted by these fixes, and I/O performance is maximized.

    offloads reduce impacts of Meltdown patches

    Finally, intelligent I/O can play a role in reducing cost and complexity and optimizing performance in virtual server environments. Some intelligent I/O adapters have port virtualization capabilities. Cavium Fibre Channel HBAs implement N_Port ID Virtualization, or NPIV, to allow a single Fibre Channel port to appear as multiple virtual Fibre Channel adapters to the hypervisor. For Cavium FastLinQ Ethernet Adapters, Network Partitioning, or NPAR, is utilized to provide similar capability for Ethernet connections. Up to eight independent connections can be presented to the host O/S, making a single dual-port adapter look like 16 NICs to the operating system. Each virtual connection can be set to specific bandwidth and priority settings, providing full quality of service per connection.

    The advantage of this port virtualization capability is two-fold. First, the number of cables and connections to a server can be reduced. In the case of storage, four 8Gb Fibre Channel connections can be replaced by a single 32Gb Fibre Channel connection. For Ethernet, eight 1GbE connections can easily be replaced by a single 10GbE connection and two 10GbE connections can be replaced with a single 25GbE connection, with 20% additional bandwidth to spare.
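
    A quick sketch of that consolidation arithmetic, using the examples from this paragraph:

      # Sketch: the consolidation arithmetic behind NPIV/NPAR port virtualization,
      # using the examples described above.
      consolidations = [
          # (description, old links, old Gbps each, new links, new Gbps each)
          ("FC HBA: 4 x 8Gb -> 1 x 32Gb", 4, 8, 1, 32),
          ("Ethernet: 8 x 1GbE -> 1 x 10GbE", 8, 1, 1, 10),
          ("Ethernet: 2 x 10GbE -> 1 x 25GbE", 2, 10, 1, 25),
      ]

      for desc, old_n, old_bw, new_n, new_bw in consolidations:
          old_total, new_total = old_n * old_bw, new_n * new_bw
          spare = (new_total - old_total) / new_total
          print(f"{desc}: cables reduced {old_n} -> {new_n}, "
                f"bandwidth {old_total} -> {new_total} Gb, spare headroom {spare:.0%}")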

    At HPE, there are more than fifty 10Gb-100GbE Ethernet adapters to choose from across the HPE ProLiant, Apollo, BladeSystem and HPE Synergy server portfolios. That’s a lot of documentation to read and compare. Cavium is proud to be a supplier of eighteen of these Ethernet adapters, and we’ve created a handy quick reference guide to highlight which of these offloads and virtualization features are supported on which adapters. View the complete Cavium HPE Ethernet Adapter Quick Reference guide here.

    For Fibre Channel HBAs, there are fewer choices (only nineteen), but we make a quick reference document available for our HBA offerings at HPE as well. You can view the Fibre Channel HBA Quick Reference here.

    In summary, when configuring HPE servers, think twice before selecting your I/O device. Choose an Intelligent I/O Adapter like those from HPE and Cavium. Cavium provides the broadest portfolio of intelligent Ethernet and Fibre Channel adapters for HPE Gen9 and Gen10 Servers and they support most, if not all, of the features mentioned in this blog. The best news is that the HPE/Cavium adapters are offered at the same or lower price than other products with fewer features. That means with HPE and Cavium I/O, you get more for less, and it just works too!

Archives