Marvell Blog

Featuring technology ideas and solutions worth sharing

Latest Articles

June 10th, 2019

Marvell Teams with Deloitte to Give Back to the Community

By Stacey Keegan, Senior Director, Corporate Communications, Marvell

The Marvell Finance team, in partnership with Deloitte, has undergone a significant transformation over the past three years to ensure the delivery of error-free financial reporting and analytics for the business. After successfully completing another 10-Q filing on June 7, the Marvell Finance team and Deloitte celebrated by giving back to the local community and volunteering at the Salvation Army. Proceeds from the donations sold in Salvation Army Family Stores are used to fund the Salvation Army’s Substance Abuse program in downtown San Jose. The program provides housing, work preparedness and rehabilitation free of charge to registered participants.

This event demonstrates Marvell’s commitment to enriching the communities where we live and work. Well done, team.

June 6th, 2019

Marvell Supports GSA Women’s Leadership Initiative – Join Us!

By Regan MacPherson, Chief Compliance Officer

Women today comprise 47% of the overall workforce; however, they account for only 15% of the engineering workforce. As part of Marvell’s commitment to diversity and inclusion in the workplace, the company is proud to support the GSA Women’s Leadership Initiative (WLI) to make an impact for women in STEM.

The GSA WLI seeks to significantly grow the number of women entering the semiconductor industry and increase the number of women on boards and in leadership positions.


As part of the initiative, which was announced yesterday, the GSA has established the WLI Council, which will create and implement programs and projects to meet the WLI objectives. The WLI Council harnesses the leadership of women who have risen to the top ranks of the semiconductor industry. Marvell’s own chief financial officer, Jean Hu, alongside 16 other women executives, will draw on their experiences to provide inspiration for, and sponsorship of, the next generation of female leaders.

“I am honored to be amongst a highly talented and diverse group of women on the GSA WLI Council to help ensure that women are an integral part of the leadership of the semiconductor industry,” said Jean Hu, CFO of Marvell. “Marvell and GSA share a vision to elevate women in STEM and support female entrepreneurs in their efforts to succeed in the tech industry.”

For more information on the GSA WLI, please visit https://www.gsaglobal.org/womens-leadership/. You can also join the Leadership group on LinkedIn to get involved.

May 13th, 2019

FastLinQ® NICs + Red Hat SDN

By Nishant Lodha, Product Marketing & Technical Marketing Manager, Marvell

A bit of validation once in a while is good for all of us – and that’s true whether you are the one providing it or the one receiving it. Most of the time it seems to be me giving out validation rather than getting it. Like the other day when my wife tried on a new dress and asked me, “How do I look?” Now, of course, we all know there is only one way to answer a question like that – at least if you want to avoid sleeping on the couch.

Recently, the Marvell team received some well-deserved validation for its efforts. The FastLinQ 45000/41000 series of high-performance Ethernet Network Interface Controllers (NICs) that we supply to the industry, which supports 10/25/50/100GbE operation, is now fully qualified by Red Hat for Fast Datapath (FDP) 19.B.

Figure 1: The FastLinQ 45000 and 41000 Ethernet Adapter Series from Marvell

Red Hat FDP is employed across an extensive array of products in the Red Hat portfolio – such as the Red Hat OpenStack Platform (RHOSP), the Red Hat OpenShift Container Platform and Red Hat Virtualization (RHV). FDP qualification means that FastLinQ can now address a far broader scope of open-source Software Defined Networking (SDN) use cases – including Open vSwitch (OVS), Open vSwitch with the Data Plane Development Kit (OVS-DPDK), Single Root Input/Output Virtualization (SR-IOV) and Network Functions Virtualization (NFV).

The engineers at Marvell worked closely with our counterparts at Red Hat on this project to ensure that the FastLinQ feature set operates correctly with the FDP production channel. This involved many hours of complex, in-depth testing. With FDP 19.B qualification, Marvell FastLinQ Ethernet Adapters can enable seamless SDN deployments with RHOSP 14, RHEL 8.0, RHV 4.3 and OpenShift 3.11.

Widely recognized as the ‘Swiss Army Knife’ of data networking, our FastLinQ 45000/41000 Ethernet adapters benefit from a highly flexible, programmable architecture. This architecture can deliver up to 68 million small packets per second, provides 240 SR-IOV virtual functions and supports tunneling while maintaining stateless offloads. As a result, customers have the hardware they need to seamlessly implement and manage even the most challenging network workloads in an increasingly virtualized landscape. Unlike most competing NICs, they also support Universal RDMA (concurrent RoCE, RoCEv2 and iWARP operation), offering a highly scalable and flexible solution. Learn more here.
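
For readers curious how those SR-IOV virtual functions show up on a host, the short Python sketch below reads the standard Linux sysfs entries that any SR-IOV-capable adapter exposes. It is purely illustrative: the interface name is a placeholder, and nothing here is specific to FastLinQ or any Marvell driver.

```python
# Illustrative sketch only: inspect the SR-IOV state of a NIC via standard Linux sysfs.
# The interface name "ens1f0" is a placeholder; substitute the name of your adapter.
from pathlib import Path

def sriov_status(iface: str) -> dict:
    dev = Path("/sys/class/net") / iface / "device"
    total = int((dev / "sriov_totalvfs").read_text())        # maximum VFs the adapter can expose
    active = int((dev / "sriov_numvfs").read_text())          # VFs currently enabled
    vf_links = sorted(p.name for p in dev.glob("virtfn*"))    # one PCI symlink per active VF
    return {"interface": iface, "total_vfs": total, "active_vfs": active, "vfs": vf_links}

if __name__ == "__main__":
    print(sriov_status("ens1f0"))
```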

Validation feels good. Thank you to the Red Hat and Marvell teams!

May 1st, 2019

Revolutionizing Data Center Architectures for the New Era in Connected Intelligence

By George Hervey, Principal Architect, Marvell

Though established mega-scale cloud data center architectures were able to support global data demands for many years, a fundamental change is taking place. Emerging 5G, industrial automation, smart cities and autonomous cars are driving the need for data to be directly accessible at the network edge. New architectures are needed in the data center to support these new requirements, including reduced power consumption, low latency, smaller footprints and composable infrastructure.

Composability disaggregates data storage resources, providing a more flexible and efficient platform for meeting data center requirements. But it does, of course, need cutting-edge switch solutions to support it. Capable of running at 12.8Tbps, the Marvell® Prestera® CX 8500 Ethernet switch portfolio has two key innovations that are set to redefine data center architectures: Forwarding Architecture using Slices of Terabit Ethernet Routers (FASTER) technology and Storage Aware Flow Engine (SAFE) technology.

With FASTER and SAFE technologies, the Marvell Prestera CX 8500 family can reduce overall network costs by more than 50%; lower power, space and latency; and pinpoint exactly where congestion issues are occurring by providing complete per-flow visibility.

View the video below to learn more about how Marvell Prestera CX 8500 devices represent a revolutionary approach to data center architectures.

April 29th, 2019

RoCE or iWARP for Low Latency?

By Todd Owens, Technical Marketing Manager, Marvell

Today, Remote Direct Memory Access (RDMA) is primarily utilized within high performance computing and cloud environments to reduce latency across the network. Enterprise customers will soon require the low latency networking that RDMA offers so that they can address a variety of applications, such as Oracle and SAP, and implement software-defined storage using Windows Storage Spaces Direct (S2D) or VMware vSAN. There are three protocols that can be used in an RDMA deployment: RDMA over InfiniBand, RDMA over Converged Ethernet (RoCE) and the Internet Wide-Area RDMA Protocol (iWARP). Given that there are several possible routes to go down, how do you ensure you pick the right protocol for your specific tasks?

In the enterprise sector, Ethernet is by far the most popular transport technology. Consequently, we can ignore the InfiniBand option, as it would require a forklift upgrade of the existing I/O infrastructure – making it far too costly for the vast majority of enterprise data centers. That leaves RoCE and iWARP. Both can provide low latency connectivity over Ethernet networks. But which is right for you?

Let’s start by looking at the fundamental differences between these two protocols. RoCE is the more popular of the two and is already being used by many cloud hyper-scale customers worldwide. RDMA-enabled adapters running RoCE are available from a variety of vendors, including Marvell.

RoCE provides latency at the adapter in the 1-5µs range but requires a lossless Ethernet network to achieve low latency operation. This means that the Ethernet switches in the network must support data center bridging (DCB) and priority flow control (PFC) mechanisms so that lossless traffic is maintained, and they will likely have to be reconfigured to support RoCE. The challenge with a lossless or converged Ethernet environment is that configuration is a complex process and scalability can be very limited in a modern enterprise context.
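
As a quick sanity check before a RoCE rollout, it can be useful to confirm whether PFC is actually enabled on the host adapters. The sketch below shows one hedged way to do that from Python, assuming a Linux host whose iproute2 build includes the dcb utility; the interface name is a placeholder and the exact output format varies by version.

```python
# Rough sketch, assuming iproute2's "dcb" utility is available on the host.
# Prints which traffic priorities have Priority Flow Control (PFC) enabled on an interface.
# The interface name "ens1f0" is a placeholder.
import subprocess

def show_pfc(iface: str) -> str:
    # Equivalent to running: dcb pfc show dev <iface>
    result = subprocess.run(["dcb", "pfc", "show", "dev", iface],
                            capture_output=True, text=True, check=True)
    return result.stdout

if __name__ == "__main__":
    print(show_pfc("ens1f0"))
```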

Now, it is not impossible to use RoCE at scale, but doing so requires the implementation of additional traffic congestion control mechanisms, such as Data Center Quantized Congestion Notification (DCQCN), which in turn calls for large, highly experienced teams of network engineers and administrators. Though hyper-scale customers have access to such teams, not all enterprise customers can say the same; their human resources and financial budgets can be more limited.

Looking back through the history of converged Ethernet environments, one need look no further than Fibre Channel over Ethernet (FCoE) to see the size of the challenge involved. Five years ago, many analysts and industry experts claimed FCoE would replace Fibre Channel in the data center. That simply didn’t happen, because of the complexity associated with using converged Ethernet networks at scale. FCoE still survives, but only in closed environments like HPE BladeSystem or HPE Synergy servers, where the network properties and scale are carefully controlled. These are single-hop environments with only a few connections in each system.

Finally, we come to iWARP. It has the advantage of running on today’s standard TCP/IP networks and provides latency at the adapter in the range of 10-15µs. This is higher than what RoCE can achieve but still well below that of standard Ethernet adapters.

They say that if all you have is a hammer, everything looks like a nail. The same is true when it comes to vendors touting their RDMA-enabled adapters. Most vendors only support one protocol, so of course that is the protocol they will recommend. Here at Marvell, we are unique in that, with our Universal RDMA technology, a customer can use both RoCE and iWARP on the same adapter. This gives us more credibility when making recommendations and means that we are effectively protocol agnostic. That is important from a customer standpoint, as it means we look at what is the best fit for their application criteria.

So which RDMA protocol do you use when? When latency is the number one criterion and scalability is not a concern, the choice should be RoCE. You will see RoCE implemented as the back-end network in modern disk arrays, between the controller node and NVMe drives. You will also find RoCE deployed within a rack, or where there are only one or two top-of-rack switches and subnets to contend with. Conversely, when latency is a key requirement but ease of use and scalability are also high priorities, iWARP is the better candidate. It runs on the existing network infrastructure and can easily scale between racks and even across long distances between data centers. A great use case for iWARP is as the network connectivity option for Microsoft Storage Spaces Direct implementations.
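
To make those rules of thumb concrete, here is a minimal sketch that encodes the selection guidance from this post as a simple helper. The function and its parameters are purely illustrative and not part of any Marvell software.

```python
# Illustrative only: encode the RoCE vs. iWARP rules of thumb described in this post.
# The function name and parameters are hypothetical, not part of any product API.

def pick_rdma_protocol(latency_is_top_priority: bool,
                       needs_to_scale_beyond_rack: bool,
                       has_lossless_fabric: bool) -> str:
    """Suggest an RDMA protocol based on the heuristics above."""
    if latency_is_top_priority and not needs_to_scale_beyond_rack and has_lossless_fabric:
        # Lowest adapter latency (roughly 1-5us), but needs DCB/PFC-configured switches.
        return "RoCE"
    # Runs over standard TCP/IP and scales easily between racks and across data centers.
    return "iWARP"

# Back-end network inside a disk array: single hop, lossless, latency-critical.
print(pick_rdma_protocol(True, False, True))   # -> RoCE
# Storage Spaces Direct cluster spanning multiple racks on a standard network.
print(pick_rdma_protocol(True, True, False))   # -> iWARP
```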

The good news for enterprise customers is that several Marvell® FastLinQ® Ethernet Adapters from HPE support Universal RDMA, so they can take advantage of low latency RDMA in the way that best suits them.  Here’s a list of HPE Ethernet adapters that currently support both RoCE and iWARP RDMA.

With RDMA-enabled adapters for HPE ProLiant, Apollo, HPE Synergy and HPE Cloudline servers, Marvell has a strong portfolio of 10GbE and 25GbE connectivity solutions for data centers. In addition to supporting low latency RDMA, these adapters are also NVMe-ready. This means they can accommodate NVMe over Ethernet fabrics running RoCE or iWARP, as well as NVMe over TCP (with no RDMA). They are a great choice for future-proofing the data center today for the workloads of tomorrow.

For more information on these and other Marvell I/O technologies for HPE, go to www.marvell.com/hpe.

If you’d like to talk with one of our I/O experts in the field, you’ll find contact info here.

March 6th, 2019

Composable Infrastructure: An Exciting New Prospect for Ethernet Switching

By George Hervey, Principal Architect, Marvell

The data center networking landscape is set to change dramatically.  More adaptive and operationally efficient composable infrastructure will soon start to see significant uptake, supplanting the traditional inflexible, siloed data center arrangements of the past and ultimately leading to universal adoption.

Composable infrastructure takes a modern software-defined approach to data center implementations. This means that rather than having to build dedicated storage area networks (SANs), a more versatile architecture can be employed through the use of NVMe and NVMe-over-Fabrics protocols.

Whereas previously data centers had separate resources for each key task, composable infrastructure enables compute, storage and networking capacity to be pooled together, with each function accessible via a single unified fabric. This brings far greater operational efficiency, with better allocation of available resources and less risk of over-provisioning, which is critical as edge data centers are introduced to the network to serve different workload demands.

Composable infrastructure will be highly advantageous to the next wave of data center implementations, though the increased degree of abstraction it brings presents certain challenges, mainly in dealing with acute network congestion, especially in multiple-host scenarios. Serious congestion issues can occur, for example, when several hosts attempt to retrieve data from a particular part of the storage resource simultaneously. Such problems are exacerbated in larger-scale deployments, where there are several network layers to consider and visibility is therefore more restricted.

There is a pressing need for a more innovative approach to data center orchestration.  A major streamlining of the network architecture will be required to support the move to composable infrastructure, with fewer network layers involved, thereby enabling greater transparency and resulting in less congestion.

This new approach will simplify data center implementations, thus requiring less investment in expensive hardware, while at the same time offering greatly reduced latency levels and power consumption.

Further, the integration of advanced analytical mechanisms is certain to be of huge value here as well — helping with more effective network management and facilitating network diagnostic activities.  Storage and compute resources will be better allocated to where there is the greatest need. Stranded capacity will no longer be a heavy financial burden.

Through the application of a more optimized architecture, data centers will be able to fully embrace the migration to composable infrastructure.  Network managers will have a much better understanding of what is happening right down at the flow level, so that appropriate responses can be deployed in a timely manner.  Future investments will be directed to the right locations, optimizing system utilization.

February 20th, 2019

NVMe/TCP – Simplicity is the Key to Innovation

By Nishant Lodha, Product Marketing & Technical Marketing Manager, Marvell

Whether it is the aesthetics of the iPhone or a work of art like Monet’s ‘Water Lilies’, simplicity is often a very attractive trait. I hear this resonate in everyday examples from my own life – from my boss at work, whose mantra is “make it simple”, to my wife of 15 years telling my teenage daughter that “beauty lies in simplicity”. For the record, both of these statements generally fall upon deaf ears.

The Non-Volatile Memory Express (NVMe) technology that is now driving the progression of data storage is another place where the value of simplicity is starting to be recognized, particularly with the advent of the NVMe-over-Fabrics (NVMe-oF) topology that is just about to start seeing deployment. The simplest and most trusted of Ethernet fabrics, namely the Transmission Control Protocol (TCP), has now been confirmed as an approved NVMe-oF standard by the NVM Express organization[1].


Figure 1: All the NVMe fabrics currently available

Just to give a bit of background here: NVMe enables the efficient utilization of flash-based Solid State Drives (SSDs) by accessing them over a high-speed interface, such as PCIe, using a streamlined command set designed specifically for flash implementations. By definition, though, NVMe is limited to the confines of a single server, which presents a challenge when looking to scale out NVMe and access it from any element within the data center. This is where NVMe-oF comes in. All Flash Arrays (AFAs), Just a Bunch of Flash (JBOF) or Fabric-Attached Bunch of Flash (FBOF) enclosures and Software Defined Storage (SDS) architectures will each be able to incorporate a front end built on NVMe-oF connectivity. As a result, the effectiveness with which servers, clients and applications can access external storage resources will be significantly enhanced.

A series of ‘fabrics’ have now emerged for scaling out NVMe. The first of these was Ethernet Remote Direct Memory Access (RDMA), in both its RDMA over Converged Ethernet (RoCE) and Internet Wide-Area RDMA Protocol (iWARP) derivatives. It was followed soon after by NVMe over Fibre Channel (FC-NVMe), and then by fabrics based on FCoE, InfiniBand and OmniPath.

But with so many fabric options already out there, why is it necessary to come up with another one? Do we really need NVMe-over-TCP (NVMe/TCP) too? RDMA-based NVMe fabrics (whether RoCE or iWARP) are supposed to deliver the extremely low latency that NVMe requires via a myriad of technologies – such as zero copy and kernel bypass – driven by specialized Network Interface Controllers (NICs). However, there are several factors which hamper this, and these need to be taken into account:

  • Firstly, most of the earlier fabrics (like RoCE and iWARP) have no existing installed base for storage networking to speak of (Fibre Channel is the only notable exception). For example, of the 12 million 10GbE+ NIC ports currently in operation within enterprise data centers, less than 5% have any RDMA capability (according to my quick back-of-the-envelope calculations).
  • The most popular RDMA protocol (RoCE) mandates a lossless network on which to run (which in turn requires highly skilled network engineers who command higher salaries). Even then, the protocol is prone to congestion problems, adding further frustration.
  • Finally, and perhaps most telling, the two RDMA protocols (RoCE and iWARP) are mutually incompatible.

Unlike any other NVMe fabric, TCP is truly pervasive – it is absolutely everywhere. TCP/IP is the fundamental foundation of the Internet, and every single Ethernet NIC and network out there supports the TCP protocol. With TCP, availability and reliability are simply not issues that need to be worried about. Extending the scale of NVMe over a TCP fabric is the logical thing to do.
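
To illustrate just how little is needed to reach an NVMe/TCP target over an ordinary TCP/IP network, the sketch below simply wraps the standard nvme-cli discover and connect commands. It assumes a Linux host with nvme-cli installed and NVMe/TCP support in the kernel; the address, port and subsystem NQN are placeholders, not a real deployment.

```python
# Hedged sketch: discover and connect to an NVMe/TCP target using the standard nvme-cli tool.
# Assumes a Linux host with nvme-cli installed and an NVMe/TCP-capable kernel.
import subprocess

TARGET_ADDR = "192.0.2.10"                          # placeholder address
TARGET_PORT = "4420"                                # IANA-assigned NVMe-oF port
TARGET_NQN = "nqn.2019-02.example.com:tcp-target"   # placeholder subsystem NQN

def discover() -> None:
    # List the subsystems the target advertises over the TCP transport.
    subprocess.run(["nvme", "discover", "-t", "tcp",
                    "-a", TARGET_ADDR, "-s", TARGET_PORT], check=True)

def connect() -> None:
    # Attach the remote namespace; it then appears as a local /dev/nvmeXnY block device.
    subprocess.run(["nvme", "connect", "-t", "tcp",
                    "-a", TARGET_ADDR, "-s", TARGET_PORT,
                    "-n", TARGET_NQN], check=True)

if __name__ == "__main__":
    discover()
    connect()
```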

NVMe/TCP is fast (especially when using Marvell FastLinQ 10/25/50/100GbE NICs, as they have a built-in full offload for NVMe/TCP), it leverages existing infrastructure and it keeps things inherently simple. That is a beautiful prospect for any technologist, and it is also attractive to company CIOs worried about budgets.

So, once again, simplicity wins in the long run!

[1] https://nvmexpress.org/welcome-nvme-tcp-to-the-nvme-of-family-of-transports/