We’re Building the Future of Data Infrastructure

Latest Marvell Blog Articles

  • July 11, 2024

    Bringing Payments to the Cloud with FIPS Certified LiquidSecurity®2 HSMs

    By Bill Hagerstrand

    Payment-specific Hardware Security Modules (HSMs)—dedicated server appliances for performing the security functions for credit card transactions and the like—have been around for decades, and little has changed with regard to form factors, custom APIs, and “old-school” physical user interfaces via Key Loading Devices (KLDs) and smart cards. Payment-specific HSMs represent 40% of the overall HSM TAM (Total Available Market), according to ABI Research. 

    The first HSM was built for the financial market back in the early 1970s. Since then, however, HSMs have become the de facto standard for broader General-Purpose (GP) use cases like database encryption and PKI. This growth has made GP applications 60% of the overall HSM TAM. Unlike Payment HSMs, where most deployments are 1U server form factors, GP HSMs have migrated to 1U, PCIe card, USB, and now semiconductor chip form factors to meet much broader use cases. 

    The typical HSM vendors that offer both Payment and GP HSMs have opted to split their fleets. They deploy Payment-specific HSMs that are PCI PTS HSM certified for payments and GP HSMs that are NIST FIPS 140-2/3 certified. If you are a financial institution that is government-mandated to deploy a fleet of Payment HSMs for processing payment transactions, but you also have a database with Personally Identifiable Information (PII) that needs to be encrypted to meet the General Data Protection Regulation (GDPR) or the California Consumer Privacy Act (CCPA), you would also need to deploy a separate fleet of GP HSMs. That means two separate sets of hardware, two separate software stacks, and two operational teams to manage them. Accordingly, the associated CapEx/OpEx spending is significant. 

    For Cloud Service Providers (CSPs), the hurdle was insurmountable and forced many to deploy dedicated bare-metal 1U servers to offer payment services in the cloud. The same restrictions that were forced on financial institutions were now making their way to CSPs. This deployment model is also contrary to the reason CSPs have succeeded in the past: offering competitively priced services, as needed, on shared resources. 

  • June 18, 2024

    Custom Compute in the AI Era

    This article is the final installment in a series on talks delivered at Accelerated Infrastructure for the AI Era, a one-day symposium held by Marvell in April 2024. 

    AI demands are pushing the limits of semiconductor technology, and hyperscale operators are at the forefront of adoption—they develop and deploy leading-edge technology that increases compute capacity. These large operators seek to optimize performance while simultaneously lowering total cost of ownership (TCO). With billions of dollars on the line, many have turned to custom silicon to meet their TCO and compute performance objectives.

    But building a custom compute solution is no small matter. Doing so requires a large IP portfolio, significant R&D scale and decades of experience to create the mix of ingredients that make up custom AI silicon. Today, Marvell is partnering with hyperscale operators to deliver custom compute silicon that’s enabling their AI growth trajectories.

    Why are hyperscale operators turning to custom compute?

    Hyperscale operators have always been focused on maximizing both performance and efficiency, but new demands from AI applications have amplified the pressure. According to Raghib Hussain, president of products and technologies at Marvell, “Every hyperscaler is focused on optimizing every aspect of their platform because the order of magnitude of impact is much, much higher than before. They are not only achieving the highest performance, but also saving billions of dollars.”

    With multiple business models in the cloud, including internal apps, infrastructure-as-a-service (IaaS), and software-as-a-service (SaaS)—the latter of which is the fastest-growing market thanks to generative AI—hyperscale operators are constantly seeking ways to improve their total cost of ownership. Custom compute allows them to do just that. Operators are first adopting custom compute platforms for their mass-scale internal applications, such as search and their own SaaS applications. Next up for greater custom adoption will be third-party SaaS and IaaS, where the operator offers their own custom compute as an alternative to merchant options.

    Progression of custom silicon adoption in hyperscale data centers.


  • June 12, 2024

    How AI Will Change the Building Blocks of Semis

    By Michael Kanellos, Head of Influencer Relations, Marvell

    Aaron Thean points to a slide featuring the downtown skylines of New York, Singapore and San Francisco along with a prototype of a 3D processor and asks, “Which one of these things is not like the other?”

    The answer? While most gravitate to the processor, San Francisco is a better answer. With a population well under 1 million, the city’s internal transportation and communications systems don’t come close to the level of complexity, performance and synchronization required by the other three.

    With future chips, “we’re talking about trillions of transistors on multiple substrates,” said Thean, the deputy president of the National University of Singapore and the director of SHINE, an initiative to expand Singapore’s role in the development of chiplets, during a one-day summit sponsored by Marvell and the university.

  • June 11, 2024

    How AI Will Drive Cloud Switch Innovation

    This article is part five in a series on talks delivered at Accelerated Infrastructure for the AI Era, a one-day symposium held by Marvell in April 2024. 

    AI has fundamentally changed the network switching landscape. AI requirements are driving foundational shifts in the industry roadmap, expanding the use cases for cloud switching semiconductors and creating opportunities to redefine the terrain.

    Here’s how AI will drive cloud switching innovation.

    A changing network requires a change in scale

    In a modern cloud data center, the compute servers are connected to each other and to the internet through a network of high-bandwidth switches. The approach is like that of the internet itself, allowing operators to build a network of any size while mixing and matching products from various vendors to create a network architecture specific to their needs.

    Such a high-bandwidth switching network is critical for AI applications, and a higher-performing network can lead to a more profitable deployment.

    However, expanding and extending the general-purpose cloud network to AI isn’t quite as simple as just adding more building blocks. In the world of general-purpose computing, one or more workloads can fit on a single server CPU. In contrast, AI’s large datasets don’t fit on a single processor, whether it’s a CPU, GPU or other accelerated compute device (XPU), making it necessary to distribute the workload across multiple processors. These accelerated processors must function as a single computing element. 

    AI calls for enhanced cloud switch architecture

    AI requires accelerated infrastructure to split workloads across many processors.

  • June 06, 2024

    Silicon Photonics Comes of Age

    This article is part four in a series on talks delivered at Accelerated Infrastructure for the AI Era, a one-day symposium held by Marvell in April 2024. 

    Silicon photonics—the technology of manufacturing the hundreds of components required for optical communications with CMOS processes—has been employed to produce coherent optical modules for metro and long-distance communications for years. The increasing bandwidth demands brought on by AI are now opening the door for silicon photonics to come inside data centers to enhance their economics and capabilities.  

    What’s inside an optical module?

    As the previous posts in this series noted, critical semiconductors like digital signal processors (DSPs), transimpedance amplifiers (TIAs) and drivers for producing optical modules have steadily improved in terms of performance and efficiency with each new generation of chips thanks to Moore’s Law and other factors.

    The same is not true for optics. Modulators, multiplexers, lenses, waveguides and other devices for managing light impulses have historically been delivered as discrete components.

    “Optics pretty much uses piece parts,” said Loi Nguyen, executive vice president and general manager of cloud optics at Marvell. “It is very hard to scale.”

    Lasers have been particularly challenging, with module developers forced to choose among a wide variety of technologies. Electro-absorption-modulated (EML) lasers are currently the only commercially viable option capable of meeting the 200 Gbps speeds necessary to support AI models. Often used for longer links, EML is the laser of choice for 1.6T optical modules. Not only is fab capacity for EML lasers constrained, but they are also incredibly expensive. Together, these factors make it difficult to scale at the rate needed for AI.
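    The arithmetic behind those figures can be checked in a line or two. A minimal sketch (the eight-lane configuration is an assumption, common for this class of module, not stated in the text):

```python
lane_rate_gbps = 200            # EML lane speed cited above
lanes = 8                       # assumed lane count for a 1.6T optical module
module_bw_tbps = lane_rate_gbps * lanes / 1000
print(module_bw_tbps)           # eight 200G lanes add up to a 1.6T module
```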

  • June 02, 2024

    A Deep Dive into the Copper and Optical Interconnects Weaving AI Clusters Together

    This article is part three in a series on talks delivered at Accelerated Infrastructure for the AI Era, a one-day symposium held by Marvell in April 2024.

    Twenty-five years ago, network bandwidth ran at 100 Mbps, and it was aspirational to think about moving to 1 Gbps over optical. Today, links are running at 1 Tbps over optical, or 10,000 times faster than the cutting-edge speeds of 25 years ago.

    Another interesting fact: “Every single large language model today runs on compute clusters that are enabled by Marvell’s connectivity silicon,” said Achyut Shah, senior vice president and general manager of Connectivity at Marvell.

    To keep ahead of what customers need, Marvell continually seeks to boost capacity, speed, and performance of the digital signal processors (DSPs), transimpedance amplifiers or TIAs, drivers, firmware and other components inside interconnects. It’s an interdisciplinary endeavor involving expertise in high frequency analog, mixed signal, digital, firmware, software and other technologies. The following is a map to the different components and challenges shaping the future of interconnects and how that future will shape AI.

    Inside the Data Center

    From a high level, optical interconnects perform the task their name implies: they deliver data from one place to another while keeping errors from creeping in during transmission. Another important task, however, is enabling data center operators to scale quickly and reliably.

    “When our customers deploy networks, they don’t start deploying hundreds or thousands at a time,” said Shah. “They have these massive data center clusters—tens of thousands, hundreds of thousands and millions of (computing) units—that all need to work and come up at the exact same time. These are at multiple locations, across different data centers. The DSP helps ensure that they don’t have to fine tune every link by hand.”

    Optical Interconnect Module

     

  • May 23, 2024

    Scaling AI Means Scaling Interconnects

    This article is part two in a series on talks delivered at Accelerated Infrastructure for the AI Era, a one-day symposium held by Marvell in April 2024.

    Interconnects have played a key role in enabling technology since the dawn of computing. During World War II, Alan Turing used an electromechanical computing machine to break the Nazis’ codes. This fast—at least at the time—computer used massive parallelism and numerous interconnects. Eighty years later, interconnects play a similar role for AI—providing a foundation for massively parallel problems. However, with the growth of AI come unique networking challenges—and Marvell is poised to meet the needs of this ever-growing market.

    What’s driving interconnect growth?
    Before 2023, the interconnect world was a different place. Interconnect speeds were set by the pace of cloud data center server upgrades: servers were upgraded every four years, so interconnect speeds doubled on the same four-year cadence. In 2023, generative AI took the interconnect wheel, and demand for AI is driving speeds to double every two years. And while copper remains a viable technology for chip-to-chip and other short-reach connections, optical is the dominant medium for AI.
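    The change of cadence compounds quickly. A minimal sketch of the two doubling schedules (the 100 Gbps starting point is hypothetical, chosen only to illustrate the gap):

```python
def projected_speed(base_gbps: float, years: float, doubling_period: float) -> float:
    """Project interconnect speed assuming it doubles every `doubling_period` years."""
    return base_gbps * 2 ** (years / doubling_period)

# Hypothetical 100 Gbps link in 2023; where each cadence lands after 8 years:
legacy = projected_speed(100, 8, 4)   # pre-AI cadence: doubling every 4 years
ai_era = projected_speed(100, 8, 2)   # AI-era cadence: doubling every 2 years
print(legacy, ai_era)                 # 400.0 vs 1600.0, a 4x gap after 8 years
```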

    “Optical is the only technology that can give you the bandwidth and reach needed to connect hundreds and thousands and tens of thousands of servers across the whole data center,” said Dr. Loi Nguyen, executive vice president and general manager of Cloud Optics at Marvell. “No other technology can do the job—except optical.”

    AI doubles interconnect speed in half the time

  • May 14, 2024

    The AI Opportunity at Marvell

    Two trillion dollars. That’s the GDP of Italy. It’s the rough market capitalization of Amazon, of Alphabet and of Nvidia. And, according to analyst firm Dell’Oro, it’s the amount of AI infrastructure CapEx expected to be invested by data center operators over the next five years. It’s a historically massive investment, which raises the question: does the return on AI justify the cost?

    The answer is a resounding yes.

    AI is fundamentally changing the way we live and work. Beyond chatbots, search results, and process automation, companies are using AI to manage risk, engage customers, and speed time to market. New use cases are continuously emerging in manufacturing, healthcare, engineering, financial services, and more. We’re at the beginning of a generational inflection point that, according to McKinsey, has the potential to generate $4.4 trillion in annual economic value. 

    In that light, two trillion dollars makes sense. It will be financed through massive gains in productivity and efficiency.

    Our view at Marvell is that the AI opportunity before us is on par with that of the internet, the PC, and cloud computing. “We’re as well positioned as any company in technology to take advantage of this,” said chairman and CEO Matt Murphy at the recent Marvell Accelerated Infrastructure for the AI Era investor event in April 2024.

  • April 15, 2024

    Infosec Global and Marvell partner to provide Crypto Agility in the Cloud

    By Bill Hagerstrand, Director of Security Solutions at Marvell

    InfoSec Global, a leader in cryptographic agility management analytics software, and Marvell, a leader in cloud-based HSMs (Hardware Security Modules), have partnered to enable visibility and security in the cloud.

    The Marvell® LiquidSecurity® family is a line of hardware security modules (HSMs) built in a PCIe card form factor instead of the traditional 1U and 2U pizza boxes. They are purpose-built to enable CSPs (Cloud Service Providers) to offer security services in a cloud environment. Not only do the smaller form factor and optimized processing of LiquidSecurity provide a path to reducing the cost, overhead, and rack space needed for encryption and key management; partitioning and other performance features also enable clouds to serve a large number of customers in a flexible manner.

  • April 10, 2024

    HashiCorp and Marvell: Teaming Up for Multi-Cloud Security Management

    By Bill Hagerstrand, Director of Security Solutions at Marvell

    In this blog I describe how cybersecurity professionals can utilize Marvell® LiquidSecurity® HSMs with self-managed HashiCorp Vault Enterprise software, deployed on-prem and in the cloud.

    HashiCorp provides infrastructure automation software for multi-cloud environments, enabling enterprises to unlock a common cloud operating model to provision, secure, connect, and run any application on any infrastructure. HashiCorp Vault provides the foundation for modern multi-cloud security. It was purpose-built in the cloud era to authenticate and access different clouds, systems, and endpoints, and centrally store, access, and deploy secrets, i.e. encryption keys, passwords, API tokens, tokens used in applications, services, privileged accounts, or other sensitive portions of the IT ecosystem. It also provides a simple workflow to encrypt data in flight and at rest. Global organizations use Vault to solve security challenges as they adopt cloud and DevOps-friendly solutions.
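    Vault’s transit secrets engine illustrates the “encrypt data in flight and at rest” workflow: an application sends base64-encoded plaintext to the engine’s encrypt endpoint and gets back a ciphertext token, while the key itself never leaves the (HSM-backed) Vault server. A minimal sketch of building such a request; the mount path `transit` and key name `my-app-key` are illustrative, and the endpoint shape follows Vault’s documented transit API:

```python
import base64
import json

def build_transit_encrypt_request(mount: str, key_name: str, plaintext: bytes):
    """Build the URL path and JSON body for a Vault transit encrypt call.

    Vault's transit engine expects the plaintext base64-encoded in the
    request body; the named encryption key never leaves the Vault server.
    """
    path = f"/v1/{mount}/encrypt/{key_name}"
    body = json.dumps({"plaintext": base64.b64encode(plaintext).decode("ascii")})
    return path, body

path, body = build_transit_encrypt_request("transit", "my-app-key", b"card-number-PII")
print(path)  # /v1/transit/encrypt/my-app-key
```

    In production, this request would be POSTed with an authentication token header, and the `data.ciphertext` field of the response (a `vault:v1:...` token) is what the application stores in place of the plaintext.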

  • April 10, 2024

    Cryptomathic and Marvell: Enhancing Crypto Agility for the Cloud

    By Bill Hagerstrand, Director of Security Solutions at Marvell

    Grab a cup of coffee. In this blog I describe how IT professionals can utilize Marvell® LiquidSecurity® Hardware Security Modules (HSMs) with Cryptomathic’s Crypto Service Gateway.

    Cryptomathic has 35+ years of experience providing global secure solutions to a variety of industries, including banking, government, technology manufacturing, cloud, and mobile. The company’s Crypto Service Gateway software is a first-of-its-kind central cryptographic platform that provides centralized and crypto-agile management of third party HSM hardware, enhancing the behavior of HSMs while improving the time-to-market of business applications.

    An HSM is a physically secure computing device that safeguards and manages digital keys, performs encryption/decryption functions, and provides strong authentication mechanisms. HSMs typically come in two form factors: a PCIe card or a 1U network-attached server. They are NIST (National Institute of Standards and Technology) FIPS 140-3 Level 3 certified, and most provide tamper evidence (visible signs of tampering), tamper resistance (the HSM becomes inoperable upon tampering) or tamper response (deletion of keys upon tamper detection). All provide logging and alerting, strong authentication, and key management features, and support common APIs like PKCS#11. Applications that require cryptographic services make API calls for keys used to encrypt data in motion or at rest.

  • April 09, 2024

    The Big, Hidden Problem with Encryption and How to Solve It

    By Bill Hagerstrand, Director of Security Solutions at Marvell

    Data encryption, invented nearly 50 years ago, remains one of our most valuable tools for securing data.

    It is also woefully under-utilized.

    The most recent Entrust-Ponemon survey shows that 62% of enterprises have an encryption strategy in place, which is another way of saying 38% don’t. (In 2021, it was 49%.) The U.S. Department of Health and Human Services imposes millions of dollars in fines per year on healthcare organizations for improper snooping of medical records by employees, or for health records accidentally released when a doctor’s laptop is stolen. And although 60% of organizations were hit with ransomware attacks this year, only 24% were able to thwart an attack by encrypting their data before the hackers could.

    So, what’s the hang up?

    Inertia. For all of its effectiveness, encryption has historically been difficult and/or inconvenient to use (in part, of course, because of the need to keep access tight). It requires cooperation between sender and receiver and adds processing overhead and time. How many of your personal emails or messages do you encrypt? One of the most widespread and successful uses of encryption in the consumer world—encrypting the data for financial transactions on phones—has succeeded in part because it takes the encryption process out of the hands of consumers and makes it a back-end function. Back-end encryption, meanwhile, is also typically performed on a hardware security module (HSM), a 1U to 2U appliance that companies have historically kept on premises and maintained on their own.

  • April 04, 2024

    Self-Destructing Encryption Keys and Static and Dynamic Entropy in One Chip

    By Eric Hunt-Schroeder, Senior Staff Manager, Digital IC Design, Marvell

    From ISSCC to GOMACTech, we’re presenting new and exciting technology solutions that will benefit the aerospace and defense industry. 

    For anyone who’s watched the 1980s cartoon Inspector Gadget, the phrase “This message will self-destruct” is bound to be familiar. Anytime Inspector Gadget is given a new assignment, he receives a message containing critical information and instructions. Inspector Gadget must act quickly because, shortly after opening, the message is promptly blown to smithereens. 

    Recently, I had the opportunity to present self-destructing encryption key technology within microchips during the Institute of Electrical and Electronics Engineers (IEEE) International Solid-State Circuits Conference (ISSCC) hosted in San Francisco from February 18 to 22. If the chip or keys become compromised, the keys self-destruct. Essentially, we’ve created security on the fly—the Inspector Gadget way. We received a significant amount of media attention from publications such as IEEE Spectrum and Tom’s Hardware.

    But we didn’t stop with ISSCC. Shortly after, I had the opportunity to attend the GOMACTech conference where I presented our reconfigurable physically unclonable function (PUF) and random number generation technology—all in one chip. This technology promises to be critical for the aerospace and defense industry. 

  • April 02, 2024

    Dual Use IP: Shortening Government Development Cycles from Two Years to Six Months

    By Aidan Kelly, Senior Principal Engineer, Solutions Architect at Marvell Government Solutions

    Just like its civilian counterparts, the government uses semiconductors to enable all critical systems. More so than its civilian counterparts, however, the government uses semiconductors in systems that can expose them to extreme conditions and that, in addition, carry highly stringent security requirements. With lives, safety, and national security on the line, the government can’t afford for these chips to fail.  

    As demand for chips that meet government specs increases, so do the costs associated with developing these highly technical and specific chips, particularly as the government works to integrate rapidly developing applications, such as artificial intelligence (AI). 

    But here’s the problem: chips that meet the government’s stringent specifications cannot be developed in a day, so by the time they complete what can be a long development and testing process, they may be eclipsed by newer technology.

    So how can the government get the advanced chips it needs, and get them quickly enough to keep up with ever-changing technology? 

  • March 25, 2024

    O-Band Coherent: An Idea Whose Time Is (Nearly) Here

    By Xi Wang, VP of Product Marketing of Optical Connectivity, Marvell

    Over the last 20 years, data rates for optical technology have climbed 1000x while power per bit has declined by 100x, a stunning trajectory that in many ways paved the way for the cloud, mobile Internet and streaming media.

    AI represents the next inflection point in bandwidth demand. Servers powered by AI accelerators and GPUs have far greater bandwidth needs than typical cloud servers: seven high-end GPUs alone can max out a switch that ordinarily can handle 500 two-processor cloud servers. Just as important, demand for AI services, and for higher-value AI services such as medical imaging or predictive maintenance, will further drive the need for more bandwidth. The AI market alone is expected to reach $407 billion by 2027. 

    O-band coherent or coherent lite—a technology that has been discussed for years at conferences but has yet to be deployed commercially in a meaningful way—will likely begin to percolate into the market over the next few years to help cloud service providers accommodate some of these challenges.

  • March 19, 2024

    How Optical Technology Will Save the Cloud

    By Radha Nagarajan, SVP and CTO of Optical Platforms, Marvell

    This article was first published by Photonics Spectra

    The cloud. It evokes an ethereal, weightless environment where problems get whisked away by a breeze.

    In reality, the cloud consists of massive industrial buildings containing millions of dollars’ worth of equipment spread over thousands, and increasingly millions, of square feet. In Arizona, some communities are complaining that cloud data centers are draining their aquifers and consuming far more water than expected, while in the UK and Ireland the power requirements of data centers are crimping needed housing development. Even in regions like Northern Virginia where the local economies are tightly bound to data centers, conflicts between residents and the cloud are emerging.

    With the rise of AI, these conflicts will escalate. AI models and data sets are growing exponentially in size, and developers are contemplating clusters with 32,000 GPUs, 2,000 switches, 4,000 servers and 74,000 optical modules. Such a system might require 45MW of power capacity, or nearly 5x the peak load of the Empire State Building. This resource intensiveness also shows how AI services could become an economic high-wire act for many.

    Performance up, Power Down: Over 20 years, the data rate of optical modules has increased by 1000x while power per bit has decreased by 100x.

  • February 22, 2024

    Marvell High-Speed Optical Connectivity Solutions Achieve 2024 Lightwave Innovation Honors

    By Kristin Hehir, Senior Manager, PR and Marketing, Marvell

    Marvell is excited to announce that three of its high-speed optical connectivity solutions have been distinguished among the best in the industry by the 2024 Lightwave Innovation Reviews. The three awards validate Marvell’s leadership in PAM4 DSP, coherent DSP and data center interconnect (DCI) modules for accelerated infrastructure.

    A panel of judges, composed of experts from the optical communications community, awarded Marvell the highest possible score of 5.0 for both its Nova 1.6 Tbps PAM4 electro-optics platform and its COLORZ® 800 ZR/ZR+ pluggable module, and an outstanding 4.5 honoree status for its Orion 800 Gbps coherent DSP. The honors reflect the industry’s recognition of Marvell’s leading-edge technologies for addressing the growing bandwidth and connectivity needs of artificial intelligence (AI), cloud data center and carrier networks.

    Lightwave Editor In Chief Sean Buckley expressed his congratulations, stating, “On behalf of the Lightwave Innovation Reviews, I would like to congratulate Marvell on achieving a well-deserved honoree status. This competitive program enables Lightwave to showcase and applaud the most innovative products, projects, technologies, and programs that significantly impact the industry.”

  • January 25, 2024

    How PCIe Interconnect is Critical for the Emerging AI Era

    By Annie Liao, Product Management Director, Connectivity, Marvell

    PCIe has historically been used as the protocol for communication between the CPU and computer subsystems. Its speed has increased steadily since its debut in 2003 (PCI Express), and after 20 years of development we are currently at PCIe Gen 5, with I/O bandwidth of 32 Gbps per lane. There are many factors driving the PCIe speed increase; the most prominent are artificial intelligence (AI) and machine learning (ML). In order for CPUs and AI accelerators/GPUs to work effectively with each other on larger training models, the communication bandwidth of the PCIe-based interconnects between them needs to scale to keep up with the exponentially increasing size of the parameters and data sets used in AI models. While the number of PCIe lanes supported increases with each generation, the physical constraints of package beachfront and PCB routing put a limit on the maximum number of lanes in a system. This leaves I/O speed increases as the only way to push more data transactions per second. The compute interconnect bandwidth demand fueled by AI and ML is driving a faster transition to the next generation of PCIe: PCIe Gen 6.

    PCIe has used 2-level Non-Return-to-Zero (NRZ) modulation since its inception, and increasing PCIe speed up to Gen 5 was achieved by doubling the I/O speed. For Gen 6, PCI-SIG decided to adopt Pulse-Amplitude Modulation 4 (PAM4), in which each symbol takes one of four levels encoding 2 bits of data (00, 01, 10, 11). The reduced margin resulting from the transition from 2-level to 4-level signaling has also necessitated the use of Forward Error Correction (FEC), a first for PCIe links. With the adoption of PAM4 signaling and FEC, Gen 6 marks an inflection point for PCIe from both signaling and protocol-layer perspectives. 
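    The PAM4 idea can be sketched in a few lines: each symbol carries one of four amplitude levels encoding two bits, so throughput doubles at an unchanged symbol rate. This is a simplified illustration; it uses the Gray-coded level mapping typical of PAM4 links and omits FEC and the rest of the PCIe protocol stack:

```python
# Gray-coded PAM4 mapping: adjacent amplitude levels differ in only one bit,
# so a one-level slicer error corrupts a single bit (simplified sketch).
GRAY_PAM4 = {(0, 0): 0, (0, 1): 1, (1, 1): 2, (1, 0): 3}
LEVEL_TO_BITS = {level: bits for bits, level in GRAY_PAM4.items()}

def pam4_encode(bits):
    """Encode an even-length bit sequence into PAM4 amplitude levels (0..3)."""
    assert len(bits) % 2 == 0
    return [GRAY_PAM4[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

def pam4_decode(levels):
    """Recover the bit sequence from a list of PAM4 levels."""
    return [b for level in levels for b in LEVEL_TO_BITS[level]]

bits = [0, 0, 0, 1, 1, 1, 1, 0]
levels = pam4_encode(bits)            # 4 symbols carry 8 bits
assert pam4_decode(levels) == bits    # lossless round trip
# NRZ would need 8 symbols for the same 8 bits; PAM4 needs only 4, which is
# how Gen 6 reaches 64 GT/s of data at Gen 5's symbol rate.
```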

    In addition to AI/ML, disaggregation of memory and storage is an emerging trend in compute applications that has a significant impact on the application of PCIe-based interconnects. PCIe has historically been adopted on-board and for in-chassis interconnects; attaching more front-facing NVMe SSDs is one common example. With the increasing trend toward flexible resource allocation, and the advancement of CXL technology, the server industry is now moving toward disaggregated and composable infrastructure. In this disaggregated architecture, the PCIe endpoints are located in a different chassis from the PCIe root complex, requiring the PCIe link to travel out of the system chassis. This is typically achieved through direct-attach cables (DACs) that can range up to 3 to 5 meters.

  • December 18, 2023

    Sustainability as an Opportunity for Semiconductor Product Innovation

    By Alua Suleimenova, ESG Program Manager, Marvell

    The annual United Nations Climate Change Conference COP28 taking place this year in Dubai has brought technology under the sustainability spotlight. In an era of rapid technological advancement, digitalization and proliferation of AI, environmental impacts of technology cannot be overlooked. The semiconductor industry is now at a unique juncture: the global demand for semiconductor products is growing rapidly alongside increasing pressure to reduce greenhouse gas (GHG) emissions.

    Reducing emissions from semiconductor product manufacturing remains the most effective and preferred climate change response for many companies. At the same time, addressing upstream climate impacts alone does not paint a comprehensive picture. Semiconductor companies are increasingly embracing sustainable product design and prioritizing power reduction both intrinsically, in the products themselves, and extrinsically, by collaborating with their customers on energy efficiency in the data infrastructure systems in which the semiconductor products are deployed.

    Marvell Circuit Board

  • November 05, 2023

    Fibre Channel: The #1 Choice for Mission-Critical Shared-Storage Connectivity

    By Todd Owens, Director, Field Marketing, Marvell

    Here at Marvell, we talk frequently to our customers and end users about I/O technology and connectivity. This includes presentations on I/O connectivity at various industry events and delivering training to our OEMs and their channel partners. Often, when discussing the latest innovations in Fibre Channel, audience questions will center around how relevant Fibre Channel (FC) technology is in today’s enterprise data center. This is understandable as there are many in the industry who have been proclaiming the demise of Fibre Channel for several years. However, these claims are often very misguided due to a lack of understanding about the key attributes of FC technology that continue to make it the gold standard for use in mission-critical application environments.

    From inception several decades ago, and still today, FC technology is designed to do one thing, and one thing only: provide secure, high-performance, and high-reliability server-to-storage connectivity. While the Fibre Channel industry is made up of a select few vendors, the industry has continued to invest and innovate around how FC products are designed and deployed. This isn’t just limited to doubling bandwidth every couple of years but also includes innovations that improve reliability, manageability, and security. 

  • October 19, 2023

    Shining a Light on Marvell Optical Technology and Innovation in the AI Era

    By Kristin Hehir, Senior Manager, PR and Marketing, Marvell

    The sheer volume of data traffic moving across networks daily is mind-boggling almost any way you look at it. During the past decade, global internet traffic grew by approximately 20x, according to the International Energy Agency. One contributing factor to this growth is the popularity of mobile devices and applications: smartphone users spend an average of five hours a day, or nearly a third of their waking hours, on their devices, up from three hours just a few years ago. The result is an incredible amount of data in the cloud that needs to be processed and moved. Around 70% of data traffic is east-west traffic, the traffic that moves inside data centers. Generative AI, and the exponential growth in the size of the data sets needed to feed it, will invariably continue to push the curve upward.

    Yet, for more than a decade, total power consumption has stayed relatively flat thanks to innovations in storage, processing, networking and optical technology for data infrastructure. The debut of PAM4 digital signal processors (DSPs) for accelerating traffic inside data centers and of coherent DSPs for pluggable modules has played a large, but often quiet, role in paving the way for growth while reducing cost and power per bit.

    Marvell at ECOC 2023

    At Marvell, we’ve been gratified to see these technologies get more attention. At the recent European Conference on Optical Communication, Dr. Loi Nguyen, EVP and GM of Optical at Marvell, talked with Lightwave editor-in-chief Sean Buckley about how Marvell 800 Gbps and 1.6 Tbps technologies will enable AI to scale.

  • October 18, 2023

    An Extreme Makeover for Data Centers

    By Dr. Radha Nagarajan, Senior Vice President and Chief Technology Officer, Optical and Cloud Connectivity Group, Marvell

    This article was originally published in Data Center Knowledge

    People or servers? 

    Communities around the world are debating this question as they try to balance the plans of service providers and the concerns of residents.  

    Last year, the Greater London Authority told real estate developers that new housing projects in West London may not be able to go forward until 2035 because data centers have taken all of the excess grid capacity1. EirGrid2 said it won’t accept new data center applications until 2028. Beijing3 and Amsterdam have placed strict limits on new facilities. Cities in the southwest and elsewhere4, meanwhile, are increasingly worried about water consumption, as mega-sized data centers can use over 1 million gallons a day5.

    When you add in the additional computing cycles needed for AI and applications like ChatGPT, the conflict becomes even more heated.

    On the other hand, we know we can’t live without them. Modern society, with its remote work, digital streaming and modern communications, depends on data centers. Data centers are also one of sustainability’s biggest success stories. Although workloads grew by approximately 10x in the last decade with the rise of SaaS and streaming, total power consumption stayed almost flat at around 1% to 1.5%6 of worldwide electricity thanks to technology advances, workload consolidation, and new facility designs. Try to name another industry that increased output by 10x on a relatively fixed energy diet.

  • October 02, 2023

    Is IP over WDM finally here?

    By Loi Nguyen, Executive Vice President, Cloud Optics Business Group, Marvell

    Some twenty years ago, the concept of IP over Wavelength Division Multiplexing (WDM) was proposed as a way to simplify optical infrastructure. In this vision, optical networks become point-to-point meshes with routers at the center. The concept was elegant but never took off, because the optical technology of the time could not keep up with the faster innovation cycle of CMOS, driven by Moore’s law. The larger form factor of WDM optics did not allow them to be directly plugged into a router port, and adopting a larger form factor on the router in order to implement IP over WDM at massive scale would have been prohibitively expensive.

    For routers to interface with the networks, a “transponder” is needed, which is connected to a router via short-reach optics on one side and WDM optics to the network on the other. The market for transponders grew quickly to become a multi-billion-dollar market.

    A Star is Born

    About 10 years ago, I was building a team at Inphi, where I was a co-founder, to further develop a nascent technology called silicon photonics. SiPho, as it’s called, leverages commercial CMOS foundries to develop photonics integrated circuits (PIC) that integrate hundreds of components ranging from high-speed modulators and detectors to passive devices such as couplers, waveguides, monitoring diodes, attenuators and so on. We were looking for ideas and customers to bring silicon photonics to the marketplace.

    Fortunately, good technology and market need found one another. A group of Microsoft executives had been considering IP over WDM to launch a new concept of “distributed data centers,” in which multiple data centers in a region are connected by high speed WDM optics using the same form factor as shorter reach “client optics” used in switches and routers. By chance, we met at ECOC 2013 in London for the initial discussion, and then some months later, a product that enabled IP over WDM at cloud scale was born.

  • September 22, 2023

    Product security is paramount to us: A response to recent Cavium product security concerns 

    By Raghib Hussain, President, Products and Technologies, Marvell

    To our Valued Customers:

    Recently, reports have surfaced alleging that certain Cavium products included a “backdoor” for the National Security Agency (NSA). We assure you that neither Cavium nor Marvell has ever knowingly incorporated or retained any vulnerability or backdoor in our products.

    Our products implement a suite of standards-based security algorithms such as AES, 3DES and SHA. Prior to 2014, some of our software libraries included an algorithm for random number generation called Dual_EC_DRBG. This algorithm was one of four officially recommended at the time by the US National Institute of Standards and Technology (NIST) that our products implemented. In 2013, this algorithm was reported by the New York Times, The Guardian, and ProPublica to include a backdoor for the NSA. After we learned of the potential issue, Cavium removed this algorithm from its software libraries and has not included it in any product shipped since then.

    Importantly, the Dual_EC_DRBG algorithm was included in some of Cavium’s software libraries for our chip-level products, but not in the chips themselves. As a result, while Cavium provided this algorithm (among many), the ultimate choice and control over the algorithms being used was managed by the equipment vendors integrating our products into their system-level products. Many companies, not just Cavium, implemented the NIST standard algorithms, including this one. In fact, according to NIST’s historical validation data, approximately 80 different products with semiconductors from different vendors implemented this algorithm in some combination of hardware, software, and firmware before it was removed.
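    For context, NIST SP 800-90A is the standard that specified Dual_EC_DRBG alongside other approved deterministic random bit generators; HMAC_DRBG is one of the constructions that remains in the standard after Dual_EC_DRBG was withdrawn. The sketch below shows the core of HMAC_DRBG with SHA-256; it is an illustrative, unvetted reading of the spec (no reseeding, prediction resistance, or additional-input handling), not code from any Cavium or Marvell library.

    ```python
    import hmac
    import hashlib

    class HmacDrbg:
        """Minimal HMAC_DRBG (SHA-256) following NIST SP 800-90A.
        Illustrative sketch only: omits reseeding, prediction
        resistance, and additional-input handling."""

        OUTLEN = 32  # SHA-256 output size in bytes

        def __init__(self, seed_material: bytes):
            # Instantiate: K = 0x00..00, V = 0x01..01, then mix in the seed.
            self.K = b"\x00" * self.OUTLEN
            self.V = b"\x01" * self.OUTLEN
            self._update(seed_material)

        def _hmac(self, key: bytes, data: bytes) -> bytes:
            return hmac.new(key, data, hashlib.sha256).digest()

        def _update(self, provided: bytes = b"") -> None:
            # HMAC_DRBG_Update: refresh K and V, folding in provided data.
            self.K = self._hmac(self.K, self.V + b"\x00" + provided)
            self.V = self._hmac(self.K, self.V)
            if provided:
                self.K = self._hmac(self.K, self.V + b"\x01" + provided)
                self.V = self._hmac(self.K, self.V)

        def generate(self, n_bytes: int) -> bytes:
            # Generate: iterate V = HMAC(K, V) and concatenate the outputs.
            out = b""
            while len(out) < n_bytes:
                self.V = self._hmac(self.K, self.V)
                out += self.V
            self._update()  # post-generate update for backtracking resistance
            return out[:n_bytes]
    ```

    The takeaway mirrors the point above: a DRBG is a software-library choice. A crypto library can expose several NIST-approved DRBG constructions, and the integrator, not the silicon, decides which one gets instantiated.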

  • September 11, 2023

    Automotive Central Switches: The Latest Step in the Evolution of Cars

    By Amir Bar-Niv, VP of Marketing, Automotive Business Unit, Marvell

    When you hear people refer to cars as “data centers on wheels,” they’re usually thinking about how an individual experiences enhanced digital capabilities in a car, such as streaming media on-demand or new software-defined services for enhancing the driving experience.

    But there’s an important implication lurking behind the statement. For cars to take on tasks that require data center-like versatility, they need to be built like data centers. Automakers in conjunction with hardware makers and software developers are going to have to develop a portfolio of highly specialized technologies that work together, based around similar architectural concepts, to deliver the capabilities needed for the software-defined vehicle while at the same time keeping power and cost to a minimum. It’s not an easy balancing act.

    Which brings us to the emergence of a new category of products for the zonal architecture: zonal Ethernet switches and the associated automotive central Ethernet switches. Today’s car networks are built around domain-localized networks: speakers, video screens and other infotainment devices link to the infotainment ECU, powertrain and brakes are part of the body domain, and the ADAS domain is built around the sensors and high-performance processors. Bandwidth and security can be form-fitted to each application.

  • September 05, 2023

    800G: An Inflection Point for Optical Networks

    By Samuel Liu, Senior Director, Product Line Management, Marvell

    Digital technology has what you could call a real estate problem. Hyperscale data centers now regularly exceed 100,000 square feet in size. Cloud service providers plan to build 50 to 100 edge data centers a year and distributed applications like ChatGPT are further fueling a growth of data traffic between facilities. Similarly, this explosive surge in traffic also means telecommunications carriers need to upgrade their wired and wireless networks, a complex and costly undertaking that will involve new equipment deployment in cities all over the world.

    Weaving all of these geographically dispersed facilities into a fast, efficient, scalable and economical infrastructure is now one of the dominant issues for our industry.

    Pluggable modules based on coherent digital signal processors (CDSPs) debuted in the last decade to replace the transponders and other equipment used to generate DWDM-compatible optical signals. These early modules, with their large form factors, didn’t match the performance of incumbent solutions or support high-density data transmission, so they could only be deployed in limited use cases. Over time, advances in technology optimized the performance of pluggable modules, and CDSP speeds grew from 100 to 200 and then 400 Gbps. Continued innovation, and the development of an open ecosystem, helped expand the potential applications.

  • August 17, 2023

    Marvell Bravera SC5 SSD Controller Named Winner at FMS 2023 Best of Show Awards

    By Kristin Hehir, Senior Manager, PR and Marketing, Marvell

     

    Marvell and Memblaze were honored with the “Most Innovative Customer Implementation” award at the Flash Memory Summit (FMS), the industry’s largest conference featuring flash memory and other high-speed memory technologies, last week.

    Powered by the Marvell® Bravera™ SC5 controller, Memblaze developed the PBlaze 7 7940 GEN5 SSD family, delivering an impressive 2.5 times the performance and 1.5 times the power efficiency of conventional PCIe 4.0 SSDs, with ~55/9us read/write latency1. This makes the SSD ideal for business-critical applications and high-performance workloads like machine learning and cloud computing. In addition, Memblaze utilized the innovative sustainability features of Marvell’s Bravera SC5 controllers for greater resource efficiency, reduced environmental impact, and streamlined development efforts and inventory management.

  • July 19, 2023

    Marvell Joins Imec Automotive Chiplet Initiative to Facilitate Compute SoCs for Super-human Sensing

    By Willard Tu, Associate VP, Product Marketing – Automotive Compute, Marvell

    Marvell is excited to announce that we’ve joined the automotive chiplet initiative coordinated by imec, a world-leading research and innovation hub in nanoelectronics and digital technologies. Imec has formed an informal ecosystem of leading companies from multiple automotive industry segments to address the challenge of bringing multi-chiplet compute modules to the automotive market.

    The goal of imec’s automotive chiplet initiative is to address the design challenges that arise from ever-increasing data movement, processing, storage and security requirements. These demands complicate the automotive manufacturers’ desire for scalable performance to address different vehicle classes, while reducing costs and development time and ensuring consistent quality, reliability and safety.

    And these demands will be made even more intense by the coming era of super-human sensing. The fusion of data from multi-spectral cameras (visible and infrared), radar and LiDAR will enable “vision” beyond human capability. Such sensor fusion will be a critical requirement for safe autonomous driving.

     

    Super-human sensing needs massive processing
    Source: imec presentation at ITF World, May 2023

  • June 27, 2023

    Scaling AI Infrastructure with High-Speed Optical Connectivity

    By Suhas Nayak, Senior Director of Solutions Marketing, Marvell

     

    In the world of artificial intelligence (AI), where compute performance often steals the spotlight, there's an unsung hero working tirelessly behind the scenes. It's something that connects the dots and propels AI platforms to new frontiers. Welcome to the realm of optical connectivity, where data transfer becomes lightning-fast and AI's true potential is unleashed. But wait, before you dismiss the idea of optical connectivity as just another technical detail, let's pause and reflect. Think about it: every breakthrough in AI, every mind-bending innovation, is built on the shoulders of data—massive amounts of it. And to keep up with the insatiable appetite of AI workloads, we need more than just raw compute power. We need a seamless, high-speed highway that allows data to flow freely, powering AI platforms to conquer new challenges. 

    In this post, I’ll explain the importance of optical connectivity, particularly the role of DSP-based optical connectivity, in driving scalable AI platforms in the cloud. So, buckle up, get ready to embark on a journey where we unlock the true power of AI together. 

  • June 13, 2023

    FC-NVMe Goes Mainstream for Next-Generation Block Storage from HPE

    By Todd Owens, Field Marketing Director, Marvell

    While Fibre Channel (FC) has been around for a couple of decades now, the Fibre Channel industry continues to develop the technology in ways that keep it at the forefront of the data center for shared-storage connectivity. Always a reliable technology, continued innovations in performance, security and manageability have made Fibre Channel I/O the go-to connectivity option for business-critical applications that leverage the most advanced shared storage arrays.

    A recent development that highlights the progress and significance of Fibre Channel is Hewlett Packard Enterprise’s (HPE) recent announcement of the latest offering in its Storage as a Service (STaaS) lineup with 32Gb Fibre Channel connectivity. HPE GreenLake for Block Storage MP, powered by HPE Alletra Storage MP hardware, features a next-generation platform connected to the storage area network (SAN) using either traditional SCSI-based FC or NVMe over FC connectivity. This innovative solution not only provides customers with highly scalable capabilities but also delivers cloud-like management, allowing HPE customers to consume block storage any way they desire – own and manage, outsource management, or consume on demand.

    HPE GreenLake for Block Storage powered by Alletra Storage MP

    At launch, HPE is providing FC connectivity from this storage system to the host servers, supporting both FC-SCSI and native FC-NVMe. HPE plans to provide additional connectivity options in the future, but the fact that it prioritized FC connectivity speaks volumes about the customer demand for mature, reliable, and low-latency FC technology.

  • June 12, 2023

    AI and the Tectonic Shift Coming to Data Infrastructure

    By Michael Kanellos, Head of Influencer Relations, Marvell

    AI’s growth is unprecedented from any angle you look at it. The size of large training models is growing 10x per year. ChatGPT’s 173 million-plus users are turning to the website an estimated 60 million times a day (compared to zero the year before). And daily, people are coming up with new applications and use cases.

    As a result, cloud service providers and others will have to transform their infrastructures in similarly dramatic ways to keep up, says Chris Koopmans, Chief Operations Officer at Marvell in conversation with Futurum’s Daniel Newman during the Six Five Summit on June 8, 2023. 

    “We are at the beginning of at least a decade-long trend and a tectonic shift in how data centers are architected and how data centers are built,” he said.  

    The transformation is already underway. AI training, and a growing percentage of cloud-based inference, has already shifted from running on two-socket servers based around general-purpose processors to systems containing eight or more GPUs or TPUs optimized to solve a smaller set of problems more quickly and efficiently.

  • May 22, 2023

    Are We Ready for Large-scale AI Workloads?

    By Noam Mizrahi, Executive Vice President, Chief Technology Officer, Marvell

    Originally published in Embedded

    ChatGPT has fired the world’s imagination about AI. The chatbot can write essays, compose music, and even converse in different languages. If you’ve read any ChatGPT poetry, you can see it doesn’t pass the Turing Test yet, but it’s a huge leap forward from what even experts expected from AI just three months ago. Over one million people became users in the first five days, shattering records for technology adoption.

    The groundswell also strengthens arguments that AI will have an outsized impact on how we live—with some predicting AI will contribute significantly to global GDP by 2030 by fine-tuning manufacturing, retail, healthcare, financial systems, security, and other daily processes.

    But the sudden success also shines light on AI’s most urgent problem: our computing infrastructure isn’t built to handle the workloads AI will throw at it. The size of AI networks grew by 10x per year over the last 5 years. By 2027 one in five Ethernet switch ports in data centers will be dedicated to AI, ML and accelerated computing.

  • May 08, 2023

    Marvell Recognized as 2023 Bay Area Best Place to Work

    By Liz Du, Director, Marvell

    Great news! Marvell is excited to share that the company has been recognized in the Best Places to Work 2023 largest company category at the annual awards program produced by the San Francisco Business Times and Silicon Valley Business Journal. Marvell’s inclusion on the annual list was determined by survey results provided voluntarily by company employees. 

    “This recognition reflects our focus on cultivating and continuously nurturing a culture where wellness and inclusivity are prioritized,” said Janice Hall, executive vice president, chief human resources officer at Marvell. “We have an incredible, diverse team at Marvell and I’m very proud of our commitment to creating an environment where everyone is supported to do work that matters, while given the opportunity to take ownership of their careers.”

  • April 17, 2023

    Marvell Celebrates Earth Week

    By Rebecca O'Neill, Global Head of ESG, Marvell

    Marvell is committed to being a good steward of the environment and we are excited to mark the 2023 Earth Day on April 22 with a week-long celebration. Our employees care about protecting our planet and want to do their part both at work and beyond.  

    We look forward to engaging our employees in virtual and in-person events throughout the week to enable them to better protect the environment, such as:

    The Big 6 Sustainability Webinar:

    We are kicking off the week with a global webinar with special guest speakers from Carbonauts, a company that teaches people how to reduce their carbon footprints and integrate sustainability into their day-to-day lives. Employees will learn the six biggest levers for reducing their carbon footprints. Carbonauts is a frequent guest at sustainability conferences and podcasts and the company’s clients include many Fortune 500 companies like Amazon, Chanel, Toyota, AT&T, Warner Brothers, and Netflix, to name a few. Carbonauts is on a mission to get society to the tipping point of behavior change to make sustainable living a way of life for all.

  • March 23, 2023

    How Secure is Your 5G Network?

    By Bill Hagerstrand, Security Solutions BU, Marvell

    New Challenges and Solutions in an Open, Disaggregated Cloud-Native World

    Time to grab a cup of coffee, as I describe how the transition towards open, disaggregated, and virtualized networks – also known as cloud-native 5G – has created new challenges in an already-heightened 4G-5G security environment.

    5G networks move, process and store an ever-increasing amount of sensitive data as a result of faster connection speeds, the mission-critical nature of new enterprise, industrial and edge computing/AI applications, and the proliferation of 5G-connected IoT devices and data centers. At the same time, evolving architectures are creating new security threat vectors. The opening of the 5G network edge is driven by O-RAN standards, which disaggregate the radio units (RUs), front-haul, mid-haul, and distributed units (DUs). Virtualization of the 5G network further disaggregates hardware and software and introduces commodity servers with open-source software running in virtual machines (VMs) or containers from the DU to the core network.

    These factors have necessitated improvements in 5G security standards, including additional protocols and new security features. But these measures alone are not enough to secure the 5G network in the cloud-native and quantum computing era. This blog details the growing need for cloud-optimized HSMs (Hardware Security Modules) and their many critical 5G use cases, from the device to the core network.

  • March 21, 2023

    New Goals Demonstrate Marvell’s Commitment to ESG

    By Rebecca O'Neill, Global Head of ESG, Marvell

    Marvell recently released its inaugural Environmental, Social and Governance (ESG) Report, detailing the company's goals, strategic approach, and commitment to building a sustainable future. Marvell's approach is based on the areas of greatest impact and opportunity for our company: integrating environmental and social considerations into our product design and responsibly managing the impacts of our supply chain, while focusing on strategic ESG initiatives that are material to our financial performance and long-term value creation.  

    Part of our overarching commitment to address ESG topics involves continuous improvement. That’s why Marvell has set a range of goals that showcase key areas of focus for our business, now and in the future. 

  • March 10, 2023

    Introducing Nova, a 1.6T PAM4 DSP Optimized for High-Performance Fabrics in Next-Generation AI/ML Systems

    By Kevin Koski, Product Marketing Director, Marvell

    Last week, Marvell introduced Nova™, its latest, fourth-generation PAM4 DSP for optical modules. It features breakthrough 200G per lambda optical bandwidth, which enables the module ecosystem to bring 1.6 Tbps pluggable modules to market. You can read more about it in the press release and the product brief.

    In this post, I’ll explain why the optical modules enabled by Nova are the optimal solution to high-bandwidth connectivity in artificial intelligence and machine learning systems.

    Let’s begin with a look into the architecture of supercomputers, also known as high-performance computing (HPC).

    Historically, HPC has been realized using large-scale computer clusters interconnected by high-speed, low-latency communications networks to act as a single computer. Such systems are found in national or university laboratories and are used to simulate complex physics and chemistry to aid groundbreaking research in areas such as nuclear fusion, climate modeling and drug discovery. They consume megawatts of power.

    The introduction of graphics processing units (GPUs) has provided a more efficient way to complete specific types of computationally intensive workloads. GPUs allow for the use of massive, multi-core parallel processing, while central processing units (CPUs) execute serial processes within each core. GPUs have both improved HPC performance for scientific research purposes and enabled a machine learning (ML) renaissance of sorts. With these advances, artificial intelligence (AI) is being pursued in earnest.
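    The serial-versus-parallel distinction above can be sketched in a few lines. A CPU-style loop touches one element at a time, in order, while a GPU-style, data-parallel formulation expresses the same work as a single bulk operation over the whole array. NumPy's vectorized operations stand in here for GPU kernels; this is an analogy for the programming model, not actual GPU code.

    ```python
    import numpy as np

    def serial_sum_squares(xs):
        """CPU-style: process one element at a time, sequentially."""
        total = 0.0
        for x in xs:
            total += x * x
        return total

    def parallel_sum_squares(xs):
        """GPU-style: one data-parallel operation over all elements at once.
        The vectorized dot product stands in for a GPU kernel launch that
        squares every element in parallel, then reduces."""
        v = np.asarray(xs, dtype=np.float64)
        return float(np.dot(v, v))

    # Both formulations compute the same result; the difference is that the
    # second expresses the work as bulk operations a parallel machine can
    # spread across thousands of cores.
    data = list(range(10_000))
    assert serial_sum_squares(data) == parallel_sum_squares(data)
    ```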

  • March 02, 2023

    Introducing the 51.2T Teralynx 10, the Industry’s Lowest Latency Programmable Switch

    By Amit Sanyal, Senior Director, Product Marketing, Marvell

    If you’re one of the 100+ million monthly users of ChatGPT—or have dabbled with Google’s Bard or Microsoft’s Bing AI—you’re proof that AI has entered the mainstream consumer market.

    And what’s entered the consumer mass-market will inevitably make its way to the enterprise, an even larger market for AI. There are hundreds of generative AI startups racing to make it so. And those responsible for making these AI tools accessible—cloud data center operators—are investing heavily to keep up with current and anticipated demand.

    Of course, it’s not just the latest AI language models driving the coming infrastructure upgrade cycle. Operators will pay equal attention to improving general-purpose cloud infrastructure, and will take steps to further automate and simplify operations.

    Teralynx 10

    To help operators meet their scaling and efficiency objectives, today Marvell introduces Teralynx® 10, a 51.2 Tbps programmable 5nm monolithic switch chip designed to address the operator bandwidth explosion while meeting stringent power- and cost-per-bit requirements. It’s intended for leaf and spine applications in next-generation data center networks, as well as AI/ML and high-performance computing (HPC) fabrics.

    A single Teralynx 10 replaces twelve of the 12.8 Tbps generation, the last to see widespread deployment. The resulting savings are impressive: 80% power reduction for equivalent capacity.
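    One way to read the twelve-chip figure is through standard two-tier (leaf/spine) Clos arithmetic: building 51.2 Tbps of non-blocking capacity from 12.8 Tbps chips takes a fabric of leaf and spine devices, whereas a single 51.2 Tbps chip serves the same capacity in one tier. The numbers below are my back-of-envelope reading under the usual assumption that each leaf splits its bandwidth evenly between downlinks and uplinks; they are not Marvell's published design.

    ```python
    # Back-of-envelope: 51.2 Tbps of non-blocking two-tier (leaf/spine)
    # capacity built from 12.8 Tbps switch chips.
    CHIP_TBPS = 12.8
    TARGET_TBPS = 51.2

    # Each leaf chip splits bandwidth: half toward servers, half toward spines.
    leaf_down_tbps = CHIP_TBPS / 2                 # 6.4 Tbps server-facing per leaf
    leaves = TARGET_TBPS / leaf_down_tbps          # 8 leaf chips needed
    uplink_tbps = leaves * (CHIP_TBPS / 2)         # total uplink bandwidth to carry
    spines = uplink_tbps / CHIP_TBPS               # 4 spine chips needed
    total_chips = int(leaves + spines)             # 12 chips in all

    # A single 51.2 Tbps chip offers the same server-facing capacity in one
    # tier, which is how a 4x bandwidth step can translate into a 12x
    # device-count reduction (and the large power savings that follow).
    assert total_chips == 12
    ```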

  • February 01, 2023

    Wind River and Marvell Collaborate to Expand Virtualized RAN Solutions for CSPs

    By Kim Markle, Director Influencer Relations, Marvell

    Wind River and Marvell have collaborated to create an open, virtualized Radio Access Network (vRAN) solution for communication service providers (CSPs) that offers cloud scalability with the features, performance, and energy efficiency of established 5G networks. The collaboration integrates two complementary, industry-leading technologies—the Marvell® OCTEON® 10 Fusion 5G baseband processor and the Wind River Studio cloud software—to provide the carrier ecosystem a deployment-ready vRAN platform built on technologies that are widely proven in 5G networks and data centers.  

    CSPs aim to leverage established IT infrastructure for enhanced service agility and streamlined DevOps in the cloud-native RAN. Marvell's OCTEON 10 Fusion processor supports these goals with programmability based on open-source, industry-standard interfaces and integration with leading cloud software platforms such as Wind River Studio.

    To support open-source distribution alongside Wind River Studio software, OCTEON 10 Fusion software drivers are provided through StarlingX, an open development and integration project. Marvell’s drivers enable Wind River Studio software to communicate with and control the OCTEON 10 Fusion processor. This gives developers access to an optimized vRAN system that offers new options for CSPs and helps expand the carrier ecosystem of RAN and data center hardware and software suppliers, as well as system integrators.

  • February 28, 2023

    Dell Promotes vRAN Leadership with OCTEON 10 Fusion

    By Johnny Truong, Senior Manager, Public Relations, Marvell

    To address the growing demands of 5G applications (and beyond), networks are not only expected, but required, to offer features, performance, and capacity competitive with traditional RAN while improving energy efficiency and cost-savings.

    Watch this video of Dennis Hoffman, SVP and GM of Dell’s Telecom Systems Business, discussing how Dell and Marvell will continue building on their strategic partnership in pursuit of truly open mobile networks, and how they’re bringing the power of Layer 1 acceleration technology to the vRAN architecture with Marvell’s OCTEON® 10 Fusion processor, designed for 5G RAN.

  • February 27, 2023

    Marvell and VMware Collaborate to Optimize the RAN

    By Peter Carson, Senior Director Solutions Marketing, Marvell and Tosin Olopade, Technical Product Line Manager, VMware and Padma Sudarsan, Director of Engineering, RAN Architecture, VMware

    VMware, a pioneer in assisting communication service providers (CSPs) in transforming their networks, is partnering with Marvell, a leading provider of data infrastructure semiconductor solutions, to improve RAN performance and ROI. This collaboration provides solutions that enable CSPs to meet the demands of 5G’s increased capacity and use cases, optimizing the revenue and efficiency of each RAN site.

    RAN sites worldwide are targeted for new technology deployment, with traditional, custom-made equipment being replaced by servers adapted from data centers. This transformation to virtualized RAN and Open RAN, which moves functionality from dedicated hardware into software, allows CSPs to select servers and software based on their strategic goals, enabling them to offer services that differentiate them from their competitors.

    However, 5G RAN workloads, particularly Layer 1 (L1), are far more complex and latency-sensitive than the applications general-purpose CPUs were designed to address; 5G RAN virtualization can strain even the most robust CPUs. The rapid increase in 5G network speeds to multi-gigabit-per-second rates, and the management of software-centric RAN Distributed Units (DUs), has resulted in rising energy consumption and cooling demands. This leads to increased costs, such as higher electricity bills, and may compromise CSPs’ plans to monetize their RAN investments.

  • February 21, 2023

    Marvell and Aviz Networks Collaborate to Drive SONiC Deployment in Cloud and Enterprise Data Centers

    By Kant Deshpande, Director, Product Management, Marvell

    Disaggregation is the future
    Disaggregation—the decoupling of hardware and software—is arguably the future of networking. Disaggregation lets customers select best-of-breed hardware and software, enabling rapid innovation by separating the hardware and software development paths.

    Disaggregation started with server virtualization and is being adapted to storage and networking technology. In networking, disaggregation promises that any network operating system (NOS) can be integrated with any switch silicon. Open-source standards like ONIE allow a networking switch to load and install any NOS during the boot process.

    SONiC: the Linux of networking OS
    Software for Open Networking in Cloud (SONiC) has been gaining momentum as the preferred open-source cloud-scale network operating system (NOS).

    In fact, Gartner predicts that by 2025, 40% of organizations that operate large data center networks (greater than 200 switches) will run SONiC in a production environment.[i] According to Gartner, given readily expanding customer interest and a commercial ecosystem, there is a strong possibility SONiC will become analogous to Linux as a network operating system within the next three to six years.

  • February 14, 2023

    The Three Things Next-Generation Data Centers Need from Networking

    By Amit Sanyal, Senior Director, Product Marketing, Marvell

    Data centers are arguably the most important buildings in the world. Virtually everything we do—from ordinary business transactions to keeping in touch with relatives and friends—is accomplished, or at least assisted, by racks of equipment in large, low-slung facilities.

    And whether they know it or not, your family and friends are causing data center operators to spend more money. But it’s for a good cause: it allows your family and friends (and you) to continue their voracious consumption, purchasing and sharing of every kind of content—via the cloud.

    Of course, it’s not only the personal habits of your family and friends that are causing operators to spend. The enterprise is equally responsible. They’re collecting data like never before, storing it in data lakes and applying analytics and machine learning tools—both to improve user experience, via recommendations, for example, and to process and analyze that data for economic gain. This is on top of the relentless, expanding adoption of cloud services.

  • February 10, 2023

    Marvell Data Center Interconnect Solutions Achieve 2023 Lightwave Innovation Honors

    By Kristin Hehir, Senior Manager, PR and Marketing, Marvell

    Marvell has been honored with two 2023 Lightwave Innovation Reviews high scores, validating its leadership in PAM4 DSP solutions for data infrastructure. The two awards reflect the industry’s recognition of Marvell’s recent best-in-class innovations to address the growing bandwidth and interconnect needs of cloud data center networks. An esteemed and experienced panel of third-party judges from the optical communications community recognized Marvell as a high-scoring honoree.

    “On behalf of the Lightwave Innovation Reviews, I would like to congratulate Marvell on their high-scoring honoree status,” said Lightwave Editorial Director, Stephen Hardy. “This competitive program allows Lightwave to celebrate and recognize the most innovative products impacting the optical communications community this year.”

    Marvell was recognized for the Marvell® Alaska® A PAM4 DSP Family for Active Electrical Cables (AECs) and the Marvell® Spica™ Gen 2 800G PAM4 Electro-Optics Platform, both in the Data Center Interconnect Platforms category. Key features of these 2023 Lightwave Innovation Reviews honorees include:

  • February 08, 2023

    Marvell’s Commitment to Inclusion and Diversity

    By Rebecca O'Neill, Global Head of ESG, Marvell

    Marvell is committed to fostering an inclusive, diverse, and engaging workplace to fully leverage the perspectives and contributions of every individual at the company. We strive to create an environment where people feel fulfilled, inspired, and motivated to learn and grow, personally and professionally.

    What Inclusion and Diversity Means to Marvell
    Inclusion means focusing on respect, acceptance, and the ability to appreciate a culture-add approach where we can all bring our full authentic selves to work, every day.

    To us, diversity means valuing differences. We value the unique perspectives and experiences of every employee. It is this uniqueness that every employee brings to the company, which is powerful and provides us with a competitive advantage.

    Our Strategy
    We have developed a strategy focused on four Inclusion & Diversity business outcomes:

    Marvell F22 Inclusion and Diversity Strategy

  • February 03, 2023

    OCTEON 10 Recognized as “Best Embedded Processor” in Analysts’ Choice Awards, Powered by TechInsights

    By Johnny Truong, Senior Manager, Public Relations, Marvell

    The Marvell® OCTEON® 10 DPU was awarded the 2022 Analysts’ Choice Award for “Best Embedded Processor” in TechInsights’ Microprocessor Report.

    One of the longest-running award programs of its kind, it salutes the top semiconductor offerings in the categories of data center, PC, smartphone, and embedded processors, as well as processor IP cores and related emerging technologies. Winning products were selected for superior features, performance, power, and cost in the context of the company’s target applications and competition.  

    The OCTEON 10 DPU, the world’s first Arm Neoverse N2-based processor in 5nm, is the latest version of the OCTEON processor family. By accelerating wireless, networking, storage, security and other specialized workloads, OCTEON 10 enables best-in-class features, performance, energy efficiency, and total cost of ownership for carriers, cloud providers, and enterprises.

    The OCTEON processor family is used by four of the top six wireless infrastructure OEMs, in nine of the top 10 firewall appliances, and by other major networking OEMs. 

    TechInsights Chief Analyst Joseph Byrne said: “Processors for communications infrastructure have long pushed the leading edge for embedded products. Marvell’s feat shows that succeeding in the high-performance-embedded market doesn’t require leveraging smartphone or PC/server technology.” 

    Microprocessor Report subscribers can access commentary on the winners, details on what sets them apart, and other nominees in each category here.  

    To learn more about Marvell’s latest addition to the OCTEON DPU family, visit us at MWC 2023 in Barcelona at booth 2F34 in Hall 2. 

  • February 02, 2023

    A No-Compromise Approach to Open, Cloud-Native 5G RAN

    By Peter Carson, Senior Director Solutions Marketing, Marvell

    The rise of fully open and optimized vRAN platforms based on globally proven 5G Layer 1 hardware accelerators, led by Marvell, has given Open RAN operators the industry’s first no-compromise vRAN solution. Unlike the so-called “look-aside” general-purpose alternative, the Marvell architecture is host server CPU agnostic and uniquely enables (1) RAN software programmability, based on open-source, industry-standard interfaces and (2) inline hardware acceleration that delivers feature, performance and power parity with existing 5G networks, both absolutely critical requirements for mobile operators. Listen to what leading operators are saying about inline vRAN accelerators.

  • January 27, 2023

    Supporting Our Communities

    By Rebecca O'Neill, Global Head of ESG, Marvell and Sandy Rodriguez, Sr. Compliance Analyst, Marvell

    At Marvell, we are committed to giving back to the communities where we live and work. Our community engagement focuses on three key pillars:

    • Humanitarian endeavors supporting organizations that combat hunger, poverty and homelessness
    • Investing in innovative K–12 educational programs in science, technology, engineering and math (STEM)
    • Championing community projects or initiatives to enrich the lives of our neighbors

    The company will also match employee donations up to $500 per calendar year when an employee makes a donation to a nonprofit aligned with our philanthropic pillars. In addition, we launched a volunteer time off program, offering employees up to three days or 24 hours of paid time off per year to volunteer for causes they care about and support organizations working in our pillar areas. We aim to have at least 20% of our employees participate in our volunteer time off and employee match programs. Both of these endeavors are offered globally.

  • January 18, 2023

    Network Visibility in Industrial Networks Using Time-Sensitive Networking

    By Zvi Shmilovici Leib, Distinguished Engineer, Marvell

    Industry 4.0 is redefining how industrial networks behave and how they are operated. Industrial networks are mission-critical by nature and have always required timely delivery and deterministic behavior. With Industry 4.0, these networks are becoming artificial intelligence-based, automated and self-healing, as well. As part of this evolution, industrial networks are experiencing the convergence of two previously independent networks: information technology (IT) and operational technology (OT). Time Sensitive Networking (TSN) is facilitating this convergence by enabling the use of Ethernet standards-based deterministic latency to address the needs of both the IT and OT realms.

    However, the transition to TSN brings new challenges and requires fresh solutions for industrial network visibility. In this blog, we will focus on the ways in which visibility tools are evolving to address the needs of both IT managers and those who operate the new time-sensitive networks.

    Why do we need visibility tools in industrial networks? 

    Networks are at the heart of the Industry 4.0 revolution, ensuring nonstop industrial automation operation. These industrial networks operate 24/7, frequently in remote locations with minimal human presence. The primary users of the industrial network are not humans but, rather, machines that cannot “open tickets.” And, of course, these machines are even more diverse than their human analogs. Each application and each type of machine can be considered a unique user, with different needs and different network “expectations.”

  • January 04, 2023

    Software-Defined Networking for the Software-Defined Vehicle

    By Amir Bar-Niv, VP of Marketing, Automotive Business Unit, Marvell and John Heinlein, Chief Marketing Officer, Sonatus and Simon Edelhaus, VP SW, Automotive Business Unit, Marvell

    The software-defined vehicle (SDV) is one of the newest and most interesting megatrends in the automotive industry. As we discussed in a previous blog, the reason that this new architectural—and business—model will be successful is the advantages it offers to all stakeholders:

    • The OEMs (car manufacturers) will gain new revenue streams from aftermarket services and new applications;
    • The car owners will easily upgrade their vehicle features and functions; and
    • The mobile operators will profit from increased vehicle data consumption driven by new applications.

    What is a software-defined vehicle? While there is no official definition, the term reflects the change in the way software is being used in vehicle design to enable flexibility and extensibility. To better understand the software-defined vehicle, it helps to first examine the current approach.

    Today’s embedded control units (ECUs) that manage car functions do include software; however, the software in each ECU is often incompatible with, and isolated from, other modules. When updates are required, the vehicle owner must visit the dealer service center, which inconveniences the owner and is costly for the manufacturer.

  • December 13, 2022

    Marvell Joins SOAFEE and Autoware Foundation to Advance Software-Defined Vehicle Architectures

    By Willard Tu, Associate VP, Product Marketing – Automotive Compute, Marvell

    I’m excited to share that Marvell is now a member of two leading automotive technology organizations: the Scalable Open Architecture for Embedded Edge (SOAFEE) and the Autoware Foundation. Marvell’s participation in these organizations’ initiatives demonstrates its continued focus and investment in the automotive market. The new memberships follow the company’s 2021 announcement of its Brightlane™ automotive portfolio, and reflect Marvell’s expanding automotive silicon initiative.

    SOAFEE, founded by Arm, is an industry-led collaboration defined by automakers, semiconductor suppliers, open source and independent software vendors, and cloud technology leaders. The collaboration intends to deliver a cloud-native architecture enhanced for mixed-criticality automotive applications with corresponding open-source reference implementations to enable commercial and non-commercial offerings.

    As a member of SOAFEE, Marvell will access the SOAFEE architecture standards to help streamline development from cloud to deployment at the vehicle. This will enable faster time to market for the Marvell Brightlane automotive portfolio.

  • December 05, 2022

    Leading Lights Award Recognizes Deneb CDSP Leadership

    By Johnny Truong, Senior Manager, Public Relations, Marvell

    At this week’s Leading Lights Awards Ceremony, hosted by Light Reading, Editor-in-Chief Phil Harvey announced that the Marvell® Deneb™ Coherent Digital Signal Processor (CDSP) is the winner of the Most Innovative Service Provider Transport Solution category. This recognition is awarded to the optical systems vendor or optical components vendor providing the most innovative optical transport solution for service provider customers.

    Driving the industry's largest standards-based ecosystem, the Marvell Deneb CDSP enables disaggregation, which is critical for carriers to lower their CapEx and OpEx as they increase network capacity. This recognition underscores Marvell’s success in bringing leading-edge density and performance optimization advantages to carrier networks.

    Now in its 18th year, Leading Lights is Light Reading’s flagship awards program, recognizing top companies and executives for their outstanding achievements in next-generation communications technology, applications, services, strategies, and innovations.

    Visit the Light Reading blog for a full list of categories, finalists and winners.

  • November 28, 2022

    A Marvell-ous Hack Indeed – Winning the Hearts of SONiC Users

    By Kishore Atreya, Director of Product Management, Marvell

    Recently the Linux Foundation hosted its annual ONE Summit for open networking, edge projects and solutions. For the first time, this year’s event included a “mini-summit” for SONiC, an open source networking operating system targeted for data center applications that’s been widely adopted by cloud customers. A variety of industry members gave presentations, including Marvell’s very own Vijay Vyas Mohan, who presented on the topic of Extensible Platform Serdes Libraries. In addition, the SONiC mini-summit included a hackathon to motivate users and developers to innovate new ways to solve customer problems. 

    So, what could we hack?

    At Marvell, we believe that SONiC has utility not only for the data center, but to enable solutions that span from edge to cloud. Because it’s a data center NOS, SONiC is not optimized for edge use cases. It requires an expensive bill of materials to run, including a powerful CPU, a minimum of 8 to 16GB DDR, and an SSD. In the data center environment, these HW resources contribute less to the BOM cost than do the optics and switch ASIC. However, for edge use cases with 1G to 10G interfaces, the cost of the processor complex, primarily driven by the NOS, can be a much more significant contributor to overall system cost. For edge disaggregation with SONiC to be viable, the hardware cost needs to be comparable to that of a typical OEM-based solution. Today, that’s not possible.

  • November 17, 2022

    The Right Stuff: A Past and Future History of Automotive Connectivity

    By Amir Bar-Niv, VP of Marketing, Automotive Business Unit, Marvell and Mark Davis, Senior Director, Solutions Marketing, Marvell

    In the blog, Back to the Future – Automotive network run at speed of 10Gbps, we discussed the benefits and advantages of zonal architecture and why OEMs are adopting it for their next-generation vehicles. One of the biggest advantages of zonal architecture is its ability to reduce the complexity, cost and weight of the cable harness. In another blog, Ethernet Camera Bridge for Software-Defined Vehicles, we discussed the software-defined vehicle, and how using Ethernet from end-to-end helps to make that vehicle a reality.

    While in the near future most devices in the car will be connected through zonal switches, cameras are the exception. They will continue to connect to processors over point-to-point protocol (P2PP) links using proprietary networking protocols such as low-voltage differential signaling (LVDS), Maxim’s GMSL or TI’s FPD-Link.

  • November 08, 2022

    TSN and Prestera DX1500: A Bridge Across the IT/OT Divide

    By Reza Eltejaein, Director, Product Marketing, Marvell

    Manufacturers, power utilities and other industrial companies stand to gain the most from digital transformation. Manufacturing and construction industries account for 37 percent of total energy used globally*, for instance, more than any other sector. By fine-tuning operations with AI, some manufacturers can reduce carbon emissions by up to 20 percent and save millions of dollars in the process.

    Industry, however, remains relatively un-digitized and gaps often exist between operational technology – the robots, furnaces and other equipment on factory floors—and the servers and storage systems that make up a company’s IT footprint. Without that linkage, organizations can’t take advantage of Industrial Internet of Things (IIoT) technologies, also referred to as Industry 4.0. Of the 232.6 million pieces of fixed industrial equipment installed in 2020, only 10 percent were IIoT-enabled.

    Why the gap? IT often hasn’t been good enough. Plants operate on exacting specifications. Engineers and plant managers need a “live” picture of operations with continual updates on temperature, pressure, power consumption and other variables from hundreds, if not thousands, of devices. Dropped, corrupted or mis-transmitted data can lead to unanticipated downtime—a $50-billion-a-year problem—as well as injuries, blackouts, and even explosions.

    To date, getting around these problems has required industrial applications to build around proprietary standards and/or complex component sets. These systems work—and work well—but they are largely cut off from the digital transformation unfolding outside the factory walls.

    The new Prestera® DX1500 switch family is aimed squarely at bridging this divide, with Marvell extending its modern borderless enterprise offering into industrial applications. Based on the IEEE 802.1AS-2020 standard for Time-Sensitive Networking (TSN), Prestera DX1500 combines the performance requirements of industry with the economies of scale and pace of innovation of standards-based Ethernet technology. Additionally, we integrated the CPU and the switch—and in some models the PHY—into a single chip to dramatically reduce power, board space and design complexity.
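    To give a flavor of what the 802.1AS-2020 time synchronization underpinning TSN involves: each link's propagation delay is measured with a peer-delay exchange of four timestamps. A minimal sketch of the standard calculation, ignoring the neighbor-rate-ratio correction the full protocol also applies (the timestamp values are hypothetical):

    ```python
    def mean_link_delay(t1: int, t2: int, t3: int, t4: int) -> float:
        """Peer-delay measurement as in IEEE 802.1AS / IEEE 1588:
        t1 = Pdelay_Req sent (initiator clock), t2 = received (responder clock),
        t3 = Pdelay_Resp sent (responder),      t4 = received (initiator).
        Assumes a symmetric link: the responder's turnaround time (t3 - t2)
        is subtracted from the round trip (t4 - t1), then halved."""
        return ((t4 - t1) - (t3 - t2)) / 2

    # Hypothetical nanosecond timestamps: 1500 ns round trip, 500 ns turnaround
    delay_ns = mean_link_delay(1_000, 1_400, 1_900, 2_500)  # 500 ns one way
    ```

    The full standard additionally scales the initiator-side interval by the measured neighbor rate ratio to correct for clock-frequency offset between the two peers.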

    Done right, TSN will lower the CapEx and OpEx for industrial technology, open the door to integrating Industry 4.0 practices and simplify the process of bringing new equipment to market.

  • November 03, 2022

    The Race Against Automotive Hackers Is Accelerating

    By Hari Parmar, Senior Principal Automotive System Architect, Marvell

    “In your garage or driveway sits a machine with more lines of code than a modern passenger jet. Today’s cars and trucks, with an internet link, can report the weather, pay for gas, find a parking spot, route around traffic jams and tune in to radio stations from around the world. Soon they’ll speak to one another, alert you to sales as you pass your favorite stores, and one day they’ll even drive themselves.

    While consumers may love the features, hackers may love them even more.”

    The New York Times, March 18, 2021

    Hacking used to be an arcane worry, the concern of a few technical specialists. But with recent cyberattacks on pipelines, hospitals and retail systems, digital attacks have suddenly been thrust into public consciousness, leading many to wonder: are cars at risk, too?

    Not if Marvell can help it. As a leading supplier of automotive silicon, the company has been intensely focused on identifying and securing potential vulnerabilities before they can remotely compromise a vehicle, its driver or passengers.

    Unfortunately, hacking cars isn’t just theoretical. In 2015, researchers using a laptop 10 miles away commandeered a Jeep Cherokee, shutting off power, blasting the radio, turning on the AC and making the windshield wipers go berserk. And today, seven years later, millions more cars, including most new vehicles, are connected to the cloud.

  • November 01, 2022

    Marvell is a Member of the Semiconductor Climate Consortium

    By Rebecca O'Neill, Global Head of ESG, Marvell

    I am delighted to announce that Marvell is a Member of the new Semiconductor Climate Consortium. We have been active participants in the group over the past several months and are happy to share that the Climate Consortium is publicly launching today.

    Why a Consortium? 

    Acknowledging that climate action is collective action, Marvell has joined the Semiconductor Climate Consortium to work collaboratively with other semiconductor companies that have also embarked on a carbon reduction journey, to accelerate climate solutions and drive progressive climate action within our industry value chain. 

    The Consortium is an initiative of SEMI, the industry association serving the global electronics design and manufacturing supply chain, and it brings together all parts of the semiconductor ecosystem, including manufacturers, equipment providers, and fabless solutions providers such as Marvell. Everyone has a role to play in advancing the industry’s progress on addressing climate change. The Consortium believes that by working together, member companies will bring collective knowledge and innovative technologies to do so much more than one company can do alone. 

    The Consortium recognizes the challenge of climate change and works to speed semiconductor industry value chain efforts to reduce greenhouse gas emissions, including through support of the Paris Agreement and related accords driving the 1.5°C pathway.  

  • October 26, 2022

    The Tasting Notes for 64G Fibre Channel

    By Nishant Lodha, Director of Product Marketing – Emerging Technologies, Marvell

    While age is just a number, and so is a new speed for Fibre Channel (FC), the number itself is often irrelevant; it’s the maturity that matters, kind of like a bottle of wine! Today, as we toast the data center and pop open (announce) the Marvell® QLogic® 2870 Series 64G Fibre Channel HBAs, take a glass and sip into its maturity to find notes of trust and reliability alongside operational simplicity, in-depth visibility, and consistent performance.

    Big words on the label? I will let you be the sommelier as you work through your glass and my writings.

    Marvell QLogic 2870 series 64GFC HBAs

  • October 12, 2022

    The Evolution of Cloud Storage and Memory

    By Gary Kotzur, CTO, Storage Products Group, Marvell and Jon Haswell, SVP, Firmware, Marvell

    The nature of storage is changing much more rapidly than it ever has historically. This evolution is being driven by expanding amounts of enterprise data and the inexorable need for greater flexibility and scale to meet ever-higher performance demands.

    If you look back 10 or 20 years, there used to be a one-size-fits-all approach to storage. Today, however, there is the public cloud, the private cloud, and the hybrid cloud, which is a combination of both. All these clouds have different storage and infrastructure requirements. What’s more, the data center infrastructure of every hyperscaler and cloud provider is architecturally different and is moving towards a more composable architecture. All of this is driving the need for highly customized cloud storage solutions as well as demanding the need for a comparable solution in the memory domain.

  • October 05, 2022

    Designing energy efficient chips

    By Rebecca O'Neill, Global Head of ESG, Marvell

    Today is Energy Efficiency Day. Energy, specifically the electricity consumption required to power our chips, is something that is top of mind here at Marvell. Our goal is to reduce power consumption of products with each generation for set capabilities.

    Our products play an essential role in powering data infrastructure spanning cloud and enterprise data centers, 5G carrier infrastructure, automotive vehicles, and industrial and enterprise networking. When we design our products, we focus on innovative features that deliver new capabilities while also improving performance, capacity and security to ultimately improve energy efficiency during product use.

    These innovations help make the world’s data infrastructure more efficient and, by extension, reduce our collective impact on climate change. The use of our products by our customers contributes to Marvell’s Scope 3 greenhouse gas emissions, which is our biggest category of emissions.

  • September 28, 2022

    Marvell Brightlane Technology and OMNIVISION Partnership on Display at AutoSens Brussels

    By Katie Maller, Senior Manager, Public Relations, Marvell

    Building on our leadership in Ethernet camera bridge technology, Marvell is excited to have worked with OMNIVISION and to have been part of its automotive demonstrations at the recent AutoSens Brussels event. OMNIVISION, a leading global developer of semiconductor solutions, partnered with Marvell to demonstrate its OX03F10 image sensor and OAX4000 image signal processor with our industry-first multi-gigabit Ethernet camera bridge, the Marvell® Brightlane™ 88QB5224.

    The combined solutions allow camera video that would otherwise be transported via point-to-point protocol to be encapsulated over Ethernet, thereby integrating cameras into the Ethernet-based in-vehicle network. The solutions work with both interior and exterior cameras and are ideal for SVS and other applications in which numerous cameras are utilized and the output of those cameras is used by multiple subsystems or zones.

    “Ethernet is the foundation of the software-defined vehicle. By using the Ethernet camera bridge from our Brightlane automotive portfolio to connect cameras to the zonal Ethernet switch, the cameras are integrated into the end-to-end, in-vehicle network,” said Amir Bar-Niv, vice president of marketing for Marvell’s automotive business unit. “Standard Ethernet features such as security, switching, and synchronization are now available to the camera system, and a simple software update is all that’s required when porting the system from one automobile model to another. Shorter runs to the zonal switches reduce the cable cost and weight, as well.”

    The demonstrations in the OMNIVISION booth were well received at AutoSens Brussels, an annual event that brings together leading engineers and technical experts from across the ADAS and autonomous vehicle supply chain.

    To learn more about Marvell’s Ethernet Camera Bridge technology, also check out this blog.

  • September 26, 2022

    SONiC: It’s Not Just for Switches Anymore

    By Amit Sanyal, Senior Director, Product Marketing, Marvell

    SONiC (Software for Open Networking in the Cloud) has steadily gained momentum as a cloud-scale network operating system (NOS) by offering a community-driven approach to NOS innovation. In fact, 650 Group predicts that revenue for SONiC hardware, controllers and OSs will grow from around US$2 billion today to around US$4.5 billion by 2025. 

    Those using it know that the SONiC open-source framework shortens software development cycles; and SONiC’s Switch Abstraction Interface (SAI) provides ease of porting and a homogeneous edge-to-cloud experience for data center operators. It also speeds time-to-market for OEMs bringing new systems to the market.

    The bottom line: more choice is good when it comes to building disaggregated networking hardware optimized for the cloud. Over recent years, SONiC-using cloud customers have benefited from consistent user experience, unified automation, and software portability across switch platforms, at scale.

    As the utility of SONiC has become evident, other applications are lining up to benefit from this open-source ecosystem.

    A SONiC Buffet: Extending SONiC to Storage

    SONiC capabilities in Marvell’s cloud-optimized switch silicon include high availability (HA) features, RDMA over Converged Ethernet (RoCE), low latency, and advanced telemetry. All these features are required to run robust storage networks.

    Here’s one use case: EBOF. The capabilities above form the foundation of Marvell’s Ethernet-Bunch-of-Flash (EBOF) storage architecture. The new EBOF architecture addresses the non-storage bottlenecks that constrain the performance of the traditional Just-a-Bunch-of-Flash (JBOF) architecture it replaces, by disaggregating storage from compute.

    EBOF architecture replaces the bottleneck components found in JBOF (CPUs, DRAM and SmartNICs) with an Ethernet switch, and it’s here that SONiC is added to the plate. Marvell has, for the first time, applied SONiC to storage, specifically for services enablement, including the NVMe-oF™ (NVM Express over Fabrics) discovery controller, and out-of-band management for EBOF using Redfish® management. This implementation is in production today on the Ingrasys ES2000 EBOF storage solution. (For more on this topic, check out this, this, and this.)

    Marvell has now extended SONiC NOS to enable storage services, thus bringing the benefits of disaggregated open networking to the storage domain.
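    For a sense of what the NVMe-oF discovery controller serves up, each available subsystem is described by a 1024-byte Discovery Log Page entry. A minimal parsing sketch using the field offsets from the NVMe over Fabrics specification (the transport address, port and NQN below are hypothetical):

    ```python
    import struct

    def parse_discovery_entry(entry: bytes) -> dict:
        """Pull the headline fields out of one 1024-byte Discovery Log
        Page entry (field offsets per the NVMe-oF specification)."""
        assert len(entry) == 1024
        portid, cntlid = struct.unpack_from("<HH", entry, 4)
        text = lambda b: b.decode("ascii", "ignore").rstrip("\x00 ")
        return {
            "trtype": entry[0],              # transport type: 3 = TCP
            "adrfam": entry[1],              # address family: 1 = IPv4
            "subtype": entry[2],             # 2 = NVM subsystem
            "portid": portid,
            "cntlid": cntlid,
            "trsvcid": text(entry[32:64]),   # transport service id (port)
            "subnqn": text(entry[256:512]),  # subsystem NQN
            "traddr": text(entry[512:768]),  # transport address
        }

    # Build a hypothetical TCP/IPv4 entry for an EBOF-hosted subsystem.
    raw = bytearray(1024)
    raw[0:3] = bytes([3, 1, 2])                 # TCP, IPv4, NVM subsystem
    struct.pack_into("<HH", raw, 4, 1, 0xFFFF)  # port id 1, dynamic cntlid
    nqn = b"nqn.2022-09.io.example:ebof.ns1"
    raw[32:36] = b"4420"
    raw[256:256 + len(nqn)] = nqn
    raw[512:520] = b"10.0.0.5"
    entry = parse_discovery_entry(bytes(raw))
    ```

    In practice an initiator retrieves these entries with a standard tool such as nvme-cli’s `nvme discover` and then connects to each reported subsystem.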

    OK, tasty enough, but what about compute?

    How Would You Like Your Arm Prepared?

    I prefer Arm for my control plane processing, you say. Why can’t I manage those switch-based processors using SONiC, too, you ask? You’re in luck. For the first time, SONiC is the OS for Arm-based, embedded control plane processors, specifically the control plane processors found on Marvell® Prestera® switches. SONiC-enabled Arm processing allows SONiC to run on lower-cost 1G systems, reducing the bill-of-materials, power, and total cost of ownership for both management and access switches.

    In addition to embedded processors, with the OCTEON® family, Marvell offers a smorgasbord of Arm-based processors. These can be paired with Marvell switches to bring the benefits of the Arm ecosystem to networking, including Data Processing Units (DPUs) and SmartNICs.

    By combining SONiC with Arm processors, we’re setting the table for the broad Arm software ecosystem - which will develop applications for SONiC that can benefit both cloud and enterprise customers.

    The Third Course

    So, you’ve made it through the SONiC-enabled switching and on-chip control processing courses, but there’s something more you need to round out the meal. Something to reap the full benefit of your SONiC experience. PHY, of course. Whether your taste runs to copper or optical media, PAM or coherent modulation, Marvell provides a complete SONiC-enabled portfolio by offering SONiC with our (not baked) Alaska® Ethernet PHYs and optical modules built using Marvell DSPs.

    Room for Dessert?

    Finally, by enabling SONiC across the data center and enterprise switch portfolio we’re able to bring operators the enhanced telemetry and visibility capabilities that are so critical to effective service-level validation and troubleshooting. For more information on Marvell telemetry capabilities, check out this short video:

    The Drive Home

    Disaggregation has lowered the barrier to entry for market participants, unleashing new innovations from myriad hardware and software suppliers. By making use of SONiC, network designers can readily design and build disaggregated data center and enterprise networks.

    For its part, Marvell’s goal is simple: help realize the vision of an open-source standardized network operating system and accelerate its adoption.

  • September 21, 2022

    Doing our Part to Address Climate Change

    By Rebecca O'Neill, Global Head of ESG, Marvell

    Today is Zero Emissions Day, which was started to raise awareness of the need to address climate change by reducing greenhouse gas emissions.

    Here at Marvell, we recognize that climate change represents an unprecedented challenge to our planet, society and economy. That’s why we are enhancing our climate strategy by setting a Science-Based Target (SBT) and putting ourselves on a path to net zero carbon emissions. Our SBT will be aligned with a 1.5°C climate scenario, supporting the goals of the Paris Agreement which is aimed at reducing the worst of climate change.

    Our new ESG Report provides a snapshot of our company’s greenhouse gas emissions:

  • August 05, 2022

    Marvell SSD Controller Named Winner at FMS 2022 Best of Show Awards

    By Kristin Hehir, Senior Manager, PR and Marketing, Marvell


    Image: FMS Best of Show award

    Flash Memory Summit (FMS), the industry’s largest conference featuring data storage and memory technology solutions, presented its 2022 Best of Show Awards at a ceremony held in conjunction with this week’s event. Marvell was named a winner alongside Exascend for the collaboration of Marvell’s edge and client SSD controller with Exascend’s high-performance memory card.

    Honored as the “Most Innovative Flash Memory Consumer Application,” the Exascend Nitro CFexpress card powered by Marvell’s PCIe® Gen 4, 4-NAND channel 88SS1321 SSD controller enables digital storage of ultraHD video and photos in extreme temperature environments where ruggedness, endurance and reliability are critical. The Nitro CFexpress card is unique in controller, hardware and firmware architecture in that it combines Marvell’s 12nm process node, low-power, compact form factor SSD controller with Exascend’s innovative hardware design and Adaptive Thermal Control™ technology.

    The Nitro card is the highest capacity VPG400 CFexpress card on the market, with up to 1 TB of storage, and is certified VPG400 by the CompactFlash® Association using its stringent Video Performance Guarantee Profile 4 (VPG400) qualification. Marvell’s 88SS1321 controller helps drive the Nitro card’s 1,850 MB/s of sustained read and 1,700 MB/s of sustained write for ultimate performance.

    “Consumer applications, such as high-definition photography and video capture using professional photography and cinema cameras, require the highest performance from their storage solution. They also require the reliability to address the dynamics of extreme environmental conditions, both indoors and outdoors,” said Jay Kramer, Chairman of the Awards Program and President of Network Storage Advisors Inc. “We are proud to recognize the collaboration of Marvell’s SSD controllers with Exascend’s memory cards, delivering 1,850 MB/s of sustained read and 1,700 MB/s sustained write for ultimate performance addressing the most extreme consumer workloads. Additionally, Exascend’s Adaptive Thermal Control™ technology provides an IP67 certified environmental hardening that is dustproof, water resistant and tackles the issue of overheating and thermal throttling.”

    More information on the 2022 Flash Memory Summit Best of Show Award Winners can be found here.

  • June 17, 2022

    Marvell’s Cora Lam Named to Silicon Valley Business Journal Class of 2022 Women of Influence

    By Kristin Hehir, Senior Manager, PR and Marketing, Marvell

    Marvell is deeply committed to elevating women in STEM and supporting female engineers and entrepreneurs in their efforts to succeed in the tech industry. That’s why we are so proud to announce that Marvell team member Cora Lam has been named to the Silicon Valley Business Journal Women of Influence Class of 2022 for her successes and commitments in both workplace and community services.

    Throughout the past 21 years, Marvell has been fertile ground for Cora’s professional development. With dedication, creativity, and excellence in execution, she steadily progressed from junior engineer to her current position as Senior Principal Engineer in the Central Engineering group.

    Over the last five years, Cora also discovered her two greatest passions within Marvell: women in STEM and wellness. She believes you don’t need to be a CEO to promote women in STEM; instead, you just need to be your authentic self, using motivation, passion, and compassion to empower others. To Cora, having genuine compassion for others without any agenda is the source of true wellness.

    Despite her busy work schedule, Cora has been one of the leaders and key volunteers for the Women at Marvell (WAM) initiative since its establishment in 2017. By organizing events such as International Women’s Day celebrations, speaker series, and panel discussions, WAM aims to inspire and foster a culture of diversity, gender equality and inclusion within Marvell.

    Through a WAM meeting in 2018, Cora first heard about TechWomen (TW), an initiative from the U.S. Department of State that annually brings over 100 women Emerging Leaders (ELs) in STEM from 20+ countries in Africa, Central and South Asia, and the Middle East to the San Francisco Bay Area for a five-week intensive professional mentorship and networking program. For Cora, TW was like “love at first sight”, and she became Marvell’s first and only TW mentor in 2018.

  • April 27, 2022

    Optimizing SSDs for Industrial and Edge Applications

    By Pichai Balaji, Director, Product Marketing, Flash BU, Marvell

    Industrial SSDs are specifically designed for high-performance systems where data integrity and reliability are of the utmost importance. Industrial SSDs cover a wide range of applications including industrial data storage, heavy robotics, retail kiosks, medical systems, security surveillance, video monitoring, and gaming, to name a few.

    When most people hear the term “industrial SSD,” they immediately think of a ruggedized, high-temperature SSD in a metal casing. While such drives are part of the industrial class of SSDs, most industrial and edge applications have a wider range of requirements in terms of SSD controller hardware, firmware, SSD form factor, drive capacity, endurance, reliability, and use case/workload.

    For these applications, it is critical that the SSD meets industrial quality standards, and long-term reliability and performance requirements. These SSD devices must be able to withstand industrial grade temperatures, as well as a higher level of shock and vibration. Some applications need these SSDs to operate in ambient temperatures ranging from -40°C to 85°C. In such extreme conditions, data loss is a serious concern.

    Marvell’s 88SS1321/22 SSD controllers are designed to meet industrial requirements for temperature endurance, longevity, and performance. Marvell’s 88SS1321 device also gives the industrial SSD maker the flexibility to choose the SSD form factor (2.5”/U.2 and M.2 2230 to 22110 are supported) and to use the SSD with or without DRAM (optional).

    Exascend recently launched an industrial grade PCIe Gen 4 SSD – the PI4 Series. Powered by Marvell’s 88SS1321 PCIe Gen 4 SSD controller, the SSD offers 3500MB/s performance and can operate in an extreme temperature range of -40°C to 85°C. It offers full disk encryption / TCG OPAL 2.0 in M.2 (2280 & 2242), U.2, E1.S and CFexpress form factors for industrial and ADAS storage applications.

    Marvell’s 88SS1321/22 SSD controller hardware is designed to give SSD firmware maximum control to optimize SSD-level solutions for different workloads in a wide range of industrial and edge applications. The product’s reference design has been validated for standards/spec compliance as well as electrical compatibility. The board design BOM is also cost-optimized for a low cost of ownership. More information on these benefits can be found here.

    Additionally, various SKUs within the product offer added flexibility to SSD makers, enabling them to address applications that may require DRAM and a wider range of operating temperatures.

    With the integration of AI/ML, industrial systems have become autonomous and more distributed in recent years. The proliferation of AI-based IoT (AIoT) devices has increased end-to-end system complexity, pushing compute and storage resources to the edge in order to leverage low-latency 5G connectivity and/or Ethernet Time Sensitive Networking (TSN) for real-time, mission-critical data access and processing.

    Innodisk is another industrial SSD maker that has recently launched multiple PCIe Gen 4 industrial-grade SSDs with Marvell’s 88SS1321/22 SSD controllers, operating with or without DRAM. The Innodisk PCIe 4TE and 4TG-P are the first industrial-oriented PCIe 4.0 SSDs aimed at turbocharging 5G and AIoT. The products can work in -40°C to 85°C environments, which is critical for applications such as smart streetlights, 5G mmWave equipment, and security inspection cameras. The PCIe 4TE and 4TG-P support AES-256 encryption and are TCG-OPAL 2.0 compliant.

    Other key features of Marvell’s industrial SSD controllers include:

    • Support for both DRAM and in DRAM-less operation
    • Support for a wide range of form factors, including M.2 2230 to 22110, CFexpress and BGA SSDs
    • SDK firmware to kickstart the development and customization of the SSD for the end-user application/workload
    • Offered in C-temp (0°C-70°C) and I-temp (-40°C-85°C) SKUs

    Marvell’s 88SS1321/22 SSD controllers are designed to allow firmware to be optimized for many different applications. A host of SKUs built on the same architecture allow SSD developers to leverage Marvell’s reference design to develop their own SSD for various form factors, capacity, endurance, and reliability standards including ruggedized, high-temp SSDs with metal casings.

    Learn more about Marvell’s 88SS1321/22 product series of SSD controllers here.

     

     

  • April 18, 2022

    Ethernet Camera Bridge for Software-Defined Vehicles

    By Amir Bar-Niv, VP of Marketing, Automotive Business Unit, Marvell

    Automotive Transformation

    Smart Car and Data Center-on-wheels are just some of the terms being used to define the exciting new waves of technology transforming the automotive industry and promising safer, greener self-driving cars and enhanced user experiences. Underpinning it all is a megatrend towards Software-defined Vehicles (SDV). SDV is not just a new automotive technology platform. It also enables a new business model for automotive OEMs. With a software-centric architecture, car makers will have an innovation platform to generate unprecedented streams of revenue from aftermarket services and new applications. For owners, the capability to receive over-the-air software updates for vehicles already on the road – as easily as smartphones are updated – means an automobile whose utility no longer declines with age and a driving experience that can be continuously improved.

    This blog is the first in a series that will discuss the basic components of a system enabling the future of SDV.

    Road to SDV is Paved with Ethernet

    A key technology to enable SDVs is a computing platform that is supported by an Ethernet-based In-Vehicle network (IVN). An Ethernet-based IVN provides the ability to reshape the traffic between every system in the car to help meet the requirements of new downloaded applications. To gain the full potential of Ethernet-based IVNs, the nodes within the car will need to “talk” Ethernet. This includes devices such as car sensors and cameras. In this blog, we discuss the characteristics and main components that will drive the creation of this advanced Ethernet-based IVN, which will enable this new era of SDV. 

    But first let’s talk about the promises of this new business model. For example, people might ask, “how many new applications can possibly be created for cars and who will use them?” This is probably the same question that was asked when Apple created the original App Store, which started with dozens of new apps; the rest, of course, is history. We can definitely learn from this model. Plus, this is not going to be just an OEM play. Once SDV cars are on the road, we should expect the emergence of new companies that will develop for the OEMs a whole new world of car applications aligned with other megatrends like Smart City, Mobility as a Service (MaaS), ride-hailing and many others.

    A New Era of Automotive Innovation

    Let us now fast forward to the years 2025 to 2030 (which in the automotive industry is considered ‘just around the corner’). New cars designed to support higher levels of advanced driver assistance systems (ADAS) will include anywhere from 20 to 30 sensors (camera, radar, lidar and others). Let’s imagine two new potential applications that could utilize these sensors:

    Application 1: “Catch the Car Scratcher” - How many times have we heard of, or even been in, this situation? Someone scratches your car in the parking lot or maliciously scratches your car with a car key. What if the car was able to capture the face of the person or license plate number of the car that caused the damage? Wouldn’t that be a cool feature an OEM could provide to the car owner on demand? If priced right, it most likely could become a popular application. The application could use the accelerometers, and potentially a microphone, to detect the noise of scratching, bumping or hitting the car. Once the car identifies the scratching or bumping, it would activate all of the cameras around the car. The car would then record the video streams into a central storage. This video could later be used by the owner as necessary to recover repair costs through insurance or the courts.

    Application 2: “Break-in Attempt Recording” - In this next application, when the system detects a break-in attempt, all internal and external cameras record the video into central storage and immediately upload it to the cloud. This is done in case the car thief tries to tamper with the storage later. In parallel, the user gets a warning signal or alert by phone so they can watch the video streams or even connect to the sound system in the car and scare the thief with their own voice.

    We will examine these scenarios more comprehensively in a follow up blog, but these are just two simple examples of the many possible high-value automotive apps that an Ethernet-based IVN can enable in the software-defined car of the future.
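    The two applications above share the same event-driven skeleton: low-power sensors arm the system, a trigger fans out to every camera reachable over the in-vehicle network, and clips land in central storage (optionally mirrored to the cloud). Here is a minimal sketch of that logic; every class and method name (`ParkedCarMonitor`, `start_stream`, etc.) is hypothetical, not any real automotive API, and the trigger threshold is an assumption.

```python
import time

IMPACT_THRESHOLD_G = 1.5  # assumed g-force separating a bump/scratch from ambient noise

class ParkedCarMonitor:
    """Event-driven skeleton for the two scenarios above (all names illustrative)."""

    def __init__(self, accelerometer, microphone, cameras, storage,
                 cloud=None, clip_seconds=30):
        self.accelerometer = accelerometer
        self.microphone = microphone
        self.cameras = cameras          # every camera node reachable over the IVN
        self.storage = storage          # central in-vehicle storage
        self.cloud = cloud              # optional off-vehicle copy (break-in scenario)
        self.clip_seconds = clip_seconds

    def poll(self):
        """Check the low-power sensors; on a suspected event, record everywhere."""
        if (self.accelerometer.read_g() > IMPACT_THRESHOLD_G
                or self.microphone.detect_scratch()):
            self.record_event()

    def record_event(self):
        # Fan-out to all cameras is only possible if cameras are shared resources,
        # which is the point of the Ethernet-based IVN discussed in this blog.
        for cam in self.cameras:
            cam.start_stream(self.storage)
        time.sleep(self.clip_seconds)
        for cam in self.cameras:
            cam.stop_stream()
        if self.cloud is not None:      # tamper-resistant copy, per Application 2
            self.cloud.upload(self.storage.latest_clips())
```

In a real vehicle the trigger would come from a dedicated low-power wake domain rather than polling, but the fan-out-and-record flow is the same.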

    Software-Defined Network

    Ethernet network standards comprise a long list of features and solutions that have been developed over the years to address real network needs, including the mitigation of security threats. Ethernet was initially adopted by the automotive industry in 2014 and it has since become the dominant network in the car. Once the car’s processors, sensors, cameras and other devices are connected to each other via Ethernet (Ethernet End-to-End), we can realize the biggest promise of SDV: the capability to reprogram the in-vehicle network and adapt its main characteristics to new advanced applications. This capability is called In-Vehicle Software-Defined Networking, or in short, In-vehicle SDN.

    Figure 1 shows the building blocks for In-Vehicle SDN that enable SDV.

     

    Figure 1 – Ethernet and SDN as building blocks for SDV

    Ethernet features enable four attributes that are key for SDV: Flexibility, Scalability, Redundancy and Controllability.

    • Flexibility provides the ability to change data flow in the network and to share devices (like cameras and sensors) among domains, processors and other shared resources (e.g., storage).
    • Scalability of both software and hardware is needed to support new applications and features. Software updates to the originally installed processors and ECUs usually require changes in the network’s routing of data and controls. Hardware can be also modified over time in the car, and in many cases, adaptations to the network may be required to support new speeds and Quality of Service (QoS), for the new hardware.
    • Redundancy, not only in mission-critical processors but also in data paths between the devices, safeguards the network. Switching and multi-data paths can also assist load balancing in the backbone of the in-vehicle network.
    • Controllability, diagnostics and real-time debugging of all links in the car provide real-time self-diagnosis and fault management (channel quality, link marginality/degradation, EMC vulnerability) by leveraging the Ethernet Operations, Administration, and Maintenance (OAM) protocol. With advanced, AI/ML-based data processing, more effective prediction of network health is possible, enabling higher-level safety goals and significant economic benefits.

    In-vehicle SDN is the mechanism that provides the ability to modify and adapt these attributes in SDV. SDN is a technology that uses application programming interfaces (APIs) to communicate with underlying hardware infrastructure, like switches and bridges, and provisions traffic flow in a network. In-Vehicle SDN allows the separation of control and data planes and brings network programmability to the realm of advanced data forwarding mechanisms in automotive networks.
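    As a rough illustration of that control-plane/data-plane split, the toy model below lets a central controller re-provision camera traffic on zonal switches purely by installing flow rules over an API. `FlowRule`, `ZonalSwitch`, and `Controller` are hypothetical stand-ins, not any real in-vehicle SDN interface.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FlowRule:
    match_src: str      # e.g. a camera node ID
    match_dst: str      # e.g. an ADAS or IVI ECU
    priority: int       # QoS class for this traffic
    out_port: int       # physical port on the switch

class ZonalSwitch:
    """Data plane: forwards frames according to whatever rules it was given."""
    def __init__(self, name):
        self.name = name
        self.rules = []

    def install(self, rule):
        self.rules.append(rule)
        self.rules.sort(key=lambda r: -r.priority)  # highest priority matches first

    def forward(self, src, dst):
        for r in self.rules:
            if r.match_src == src and r.match_dst == dst:
                return r.out_port
        return None  # no matching rule: drop, or punt to the controller

class Controller:
    """Control plane: reprograms switches via an API; no data-plane changes needed."""
    def __init__(self, switches):
        self.switches = switches

    def share_camera(self, camera, consumers, port_map, priority=10):
        # A downloaded application can provision the same camera stream to
        # several domains (e.g. ADAS and IVI) just by installing rules.
        for sw in self.switches:
            for ecu in consumers:
                sw.install(FlowRule(camera, ecu, priority,
                                    port_map[(sw.name, ecu)]))
```

The point of the sketch is that adding a consumer of a camera stream touches only controller state, which is exactly the flexibility and scalability attributes described above.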

    Cameras and the Ethernet Edge

    To realize the full capability of in-vehicle SDN, most devices in the car will need to be connected via Ethernet. In today’s advanced car architectures, the backbone of the high-speed links is all Ethernet. However, camera interfaces are still based on old proprietary point-to-point Low-Voltage Differential Signaling (LVDS) technology. Newer technologies (like MIPI’s A-PHY and ASA) are under development to replace LVDS, but these are still point-to-point solutions. In this blog we refer to all of these solutions as P2PP (Point-to-Point Protocol). In Figure 2, we show an example of a typical zonal car network with the focus on two domains that use the camera sensors: ADAS and Infotainment.

     

    Figure 2 – Zonal network architecture with point-to-point camera links

    While most of the ECUs / Sensors / Devices are connected through (and leverage the benefits of) the zonal backbone, cameras are still connected directly (point-to-point) to the processors. Cameras cannot be shared in a simple manner between the two domains (ADAS and IVI), which in many cases are in separate boxes. There is no scalability in this rigid connectivity. Redundancy is also very limited: since the cameras are connected directly to a processor, any malfunction in that processor might result in lost connection to the cameras.

    One potential “solution” for this is to connect the cameras to the zonal switches via P2PP, as shown in Figure 3.

     

    Figure 3 – Zonal network architecture with point-to-point camera links to Zonal switch

    This proposal solves only a few of the problems mentioned above and comes at a high cost. To support this configuration, the system always needs a dedicated Demux chip, as shown in Figure 4, that converts the P2PP link back to a camera interface. In addition, the Zonal switches need a dedicated video interface, like MIPI D-PHY. This interface requires 12 pins per camera (4 pairs for data, 1 pair for clock and 1 pair for control (I2C or SPI)). This adds complexity and many dedicated pins, which increases system cost. Another option is to use an external Demux-switch (on top of the Zonal switch) to aggregate multiple P2PP lanes, which is expensive.

    Integration of any of these protocols into the Zonal switch is also highly unlikely, since it would require dedicated, non-Ethernet ports on the switch. In addition, no one will consider integrating proprietary or new, non-matured technologies into switches or SoCs.

     

    Figure 4 – Camera P2PP Bridge in Zonal Architecture

    Next are controllability, diagnostics and real-time debugging, which do not work over the P2PP links in the same simple and standard way they work over Ethernet. This limits the leverage of existing Ethernet-based SW utilities that are used to access, monitor and debug all Ethernet-based ECUs, devices and sensors in the vehicle.

    Ethernet Camera Bridge

    The right solution for all of these issues is to convert the camera video to Ethernet – at the edge. A simple bridge device that connects to the camera module and encapsulates the video in Ethernet packets is all it takes, as shown in Figure 5.

     

    Figure 5 – Ethernet Camera Bridge in Zonal Architecture

    Since the in-vehicle Ethernet network is Layer 2 (L2)-based, the encapsulation of camera video over Ethernet requires only a simple, hard-coded (meaning no SW) MAC block in the bridge device. Figure 6 shows a network that utilizes such bridge devices.

     

    Figure 6 – Zonal architecture with Ethernet End-to-End
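    For a concrete (if much simplified) picture of what that hard-coded MAC block does, the sketch below wraps camera-video chunks in plain Ethernet II frames. The EtherType 0x88B5 (an IEEE local-experimental value) is an assumption standing in for whatever type a real bridge would use, and a real bridge would do this in hardware, not Python.

```python
import struct

EXPERIMENTAL_ETHERTYPE = 0x88B5  # assumed; a real bridge would use its own type

def encapsulate(dst_mac: bytes, src_mac: bytes, video_chunk: bytes) -> bytes:
    """Build one Ethernet II frame: 6B dst MAC + 6B src MAC + 2B EtherType + payload."""
    assert len(dst_mac) == 6 and len(src_mac) == 6
    header = dst_mac + src_mac + struct.pack("!H", EXPERIMENTAL_ETHERTYPE)
    # Ethernet requires a minimum 46-byte payload; pad short chunks with zeros.
    payload = video_chunk.ljust(46, b"\x00")
    return header + payload

def split_stream(video: bytes, mtu_payload: int = 1500):
    """Slice a video stream into MTU-sized chunks, one frame per chunk."""
    return [video[i:i + mtu_payload] for i in range(0, len(video), mtu_payload)]
```

Because the framing is fixed, the logic maps to a small hard-wired block with no software stack, which is the point made above.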

    The biggest advantage of the Ethernet camera bridge is that it leverages the robustness and maturity of the Ethernet standard. For the Ethernet bridge PHY it means a proven technology (2.5G/5G/10GBASE-T1 and soon 25GBASE-T1) with a very strong ecosystem of cables, connectors, and test facilities (compliance, interoperability, EMC, etc.) that have been accepted by the automotive industry for many years.

    But this is only the tip of the iceberg. Once the underlying technology for the camera interface is Ethernet, these links automatically gain access to all the other IEEE Ethernet standards, like:

    • Switching and virtualization - IEEE 802.1
    • Security – authentication and encryption – IEEE 802.1AE MACsec
    • Time synchronization over the network – IEEE 1588 PTP
    • Power over cable – IEEE PoDL 802.3bu
    • Audio/Video Bridging – IEEE 802.1 AVB/TSN
    • Asymmetrical transmission, using Energy Efficient Ethernet protocol – IEEE 802.3az
    • Support for all topologies: Mesh, star, ring, daisy-chain, point-to-point

    These important features for automotive networks are covered in a previous Marvell blog called, “Ethernet Advanced Features for Automotive Applications.”

    The Ethernet End-to-End approach with Ethernet camera bridges supports all four key attributes (described in Figure 1) required for reliable software-defined car operation: cameras can easily be shared among domains; software and hardware can be modified independently and scaled all the way up to the cameras and sensors; no special video interfaces are needed in the zonal switch, since the camera Ethernet link connects to a standard Ethernet port and can be routed over multiple paths for redundancy; and the camera links gain full controllability, diagnostics and real-time debugging using the standard Ethernet utilities employed in the rest of the in-vehicle network.

    So, what’s next? As camera resolutions and refresh rates increase, camera links will need to support data rates beyond 10Gbps. To support this trend, the IEEE P802.3cy Greater than 10 Gb/s Electrical Automotive Ethernet PHY Task Force is already defining a standard for 25Gbps automotive PHYs. We can therefore expect 25Gbps vehicle backbones and camera Ethernet bridges in the future, and with them, a plethora of even more compelling smart car apps.

    Marvell Product Roadmap for Automotive

    To help support these new initiatives in automotive technology application and design, Marvell announced the industry’s first multi-gig Ethernet camera bridge solution.

    As shown by these announcements, Marvell continues to drive innovation in networking and compute solutions for automotive applications. The Marvell automotive roadmap includes managed Ethernet switches that support the Trusted Boot® feature, allowing over-the-air upload of new system configurations to enable new applications. Marvell custom compute products for automotive are designed in advanced process nodes and leverage Marvell’s IP portfolio of high-performance multi-core processors, end-to-end security and high-speed PHY and SerDes technologies.

    To learn more about how Marvell is committed to enabling smarter, safer and greener vehicles with its innovative, end-to-end portfolio of Brightlane™ automotive solutions, check out: https://www.marvell.com/products/automotive.html.

    The next blogs in this series will discuss some of the characteristics of SDN-on-wheels, central compute in future vehicles, security structure for vehicle-to-cloud connectivity, in-vehicle-network for infotainment and other exciting developments that enable the future of software-defined vehicle.

  • March 24, 2022

    Marvell Bravera SC5 SSD Controller Family Named “Semiconductor Product of the Year” in the 2022 Data Breakthrough Awards

    By Kristin Hehir, Senior Manager, PR and Marketing, Marvell

    Data Breakthrough, an independent market intelligence organization that recognizes the top companies, technologies and solutions in the global data technology market, today announced the 2022 winners of its Data Breakthrough Awards. Marvell is thrilled to share that its Bravera™ SC5 SSD controller family was named “Semiconductor Product of the Year” in the Hardware/Components & Infrastructure category.

    Marvell’s Bravera SC5 controllers are the industry’s first PCIe 5.0 SSD controllers, enabling the highest performing data center flash storage solutions. By bringing unprecedented performance, best-in-class efficiency, and leading security features, Bravera SC5 addresses the critical requirements for scalable, containerized storage for optimal cloud infrastructure. Marvell’s Bravera SC5 doubles the performance compared to PCIe 4.0 SSDs, contributing to accelerated workloads and reduced latency, dramatically improving the user experience.

    “Our Bravera SC5 controllers were developed alongside cloud providers, NAND vendors and the larger ecosystem to meet the critical requirements for faster and higher bandwidth cloud storage,” said Thad Omura, vice president of marketing, Flash Business Unit at Marvell. “This award further validates the innovative feature set our solution brings to address the ever-expanding workloads in the cloud. We thank Data Breakthrough for recognizing the vital role that semiconductors play across the digital data industry.” 

    The Data Breakthrough award nominations were evaluated by an independent panel of experts within the larger fields of data science and technology, with the winning products and companies selected based on a variety of criteria, including most innovative and technologically advanced solutions and services.

    More information about the awards can be found here.

  • February 24, 2022

    Marvell showcases its new no-compromise Open RAN solution with ecosystem partners using best of cloud, wireless compute architectures

    By Peter Carson, Senior Director Solutions Marketing, Marvell

    Marvell’s 5G Open RAN architecture leverages its OCTEON Fusion processor and underscores collaborations with Arm and Meta to drive adoption of no-compromise 5G Open RAN solutions

    The wireless industry’s no-compromise 5G Open RAN platform will be on display at Mobile World Congress 2022. The Marvell-designed solution builds on its extensive compute collaboration with Arm and raises expectations about Open RAN capabilities for ecosystem initiatives like the Meta Connectivity Evenstar program, which is aimed at expanding the global adoption of Open RAN. Last year at MWC, Marvell announced it had joined the Evenstar program [read more]. This year, Marvell’s new 5G Open RAN Accelerator will be on display at the Arm booth at MWC 2022. The OCTEON Fusion processor, which integrates 5G in-line acceleration and Arm Neoverse CPUs, is the foundation for Marvell’s Open RAN DU reference design.

    5G is going mainstream with the rapid rollout of next generation networks by every major operator worldwide. The ability of 5G to reliably provide high bandwidth and extremely low latency connectivity is powering applications like metaverse, autonomous driving, industrial IoT, private networks, and many more. 5G is a massive undertaking that is set to transform entire industries and serve the world’s diverse connectivity needs for years to come. But the wireless networks at the center of this revolution are, themselves, undergoing a major transformation – not just in feeds and speeds, but in architecture. More specifically, significant portions of the 5G radio access network (RAN) are moving into the cloud.

  • February 24, 2022

    No-Compromise 5G Open RAN: Compute Architecture

    By Peter Carson, Senior Director Solutions Marketing, Marvell

    Introduction 

    5G networks are evolving to a cloud-native architecture with Open RAN at the center. This explainer series is aimed at de-mystifying the challenges and complexity in scaling these emerging open and virtualized radio access networks. Let’s start with the compute architecture.

    The Problem 

    Open RAN systems based on legacy compute architectures utilize an excessively high number of CPU cores, and excessive energy, to support 5G Layer 1 (L1) and other data-centric processing, like security, networking and storage virtualization. As illustrated in the diagram below, this leaves very few host compute resources available for the tasks the server was originally designed to support. These systems typically offload a small subset of 5G L1 functions, such as forward error correction (FEC), from the host to an external FPGA-based accelerator but execute the processing offline. This kind of look-aside (offline) processing of time-critical L1 functions outside the data path adds latency that degrades system performance.

    Image:  Limitations of Open RAN systems based on general purpose processors

  • February 08, 2022

    Next Evolution for Storage Networking: Self-driving SANs

    By Todd Owens, Field Marketing Director, Marvell and Jacqueline Nguyen, Marvell Field Marketing Manager

    Storage area network (SAN) administrators know they play a pivotal role in ensuring mission-critical workloads stay up and running. The workloads and applications that run on the infrastructure they manage are key to overall business success for the company.

    Like any infrastructure, issues do arise from time to time, and the ability to identify transient links or address SAN congestion quickly and efficiently is paramount. Today, SAN administrators typically rely on proprietary tools and software from the Fibre Channel (FC) switch vendors to monitor the SAN traffic. When SAN performance issues arise, they rely on their years of experience to troubleshoot the issues.

    What creates congestion in a SAN anyway?

    Refresh cycles for servers and storage are typically shorter and more frequent than those of SAN infrastructure. This results in servers and storage arrays that run at different speeds being connected to the SAN. Legacy servers and storage arrays may connect to the SAN at 16GFC bandwidth while newer servers and storage are connected at 32GFC.

    Fibre Channel SANs use buffer credits to manage the flow of traffic in the SAN. When a slower device intermixes with faster devices on the SAN, response times to buffer credit requests can slow down, causing what is called “Slow Drain” congestion. This is a well-known issue in FC SANs that can be time-consuming to troubleshoot, and with newer FC-NVMe arrays the problem can be magnified. But these days are soon coming to an end with the introduction of what we can refer to as the self-driving SAN.
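    The slow-drain effect can be seen in a toy credit-based model: the sender may transmit only while it holds credits, and the receiver returns one credit for each frame it drains from its buffer. The rates and credit counts below are purely illustrative, not actual Fibre Channel parameters.

```python
def frames_delivered(credits, drain_per_tick, send_per_tick, ticks):
    """Simulate a credit-based link for `ticks` time steps and return throughput."""
    delivered, in_buffer = 0, 0
    for _ in range(ticks):
        # Receiver drains buffered frames, returning one credit per frame.
        drained = min(in_buffer, drain_per_tick)
        in_buffer -= drained
        credits += drained
        delivered += drained
        # Sender transmits up to its line rate, limited by credits in hand.
        sent = min(send_per_tick, credits)
        credits -= sent
        in_buffer += sent
    return delivered

# A receiver that drains at full rate vs. a "slow drain" receiver at half rate:
fast = frames_delivered(credits=8, drain_per_tick=4, send_per_tick=4, ticks=100)
slow = frames_delivered(credits=8, drain_per_tick=2, send_per_tick=4, ticks=100)
```

Even though the sender's line rate is unchanged, the slow receiver's delayed credit returns cut delivered throughput roughly in half, which is exactly the congestion pattern described above.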

  • December 07, 2021

    Optical Technologies for 5G Access Networks

    By Matt Bolig, Director, Product Marketing, Networking Interconnect, Marvell

    There’s been a lot written about 5G wireless networks in recent years. It’s easy to see why: 5G technology supports game-changing applications like autonomous driving and smart city infrastructure. Bringing this new reality to fruition will take many years of infrastructure investment and hundreds of billions of dollars globally, as Figure 1 below illustrates.

    Figure 1: Cumulative Global 5G RAN Capex in $B (source: Dell’Oro, July 2021)

    When considering where capital is invested in 5G, one underappreciated aspect is just how much wired infrastructure is required to move massive amounts of data through these wireless networks. 

  • December 06, 2021

    Marvell and Ingrasys Collaborate to Power Ceph Cluster with EBOF in Data Centers

    By Khurram Malik, Senior Manager, Technical Marketing, Marvell

    A massive amount of data is being generated at the edge, in the data center and in the cloud, driving scale-out Software-Defined Storage (SDS), which, in turn, is enabling the industry to modernize data centers for large-scale deployments. Ceph is an open-source, distributed object storage and massively scalable SDS platform, contributed to by a wide range of major high-performance computing (HPC) and storage vendors. Ceph BlueStore back-end storage removes the Ceph cluster performance bottleneck by allowing users to store objects directly on raw block devices and bypass the file system layer, which is especially critical in boosting the adoption of NVMe SSDs in the Ceph cluster. A Ceph cluster with EBOF provides a scalable, high-performance and cost-optimized solution and is a perfect fit for many HPC applications. Traditional data storage technology leverages special-purpose compute, networking, and storage hardware to optimize performance and requires proprietary software for management and administration. As a result, IT organizations can neither scale out easily nor, from a CapEx and OpEx perspective, feasibly deploy petabyte- or exabyte-scale data storage.
    Ingrasys (subsidiary of Foxconn) is collaborating with Marvell to introduce an Ethernet Bunch of Flash (EBOF) storage solution which truly enables scale-out architecture for data center deployments. EBOF architecture disaggregates storage from compute and provides limitless scalability, better utilization of NVMe SSDs, and deploys single-ported NVMe SSDs in a high-availability configuration on an enclosure level with no single point of failure.

    Power Ceph Cluster with EBOF in Data Centers image 1

    Ceph is deployed on commodity hardware and built on multi-petabyte storage clusters. It is highly flexible due to its distributed nature. EBOF use in a Ceph cluster enables added storage capacity to scale up and scale out at an optimized cost and facilitates high-bandwidth utilization of SSDs. A typical rack-level Ceph solution includes a networking switch for client and cluster connectivity; a minimum of three monitor nodes per cluster for high availability and resiliency; and Object Storage Daemon (OSD) hosts for data storage, replication, and data recovery operations. Ceph traditionally recommends a minimum of three replicas, with the copies distributed across different storage nodes, but this results in lower usable capacity and consumes higher bandwidth. Another challenge is that data redundancy and replication are compute-intensive and add significant latency. To overcome these challenges, Ingrasys has introduced a more efficient Ceph cluster rack developed with management software – Ingrasys Composable Disaggregated Infrastructure (CDI) Director.
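
    The capacity and bandwidth cost of three-way replication noted above can be sketched with back-of-the-envelope arithmetic. The figures below are hypothetical illustrations, not numbers from the Ingrasys solution:

```python
# With replication factor R, each client byte is stored R times, so usable
# capacity is raw/R, and each client write triggers R backend writes.

def usable_capacity_tb(raw_tb, replicas):
    return raw_tb / replicas

def effective_write_gbps(backend_gbps, replicas):
    # Backend bandwidth divided by the write amplification of replication.
    return backend_gbps / replicas

raw = 1000  # hypothetical: 1 PB of raw NVMe capacity in the cluster
print(usable_capacity_tb(raw, replicas=3))    # ~333 TB actually usable
print(effective_write_gbps(300, replicas=3))  # ~100 Gbps of client writes
```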

  • November 17, 2021

    Still the One: Why Fibre Channel Will Remain the Gold Standard for Storage Connectivity

    By Todd Owens, Field Marketing Director, Marvell

    For the past two decades, Fibre Channel has been the gold standard protocol in Storage Area Networking (SAN) and has been a mainstay in the data center for mission-critical workloads, providing high-availability connectivity between servers, storage arrays and backup devices. If you’re new to this market, you may have wondered if the technology’s origin has some kind of British backstory. Actually, the spelling of “Fibre” simply reflects the fact that the protocol supports not only optical fiber but also copper cabling; though the latter is for much shorter distances.

    During this same period, servers matured into multicore, high-performance machines with significant amounts of virtualization. Storage arrays have moved away from rotating disks to flash and NVMe storage devices that deliver higher performance at much lower latencies. New storage solutions based on hyperconverged infrastructure have come to market to allow applications to move out of the data center and closer to the edge of the network. Ethernet networks have gone from 10Mbps to 100Gbps and beyond. Given these changes, one would assume that Fibre Channel’s best days are in the past.

    The reality is that Fibre Channel technology remains the gold standard for server to storage connectivity because it has not stood still and continues to evolve to meet the demands of today’s most advanced compute and storage environments. There are several reasons Fibre Channel is still favored over other protocols like Ethernet or InfiniBand for server to storage connectivity.

  • November 09, 2021

    Network Visibility of 5G Radio Access Networks, Part 2

    By Gidi Navon, Senior Principal Architect, Marvell

    In part one of this blog, we discussed the ways the Radio Access Network (RAN) is dramatically changing with the introduction of 5G networks and the growing importance of network visibility for mobile network operators. In part two of this blog, we’ll delve into resource monitoring and Open RAN monitoring, and further explain how Marvell’s Prestera® switches equipped with TrackIQ visibility tools can ensure the smooth operation of the network for operators.

    Resource monitoring

    Monitoring latency is a critical way to identify problems in the network. However, by the time measured latency is high, it is already too late: the radio network has already started to degrade. The fronthaul network, in particular, is sensitive to even a small increase in latency. Therefore, mobile operators need to keep the fronthaul segment below the point of congestion, thus achieving extremely low latencies.

    Visibility tools for Radio Access Networks need to measure the utilization of ports, making sure links never get congested. More precisely, they need to make sure the rate of the high priority queues carrying the latency sensitive traffic (such as eCPRI user plane data) is well below the allocated resources for such a traffic class.

    A common mistake is measuring rates over long intervals. Imagine a traffic scenario over a 100GbE link, as shown in Figure 1, with quiet intervals and busy intervals. Checking the rate over long intervals of seconds will reveal only the average port utilization of 25%, giving the false impression that the network has high margins, without noticing the peak rate. The peak rate, which is close to 100%, can easily lead to egress queue congestion, resulting in buffer buildup and higher latencies.
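
    The averaging pitfall can be reproduced in a few lines. The trace below is a hypothetical illustration constructed to match the 25%-average, near-100%-peak scenario described above:

```python
# Hypothetical 1-second trace of a bursty 100GbE link, sampled every 1 ms:
# 250 ms of line-rate traffic followed by 750 ms of quiet.
samples_gbps = [100 if (t // 250) % 4 == 0 else 0 for t in range(1000)]

avg_over_second = sum(samples_gbps) / len(samples_gbps)  # long-interval view
peak_1ms_window = max(samples_gbps)                      # short-interval view

print(avg_over_second)   # 25.0 -> "25% utilization, plenty of headroom"
print(peak_1ms_window)   # 100  -> the link is saturated during the burst
```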

  • October 20, 2021

    Low Power DSP-Based Transceivers for Data Center Optical Fiber Communications

    By Radha Nagarajan, SVP and CTO, Optical and Copper Connectivity Business Group

    As the volume of global data continues to grow exponentially, data center operators often confront a frustrating challenge: how to process a rising tsunami of terabytes within the limits of their facility’s electrical power supply – a constraint imposed by the physical capacity of the cables that bring electric power from the grid into their data center.

    Fortunately, recent innovations in optical transmission technology – specifically, in the design of optical transceivers – have yielded tremendous gains in energy efficiency, which frees up electric power for more valuable computational work.

    Recently, at the invitation of the Institute of Electrical and Electronics Engineers, my Marvell colleagues Ilya Lyubomirsky, Oscar Agazzi and I published a paper detailing these technological breakthroughs, titled Low Power DSP-based Transceivers for Data Center Optical Fiber Communications.

  • October 18, 2021

    Network Visibility of 5G Radio Access Networks, Part 1

    By Gidi Navon, Senior Principal Architect, Marvell

    The Radio Access Network (RAN) is dramatically changing with the introduction of 5G networks and this, in turn, is driving home the importance of network visibility. Visibility tools are essential for mobile network operators to guarantee the smooth operation of the network and for providing mission-critical applications to their customers.

    In this blog, we will demonstrate how Marvell’s Prestera® switches equipped with TrackIQ visibility tools are evolving to address the unique needs of such networks.

    The changing RAN

    The RAN is the portion of a mobile system that spans from the cell tower to the mobile core network. Until recently, it was built from vendor-developed interfaces like CPRI (Common Public Radio Interface) and typically delivered as an end-to-end system by one RAN vendor in each contiguous geographic area.

    Lately, with the introduction of 5G services, the RAN is undergoing several changes as shown in Figure 1 below:

  • October 11, 2021

    Trends Driving Innovations in Next-Generation Retail Networking

    By Amit Thakkar, Senior Director, Product Management, Marvell

    The retail segment of the global economy has been one of the hardest hit by the Covid-19 pandemic. Lockdowns shuttered stores for extended periods, while social distancing measures significantly impacted foot traffic in these spaces. Now, as consumer demand has shifted rapidly from physical to virtual stores, the sector is looking to reinvent itself and apply lessons learned from the pandemic. One important piece of knowledge that has surfaced across the retail industry: Investing in critical data infrastructure is a must in order to rapidly accommodate changes in consumption patterns.

    Consumers have become much more conscious of the digital experience and, as such, prefer a seamless transition in shopping experiences across both virtual and brick-and-mortar stores. Retailers are revisiting investment in network infrastructure to ensure that the network is “future-proofed” to withstand consumer demand swings. It will be critical to offer new customer-focused, personalized experiences such as cashier-less stores and smart shopping in a manner that is secure, resilient, and high performance. Infrastructure companies will need to be able to bring a complete set of technology options to meet the digital transformation needs of the modern distributed enterprise.

    Highlighted below are five emerging technology trends in enterprise networking that are driving innovations in the retail industry to build the modern store experience.

  • October 04, 2021

    Marvell and Los Alamos National Laboratory Demonstrate High-Bandwidth Capability for HPC Storage Workloads in the Data Center with Ethernet-Bunch-Of-Flash (EBOF) Platform

    By Khurram Malik, Senior Manager, Technical Marketing, Marvell

    As data growth continues at a tremendously rapid pace, data centers have a strong demand for scalable, flexible, high-bandwidth storage solutions. Data centers need an efficient infrastructure to meet the growing requirements of next-generation high performance computing (HPC), machine learning (ML)/artificial intelligence (AI), composable disaggregated infrastructure (CDI), and storage expansion shelf applications, which necessitate scalable, high-performance, and cost-efficient technologies. Hyperscalers and storage OEMs aim to scale system-level performance linearly with the number of NVMe SSDs that plug into the system. However, the current NVMe-oF storage target Just-A-Bunch-Of-Flash (JBOF) architecture connects high-performance NVMe SSDs behind the JBOF components, causing system-level performance bottlenecks due to CPU, DRAM, PCIe switch and smartNIC bandwidth. In addition, the JBOF architecture requires a fixed ratio of CPUs to SSDs, which results in underutilized resources. A further challenge is that, because of overall system cost overhead, the scalability of the CPU, DRAM, and smartNIC devices does not match the total bandwidth of the corresponding NVMe SSDs in the system, which impacts system-level performance.

    Marvell introduced its industry-first NVMe-oF to NVMe SSD converter controller, the 88SN2400, for data center storage applications. It enables the industry to adopt EBOF storage architecture, which provides an innovative approach to addressing the JBOF architecture challenges and truly disaggregates storage from compute. The EBOF architecture replaces JBOF bottleneck components like CPUs, DRAM and smartNICs with an Ethernet switch and terminates NVMe-oF either on the bridge or on the Ethernet SSD. Marvell is enabling NAND vendors to offer Ethernet SSD products. The EBOF architecture allows scalability, flexibility, and full utilization of PCIe NVMe drives.
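
    The JBOF-versus-EBOF scaling argument can be sketched numerically. All throughput figures below are hypothetical illustrations, not product specifications:

```python
# In a JBOF, aggregate throughput saturates at the head-end cap (smartNIC/
# CPU/DRAM) no matter how many SSDs are added. In an EBOF, each Ethernet
# SSD (or bridge) terminates NVMe-oF itself, so aggregate throughput
# scales with the SSD count up to the Ethernet switch fabric capacity.

def jbof_gbps(num_ssds, ssd_gbps=25, head_end_cap_gbps=200):
    return min(num_ssds * ssd_gbps, head_end_cap_gbps)

def ebof_gbps(num_ssds, ssd_gbps=25):
    return num_ssds * ssd_gbps  # bounded only by the switch fabric

for n in (4, 8, 16, 24):
    # JBOF flatlines at the 200 Gbps head-end cap; EBOF keeps scaling.
    print(n, jbof_gbps(n), ebof_gbps(n))
```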

  • October 03, 2021

    Unleashing 5G Network Performance with Next Generation Ethernet

    By Alik Fishman, Director of Product Management, Marvell

    Blink your eyes. That’s how fast data will travel from your future 5G-enabled device, over the network to a server and back. Like Formula 1 racing cars needing special tracks for optimal performance, 5G requires agile networking transport infrastructure to unleash its full potential. The 5G radio access network (RAN) requires not only base stations with higher throughputs and soaring speeds but also an advanced transport network, capable of securely delivering fast response times to mobile end points, whatever those might be: phones, cars or IoT devices. Radio site densification and Massive Machine-type Communication (mMTC) technology are rapidly scaling the mobile network to support billions of end devices1, amplifying the key role of network transport to enable instant and reliable connectivity.

    With Ethernet being adopted as the most efficient transport technology, carrier routers and switches are tasked to support a variety of use cases over shared infrastructure, driving the growth in Ethernet gear installations. In traditional cellular networks, baseband and radio resources were co-located and dedicated at each cell site. This created significant challenges to support growth and shifts in traffic patterns with available capacity. With the emergence of more flexible centralized architectures such as C-RAN, baseband processing resources are pooled in base station hubs called central units (CUs) and distributed units (DUs) and dynamically shared with remote radio units (RUs). This creates even larger concentrations of traffic to be moved to and from these hubs over the network transport.

  • September 14, 2021

    Why the 100G Optical Module Transformation is Full Steam Ahead

    By Rohan Gandhi, Product Marketing Manager, Optical and Copper Connectivity

    When the London Underground opened its first line in 1863, a group of doubtful dignitaries boarded a lurching, smoke-belching train for history’s inaugural subway ride. The next day, thirty thousand curious Londoners flooded the nascent system, and within a year, more than nine million had embraced its use. Nearly 160 years later, that original tunnel is still in daily use, joining 250 miles of track that carry more than 1.3 billion passengers annually.

    What were the keys to such extraordinary growth? Not just popular demand for more tunnels, but also better use of accumulated infrastructure – optimized through newer trains, enhanced signaling, greater energy efficiency, and smarter scheduling. In a sense, the Tube’s transformation mirrors the fundamental challenge now confronting modern data centers: how to make better use of existing infrastructure to handle today’s exponential growth of data.

    PAM4 DSP Technology is Fast and Flexible

    To keep up with the surging data demands of new video and AI workloads, modern data centers can’t simply add more and bigger pipes – at least not cost-effectively. They need PAM4 based optical module solutions to effectively and efficiently move more bandwidth at higher speeds. In addition, they need to be able to update the optical modules via software, optimizing existing infrastructure at an affordable price.

  • September 07, 2021

    Got Chemistry? Windows Server 2022 and Marvell QLogic Fibre Channel

    By Nishant Lodha, Director of Product Marketing – Emerging Technologies, Marvell

    Recently, Microsoft® announced the general availability of Windows® Server 2022, a release that us geeks refer to with its codename “Iron.” At Marvell we have long worked to integrate our server connectivity solutions into Windows and like to think of the Marvell® QLogic® Fibre Channel (FC) technology as that tiny bit of “carbon” that turns “iron” to “steel” – strong yet flexible and designed to make business applications shine. Let’s dive into the bits and bytes of how the combination of Windows Server 2022 and Marvell QLogic FC makes for great chemistry.

    If you ask hybrid cloud IT managers and architects to identify the three things they need more of from their IT infrastructure, the responses would resoundingly focus on the following: improved security, scalability that does not break the bank, and an easy way to manage the hardest things. Based on the input from our customers on the challenges that they face in today’s demanding and evolving IT environments, Marvell has continued to enhance its QLogic FC technology to address these critical requirements. Marvell QLogic FC technology builds on the new features of Microsoft Windows Server 2022 and further extends the security, scalability and management capabilities to offer server connectivity solutions that are designed specifically with our customers’ needs in mind.

  • August 16, 2021

    Highly Integrated Silicon Photonics Light Engines in High-Speed Data Transport

    By Radha Nagarajan, SVP and CTO, Optical and Copper Connectivity Business Group

    The exponential increase in bandwidth demand will drive continuous innovation in, and deployment of, data movement interconnects for Cloud and Telecom providers.  As a result, highly integrated silicon photonics platform solutions are expected to become a key enabling technology for the cloud and telecom market over the next decade.

    What Does Highly Integrated Silicon Photonics Platform Mean for the Infrastructure Business?

    As speed continues to go up, optical will replace copper as the primary conduit of the digital bits inside Cloud data centers.  Marvell is investing heavily in silicon photonics to complement our high-speed CMOS technologies in data center interconnects to accelerate this transition.

    • Silicon photonic solutions have been successfully deployed inside Cloud data centers for 100G to compete with traditional “chip-and-wire” discrete solutions.  We expect silicon photonics will gain market share as the Cloud providers transition to the next bit rate of 400G.
    • Integrated silicon photonics platform solutions have intrinsic advantage over conventional packaging solutions at ever increasing baud rates.
    • Hyperscale data centers have limited power and cooling available for servers and interconnects. Integration technology is attractive where space and power savings are critical.
    • Integrating optical components on a silicon interposer can leverage the cost benefits of large-scale automated electronics assembly eco-system versus the traditional “chip-and-wire” optical industry.
  • June 21, 2021

    Marvell Shares 5G, Cloud and Data Infrastructure Insights at The Six Five Summit

    By Marvell, PR Team

    Last week, Moor Insights and Futurum Research kicked off The Six Five Summit, a virtual, on demand event focused on the latest developments and trends in digital transformation. Marvell was thrilled to join alongside the world’s leading technology companies to share insights on strategy, innovation and where the industry is heading.

    Marvell’s Raghib Hussain, President, Products and Technologies participated in the event’s Cloud and Infrastructure Day to discuss the evolution of the cloud data center including the shift from application-specific to data-centric compute. In his presentation, “Accelerating the Cloud Data Center Evolution,” Raghib focuses on how scalability, performance and efficiency are driving technology infrastructure requirements and why optimized and customized silicon solutions are the future of the cloud.

  • June 02, 2021

    Breaking Digital Logjams with NVMe

    By Ian Sagan, Marvell Field Applications Engineer and Jacqueline Nguyen, Marvell Field Marketing Manager and Nick De Maria, Marvell Field Applications Engineer

    Have you ever been stuck in bumper-to-bumper traffic? Frustrated by long checkout lines at the grocery store? Trapped at the back of a crowded plane while late for a connecting flight?

    Such bottlenecks waste time, energy and money. And while today’s digital logjams might seem invisible or abstract by comparison, they are just as costly, multiplied by zettabytes of data struggling through billions of devices – a staggering volume of data that is only continuing to grow.

    Fortunately, emerging Non-Volatile Memory Express (NVMe) technology can clear many of these digital logjams almost instantaneously, empowering system administrators to deliver quantum leaps in efficiency, resulting in lower latency and better performance. To the end user, this means avoiding the dreaded spinning icon and getting an immediate response.

  • April 29, 2021

    Back to the Future – Automotive Networks Run at Speeds of 10Gbps

    By Amir Bar-Niv, VP of Marketing, Automotive Business Unit, Marvell

    In the classic 1980s “Back to the Future” movie trilogy, Doc Brown – inventor of the DeLorean time machine – declares that "your future is whatever you make it, so make it a good one.” At Marvell, engineers are doing just that by accelerating automotive Ethernet capabilities: Earlier this week, Marvell announced the latest addition to its automotive products portfolio – the 88Q4346 802.3ch-based multi-gig automotive Ethernet PHY.

    This technology addresses three emerging automotive trends requiring multi-gig Ethernet speeds, including:

    1. The increasing integration of high-resolution cameras and sensors
    2. Growing utilization of powerful 5G networks
    3. The rise of Zonal Architecture in car design
  • April 01, 2021

    Marvell Enables O-RAN to Help 5G Fulfill its True Potential

    By Marvell, PR Team

    At the most recent FierceWireless 5G Blitz Week, some of the world’s leading 5G innovators met via webinar to discuss the potential of O-RAN and challenges of the ongoing 5G rollout. In a keynote, EVP and General Manager of Marvell’s Processors Business Group Raj Singh explored the accelerating shift to O-RAN, which is an emerging open-source architecture for Radio Access Networks that enables customers to create better 5G applications by mixing and matching RAN technology from different vendors.

    O-RAN architectures are compelling because they increase competition among vendors, reduce costs, and offer customers greater flexibility to combine RAN elements according to their application’s specific use cases. However, in addition to their obvious benefits, O-RAN solutions also raise operator concerns including potential challenges with integration, legacy support, interoperability and security – issues that Marvell and other companies in the Open RAN Policy Coalition are addressing through shared standards, proven solutions and innovative approaches.

  • January 29, 2021

    Full Steam Ahead! Marvell Ethernet Device Bridge Receives Avnu Certification

    By Amir Bar-Niv, VP of Marketing, Automotive Business Unit, Marvell and John Bergen, Sr. Product Marketing Manager, Automotive Business Unit, Marvell

    In the early decades of American railroad construction, competing companies laid their tracks at different widths. Such inconsistent standards drove inefficiencies, preventing the easy exchange of rolling stock from one railroad to the next, and impeding the infrastructure from coalescing into a unified national network. Only in the 1860s, when a national standard emerged – 4 feet, 8-1/2 inches – did railroads begin delivering their true, networked potential.

    Some one hundred-and-sixty years later, as Marvell and its competitors race to reinvent the world’s transportation networks, universal design standards are more important than ever. Recently, Marvell’s 88Q5050 Ethernet Device Bridge became the first of its type in the automotive industry to receive Avnu certification, meeting exacting new technical standards that facilitate the exchange of information between diverse in-car networks, which enable today’s data-dependent vehicles to operate smoothly, safely and reliably.

  • January 14, 2021

    What’s Next in System Integration and Packaging? New Approaches to Networking and Cloud Data Center Chip Design

    By Wolfgang Sauter, Customer Solutions Architect - Packaging, Marvell

    The continued evolution of 5G wireless infrastructure and high-performance networking is driving the semiconductor industry to unprecedented technological innovations, signaling the end of traditional scaling on Single-Chip Module (SCM) packaging. With the move to 5nm process technology and beyond, 50T Switches, 112G SerDes and other silicon design thresholds, it seems that we may have finally met the end of the road for Moore’s Law.1 The remarkable and stringent requirements coming down the pipe for next-generation wireless, compute and networking products have all created the need for more innovative approaches. So what comes next to keep up with these challenges? Novel partitioning concepts and integration at the package level are becoming game-changing strategies to address the many challenges facing these application spaces.

    During the past two years, leaders in the industry have started to embrace these new approaches to modular design, partitioning and package integration. In this paper, we will look at what is driving the main application spaces and how packaging plays into next-generation system architectures, especially as it relates to networking and cloud data center chip design.

  • January 11, 2021

    Industry’s First NVMe Boot Device for HPE® ProLiant® and HPE Apollo Servers Delivers Simple, Secure and Reliable Boot Solution based on Innovative Technology from Marvell

    By Todd Owens, Field Marketing Director, Marvell

    Today, operating systems (OSs) like VMware recommend that OS data be kept completely separated from user data using non-network RAID storage. This is a best practice for any virtualized operating system including VMware, Microsoft Azure Stack HCI (Storage Spaces Direct) and Linux. Thanks to innovative flash memory technology from Marvell, a new secure, reliable and easy-to-use OS boot solution is now available for Hewlett Packard Enterprise (HPE) servers.

    While 32GB micro-SD or USB boot device options are available today, VMware requires as much as 128GB of storage for the OS and Microsoft Storage Spaces Direct needs 200GB — these options simply don’t have the storage capacity needed. Using hardware RAID controllers and disk drives in the server bays is another option. However, this adds significant cost and complexity to a server configuration just to meet the OS requirement. The proper solution for separating the OS from user data is the HPE NS204i-p NVMe OS Boot Device.

  • November 01, 2020

    Superior Performance in the Borderless Enterprise – White Paper

    By Gidi Navon, Senior Principal Architect, Marvell

    The current environment and an expected “new normal” are driving the transition to a borderless enterprise that must support increasing performance requirements and evolving business models. The infrastructure is seeing growth in the number of endpoints (including IoT) and escalating demand for data such as high-definition content. Ultimately, wired and wireless networks are being stretched as data-intensive applications and cloud migrations continue to rise.

  • November 12, 2020

    Flash Memory Summit Names Marvell a 2020 Best of Show Award Winner

    By Lindsey Moore, Marketing Coordinator, Marvell

    Marvell wins FMS Award for Most Innovative Technology

    Flash Memory Summit, the industry's largest trade show dedicated to flash memory and solid-state storage technology, presented its 2020 Best of Show Awards yesterday in a virtual ceremony. Marvell, alongside Hewlett Packard Enterprise (HPE), was named a winner for "Most Innovative Flash Memory Technology" in the controller/system category for the Marvell NVMe RAID accelerator in the HPE OS Boot Device.

    Last month, Marvell introduced the industry’s first native NVMe RAID 1 accelerator, a state-of-the-art technology for virtualized, multi-tenant cloud and enterprise data center environments which demand optimized reliability, efficiency, and performance. HPE is the first of Marvell's partners to support the new accelerator in the HPE NS204i-p NVMe OS Boot Device offered on select HPE ProLiant servers and HPE Apollo systems. The solution lowers data center total cost of ownership (TCO) by offloading RAID 1 processing from costly and precious server CPU resources, maximizing application processing performance.

  • October 30, 2020

    Adhir Mattu Named Global Winner of the 2020 Bay Area CIO of the Year ORBIE Awards

    By Lindsey Moore, Marketing Coordinator, Marvell

    The Bay Area CIO of the Year ORBIE Awards recognized chief information officers in eight key categories – Leadership, Super Global, Global, Large Enterprise, Enterprise, Large Corporate, Corporate, and Nonprofit/Public Sector.

    “The BayAreaCIO ORBIE winners demonstrate the value great leadership creates. Especially in these uncertain times, CIOs are leading in unprecedented ways and enabling the largest work-from-home experiment in history,” according to Lourdes Gipson, Executive Director of BayAreaCIO. “The ORBIE Awards are meaningful because they are judged by peers - CIOs who understand how difficult this job is and why great leadership matters.”

  • October 30, 2020

    Matt Murphy Talks Inphi Acquisition on CNBC’s Squawk Alley

    By Stacey Keegan, Vice President, Corporate Marketing, Marvell

    Yesterday Marvell announced its intent to join forces with Inphi, a leader in high-speed data movement. A premier company in the semiconductor industry, and one of the most highly regarded in our space, Inphi brings a highly complementary portfolio that accelerates Marvell’s growth and leadership in cloud and 5G. The combination of the two companies is expected to create a U.S. semiconductor powerhouse with an enterprise value of approximately $40 billion.

    Combined with explosive Internet traffic growth and the rollout of new ultra-fast 5G wireless networks, the importance of Inphi’s high-speed data interconnect solutions will only accelerate. The merged company will be uniquely positioned to serve the data-driven world, addressing high growth, attractive end markets – cloud data center and 5G.

    President and CEO of Marvell, Matt Murphy had the opportunity to discuss the deal with CNBC’s Squawk Alley team after the news broke yesterday morning. Catch a replay of that video broadcast here to learn more.

    Press Release: (Click Here)


    Analyst commentary from Patrick Moorhead of Moor Insights & Strategy and Daniel Newman of Futurum Research: (Click Here)

  • October 27, 2020

    Unleashing a Better Gaming Experience with NVMe RAID

    By Shahar Noy, Senior Director, Product Marketing

    You are an avid gamer. You spend countless hours in forums deciding between ASUS TUF components and researching the Radeon RX 500 or GeForce RTX 20, to ensure games show at their best on your hard-earned PC gaming rig. You made your selection and can’t stop bragging about your system’s ray tracing capabilities and how realistic the “Forza Motorsport 7” view from your McLaren F1 GT cockpit looks when you drive through the legendary Le Mans circuit at dusk. You are very proud of your machine, and the year 2020 is turning out to be good: Microsoft finally launched the gorgeous-looking “Flight Simulator 2020,” and CD Projekt just announced that the beloved and award-winning “The Witcher 3” is about to get an upgrade to take advantage of the myriad of hardware updates available to serious gamers like you. You have your dream system in hand, and life can’t be better.

  • October 20, 2020

    Network Visibility in the Borderless Enterprise – White Paper

    By Gidi Navon, Senior Principal Architect, Marvell


    Enterprise networks are changing, adapting and expanding to become a borderless enterprise. Visibility tools must evolve to meet the new requirements of an enterprise that now extends beyond the traditional campus — across multi-cloud environments to the edge.

  • October 09, 2020

    Matt Murphy Joins CNBC’s Jim Cramer for a Post Investor Day Discussion on Mad Money

    By Stacey Keegan, Vice President, Corporate Marketing, Marvell

    Following the company’s 2020 Investor Day, Marvell President and CEO, Matt Murphy, joined Jim Cramer on CNBC’s Mad Money to discuss yesterday’s event highlights. Calling out significant growth opportunities across Marvell’s key market segments – including #5G, #DataCenter, #Cloud and #Automotive – Murphy noted that adoption of both 5G and Cloud remain in the early innings and that Marvell is well positioned to see continued benefits from these long-term growth markets.

    As working from home accelerates digital transformation, Marvell is building the next generation data infrastructure semiconductor technology that will power the world’s progress.



    Watch more videos on the Marvell YouTube Channel & Subscribe:  (Click Here)

  • October 07, 2020

    Ethernet Advanced Features for Automotive Applications

    By Amir Bar-Niv, VP of Marketing, Automotive Business Unit, Marvell

    Ethernet standards comprise a long list of features and solutions that have been developed over the years to address real network needs as well as security threats. Now, developers of Ethernet In-Vehicle-Networks (IVN) can easily balance functionality and cost by choosing the specific features they would like to have in their car’s network.

    The roots of Ethernet technology reach back to 1973, when Bob Metcalfe, a researcher at Xerox Research Center (who later founded 3COM), wrote a memo entitled “Alto Ethernet,” which described how to connect computers over short-distance copper cable. With the explosion of PC-based Local Area Networks (LAN) in businesses and corporations in the 1980s, the growth of client/server LAN architectures continued, and Ethernet started to become the connectivity technology of choice for these networks. However, the advancement that made Ethernet the most successful networking technology ever was the start of standardization efforts under the IEEE 802.3 group.

  • August 31, 2020

    Arm processors in the Data Center

    By Raghib Hussain, President, Products and Technologies

    Last week, Marvell announced a change in our strategy for ThunderX, our Arm-based server-class processor product line. I’d like to take the opportunity to put some more context around that announcement, and our future plans in the data center market.

    ThunderX is a product line that we started at Cavium, prior to our merger with Marvell in 2018. At Cavium, we had built many generations of successful processors for infrastructure applications, including our Nitrox security processor and OCTEON infrastructure processor. These processors have been deployed in the world’s most demanding data-plane applications such as firewalls, routers, SSL-acceleration, cellular base stations, and Smart NICs. Today, OCTEON is the most scalable and widely deployed multicore processor in the market.

  • August 28, 2020

    Matt Murphy Talks Marvell’s Market Traction on CNBC’s Squawk Alley

    By Stacey Keegan, Vice President, Corporate Marketing, Marvell

    Marvell President and CEO, Matt Murphy, discussed Marvell’s second quarter Earnings beat this morning with the CNBC Squawk Alley team. 

    Marvell’s growth is being driven by our success in our key data infrastructure end markets. In 5G wireless infrastructure in particular, we have seen four consecutive quarters of sequential growth. Right now, this is most pronounced in China, where 5G is being rolled out. But with other countries working on rollout plans, and four of the top five base station vendors as Marvell customers, the growth from 5G is just beginning.

    Marvell also has a large and growing data center business, spanning both enterprise on-prem data centers and now the cloud. We announced last quarter that cloud is now over 10% of our revenue and growing fast. The reason we are seeing strong growth is that we are producing the key storage and security products for cloud. This includes chips for huge multi-terabyte hard drives, where all cloud data is stored. It also includes our networking products, which doubled year-over-year. And finally, growth in this area includes Marvell’s custom products that came to us through a recent acquisition. This is how several of the larger data center operators like to buy chips: we build exactly what they want.

    Watch the full interview here.

  • August 27, 2020

    How to Reap the Benefits of NVMe over Fabric in 2020

    By Todd Owens, Field Marketing Director, Marvell

    As native Non-volatile Memory Express (NVMe®) share-storage arrays continue enhancing our ability to store and access more information faster across a much bigger network, customers of all sizes – enterprise, mid-market and SMBs – confront a common question: what is required to take advantage of this quantum leap forward in speed and capacity?

    Of course, NVMe technology itself is not new, and is commonly found in laptops, servers and enterprise storage arrays. NVMe provides an efficient command set that is specific to memory-based storage, delivers increased performance designed to run over PCIe 3.0 or PCIe 4.0 bus architectures, and, offering 64,000 command queues with 64,000 commands per queue, can provide much more scalability than other storage protocols.

  • August 19, 2020

    Navigating Product Name Changes for Marvell Ethernet Adapters at HPE

    By Todd Owens, Field Marketing Director, Marvell


    Hewlett Packard Enterprise (HPE) recently updated its product naming protocol for the Ethernet adapters in its HPE ProLiant and HPE Apollo servers. Its new approach is to include the ASIC model vendor’s name in the HPE adapter’s product name. This commonsense approach eliminates the need for model number decoder rings on the part of Channel Partners and the HPE Field team and provides everyone with more visibility and clarity. This change also aligns more with the approach HPE has been taking with their “Open” adapters on HPE ProLiant Gen10 Plus servers. All of this is good news for everyone in the server sales ecosystem, including the end user. The products’ core SKU numbers remain the same, too, which is also good.

  • August 18, 2020

    From Strong Awareness to Decisive Action: Meet Mr. QLogic

    By Nishant Lodha, Director of Product Marketing – Emerging Technologies, Marvell

    Marvell® Fibre Channel HBAs are getting a promotion and here is the announcement email -

    I am pleased to announce the promotion of “Mr. QLogic® Fibre Channel” to Senior Transport Officer, Storage Connectivity at Enterprise Datacenters Inc. Mr. QLogic has been an excellent partner and instrumental in optimizing mission critical enterprise application access to external storage over the past 20 years. When Mr. QLogic first arrived at Enterprise Datacenters, block storage was in disarray and efficiently scaling out performance seemed like an insurmountable challenge. Mr. QLogic quickly established himself as a go-to leader and trusted partner for enabling low latency access to external storage across disk and flash. Mr. QLogic successfully collaborated with other industry leaders like Brocade and Mr. Cisco MDS to lay the groundwork for a broad set of innovative technologies under the StorFusion™ umbrella. In his new role, Mr. QLogic will further extend the value of StorFusion by bringing awareness of Storage Area Network (SAN) congestion into the server, while taking decisive action to prevent bottlenecks that may degrade mission critical enterprise application performance.

    Please join me in congratulating QLogic on this well-deserved promotion.

  • August 12, 2020

    Put a Cherry on Top! Introducing FC-NVMe v2

    By Nishant Lodha, Director of Product Marketing – Emerging Technologies, Marvell

    Once upon a time, data centers confronted a big problem – how to enable business-critical applications on servers to access distant storage with exceptional reliability. In response, the brightest storage minds invented Fibre Channel. Its ultra-reliability came from being implemented on a dedicated network and buffer-to-buffer credits. For a real-life parallel, think of a guaranteed parking spot at your destination, and knowing it’s there before you leave your driveway. That worked fairly well. But as technology evolved and storage changed from spinning media to flash memory with NVMe interfaces, the same bright minds developed FC-NVMe. This solution delivered a native NVMe storage transport without necessitating rip-and-replace by enabling existing 16GFC and 32GFC HBAs and switches to do FC-NVMe. Then came a better understanding of how cosmic rays affect high-speed networks, occasionally flipping a subset of bits, introducing errors.

  • July 28, 2020

    Living on the Network Edge: Security

    By Alik Fishman, Director of Product Management, Marvell

    In our series Living on the Network Edge, we have looked at the trends driving Intelligence, Performance and Telemetry to the network edge. In this installment, let’s look at the changing role of network security and the ways integrating security capabilities in network access can assist in effectively streamlining policy enforcement, protection, and remediation across the infrastructure.

    Cybersecurity threats are now a daily struggle for businesses experiencing a huge increase in hacked and breached data from sources increasingly common in the workplace like mobile and IoT devices. Not only is the number of security breaches going up, the breaches are also increasing in severity and duration, with the average lifecycle from breach to containment lasting nearly a year1 and presenting expensive operational challenges. With the digital transformation and emerging technology landscape (remote access, cloud-native models, proliferation of IoT devices, etc.) dramatically impacting networking architectures and operations, new security risks are introduced. To address this, enterprise infrastructure is on the verge of a remarkable change, elevating network intelligence, performance, visibility and security2.

  • July 23, 2020

    Telemetry: Can You See the Edge?

    By Suresh Ravindran, Senior Director, Software Engineering

    So far in our series Living on the Network Edge, we have looked at trends driving Intelligence and Performance to the network edge. In this blog, let’s look into the need for visibility into the network.

    As automation trends evolve, the number of connected devices is seeing explosive growth. IDC estimates that there will be 41.6 billion connected IoT devices generating a whopping 79.4 zettabytes of data in 20251. A significant portion of this traffic will be video flows and sensor traffic which will need to be intelligently processed for applications such as personalized user services, inventory management, intrusion prevention and load balancing across a hybrid cloud model. Networking devices will need to be equipped with the ability to intelligently manage processing resources to efficiently handle huge amounts of data flows.

  • July 22, 2020

    The Six Five – Insiders Edition: The New Marvell - Interview with Chris Koopmans

    By Stacey Keegan, Vice President, Corporate Marketing, Marvell

    Chris Koopmans, Executive Vice President of Marketing and Business Operations, recently joined Patrick Moorhead and Daniel Newman, hosts of The Six Five – Insiders Edition, to discuss the future of semiconductors and the critical role they’re playing in 5G, cloud, the automotive revolution, and the borderless enterprise.

    I highly encourage you to watch the video all the way to the end. Don’t have the time to catch the full episode? Skip to the closing commentary from the analysts where you’ll hear Pat and Dan discuss Marvell’s mission and focus on transforming the data infrastructure architecture of the future.  

    Watch the full episode here.

     

  • July 16, 2020

    The Need for Speed at the Edge

    By George Hervey, Principal Architect, Marvell


    In the previous TIPS to Living on the Edge, we looked at the trend of driving network intelligence to the edge. With the capacity enabled by the latest wireless networks, like 5G, the infrastructure will enable the development of innovative applications. These applications often employ a high-frequency activity model, for example video or sensors, where the activities are often initiated by the devices themselves generating massive amounts of data moving across the network infrastructure. Cisco’s VNI Forecast Highlights predicts that global business mobile data traffic will grow six-fold from 2017 to 2022, or at an annual growth rate of 42 percent1, requiring a performance upgrade of the network.

  • July 08, 2020

    Driving Network Intelligence and Processing to the Edge

    By George Hervey, Principal Architect, Marvell


    The mobile phone has become such an essential part of our lives as we move towards more advanced stages of the “always on, always connected” model. Our phones provide instant access to data and communication mediums, and that access influences the decisions we make and ultimately, our behavior.

    According to Cisco, global mobile networks will support more than 12 billion mobile devices and IoT connections by 2022.1 And these mobile devices will support a variety of functions. Already, our phones replace gadgets and enable services. Why carry around a wallet when your phone can provide Apple Pay, Google Pay or make an electronic payment? Who needs to carry car keys when your phone can unlock and start your car or open your garage door? Applications now also include live streaming services that enable VR/AR experiences and sharing in real time. While future services and applications seem unlimited to the imagination, they require next-generation data infrastructure to support and facilitate them.

  • June 03, 2020

    Matt Murphy Shares Marvell’s Transformation Journey and Market Insights on CNBC’s Mad Money

    By Stacey Keegan, Vice President, Corporate Marketing, Marvell

    Marvell President and CEO, Matt Murphy, sat down with Jim Cramer of CNBC’s Mad Money for a virtual chat about Marvell’s focus on data infrastructure opportunities. Jim opened the segment congratulating Matt on a spectacular quarter and the company’s 25th anniversary: “This is not the Marvell of 25 years ago or even 25 months ago.”

    With the company’s new brand identity on display, Matt explained: “It’s been a great 25-year celebration of the company this year. We’ve gone through a pretty substantial transformation really over the last 3-4 years…we set out to transform the company into a long-term player with a real focus around what we viewed as the data infrastructure opportunity and, as you can see, as the strategy has played out over the last few years it’s extremely relevant in today’s environment, with the major growth drivers of the company now being 5G, cloud, segments of enterprise and automotive.”

  • May 21, 2020

    Essential technology, done right – introducing the new Marvell

    By Chris Koopmans, EVP of Marketing and Business Operations


    Our transformation
    Today we launched Marvell’s new brand identity. In many ways it’s long overdue – as it represents the transformation journey we have been on over the past four years and reflects the new company we have already become.

    Often when a company embarks on a major change, one of the first things they focus on is their external image. They update their logo, web page, and collateral to resemble the brand they aspire to, and then work to make reality match the aspiration. But in our industry, a company’s brand is their reputation, and a reputation is earned. That’s why, at Marvell, we started with the hard part – we established our new strategy, transformed our business and revamped our culture first, and now we are revealing a new brand that reflects who we are. I believe that signifies our culture – focus on the substance first.

    I joined Marvell four years ago – just a few weeks prior to Matt taking over as CEO – so I’ve been on this journey every step of the way. I’ve had the privilege of holding a variety of leadership roles through the transformation, giving me a unique perspective on the company we have become. So, when Matt came to me last year with a new challenge – to lead our Marketing team through a rebuild of Marvell’s external image – I was thrilled about the opportunity.

  • April 02, 2020

    Taking care of our People and our Communities

    By Matt Murphy, Chairman and Chief Executive Officer, Marvell


    The world as we knew it just a few months ago may never be quite the same again. This pandemic has created an extraordinary crisis that will test our systems and our values alike. It also reminds us of what we hold most dear – our personal connections with other people. As billions of us shelter in our homes around the clock – working, exercising, educating, and sharing the same space - it is an opportunity to appreciate quality time with our immediate family. But we are also missing our connections with loved ones who aren’t under the same roof, and worry about the health and safety of our more vulnerable relatives. We miss seeing our colleagues at work – not on a computer screen, but in hallways and over lunch. And we miss seeing friends in our communities, enjoying a meal at our favorite restaurant, shopping at the farmers market, and hiking in our now shuttered parks.

  • March 31, 2020

    Matt Murphy Shares Market Insights on CNBC’s Squawk Alley

    By Stacey Keegan, Vice President, Corporate Marketing, Marvell

    Marvell President and CEO, Matt Murphy, joined Jim Cramer and the team of CNBC’s Squawk Alley to talk about the impact of COVID-19 on the semiconductor market, including 5G, the broader business community and the company’s unwavering mission.

    Matt shared his thoughts on the critical role that semiconductor technology plays in the current world crisis, particularly given the excess load that has been put on every nation’s data infrastructure due to remote work. And, how semiconductors are the essential building blocks of the networks of the world – from 5G to cloud infrastructure to advanced interconnect products.

    “Companies are re-thinking their workforce and footprint, and 5G is a critical part of that. 5G will play a key role not only because of the bandwidth and improved data rates, but the lower latency and improved reliability of that network, I think, will be a big deal for remote work and different ways of working in the future.”

  • March 17, 2020

    The Next Generation of ThunderX Delivers Performance and Power Advantages to Cloud and HPC Server Markets

    By Gopal Hegde, Vice President and General Manager of the Server Processor Business Unit, Marvell

    The data centers of today have shifted from a focus on single thread performance to performance at rack scale with performance/watt, performance/$ and overall TCO being the key drivers to deployment. These data centers are making use of servers that are customized for specific workloads. The applications running on these servers are either based on open source software or controlled by the customers who deploy them. Marvell’s ThunderX2® server processor is a leading example of this evolution in the server market with deployments spanning across the cloud and HPC market segments with major customers like Microsoft Azure and the Astra Top 500 Supercomputer installation at Sandia National Laboratories.

  • February 25, 2020

    A Blueprint for Data Encryption in Distributed and Hybrid IT

    By Avishai Ziv, General Manager of Security Solutions Business Unit, Marvell

    Given a choice, few enterprises will change their security solutions and deployments, especially for data encryption. That’s because any change in data encryption can be a painful and daunting task. But in today’s world of growing threats to data security, is there really a choice?

    Risk is High

    Most of the data breaches we read about in the headlines, which have affected hundreds of millions of customers at prominent companies, have involved data that wasn’t encrypted. Unfortunately, such lax security is all too common. A recent McAfee survey of 12,000 companies revealed that only 9% encrypt their data at rest in the cloud, and only 1% use customer-managed encryption keys.

  • January 21, 2020

    HPE Reseller Relies on Marvell FastLinQ & QLogic for I/O Connectivity

    By Graham Forrest, Server and Storage Practice Lead, Enterprise Group

    gforrest@dtpgroup.co.uk
    DTP Group, Leeds UK

    Being a trusted advisor to our customers means making sure what we recommend, configure and install works as expected. That’s why here at DTP, we recommend Marvell® QLogic® Fibre Channel and Marvell FastLinQ® Ethernet I/O for all our server and storage connectivity solutions.

    The DTP Group has over 30 years of experience in delivering technology and solutions to our customers. We only recommend HPE for server, storage and networking solutions because we know the technology, get great support from the HPE organization and trust the HPE brand. Within the HPE portfolio, there are technology choices that need to be considered, especially when it comes to I/O connectivity. HPE makes a variety of Ethernet and Fibre Channel connectivity options available, sourced from several different manufacturers. After evaluating all of them, the DTP PreSales and Technical teams have chosen to standardize on 10/25/50GbE based on Marvell FastLinQ technology and on 16GFC and 32GFC based on Marvell QLogic technology.

  • December 12, 2019

    Marvell and HPE Introduce Industry Standard Adapters for HPE ProLiant and Apollo Gen10 Plus Servers

    By Todd Owens, Field Marketing Director, Marvell

    Innovation can come in many forms. Sometimes it’s with a completely new technology, sometimes by updating an existing product and, in other cases, simply by changing the approach to how you acquire and deliver a product. It is the latter that is the latest innovation from Hewlett Packard Enterprise (HPE) when it comes to I/O connectivity.

    In conjunction with the launch of HPE ProLiant and Apollo Gen10 Plus servers, the HPE Server I/O Options team developed a new approach for sourcing and qualifying Ethernet adapters for these servers. Deploying what they call Industry Standard Adapters, HPE can now better meet the needs of their end customers with an increased number of options when it comes to firmware and driver updates for their Ethernet adapters in HPE servers. 

    Traditionally, HPE would source I/O technology from OEM suppliers such as Marvell, create custom model numbers and specifications for adapters, and make firmware and drivers available only from HPE. These “custom” adapters were often referred to as HPE-optimized. Starting with Gen10 Plus servers, HPE is eliminating the customization and using standard adapters that can work not only in HPE servers, but in others as well. Hence the term “Industry Standard Adapters.”

    Marvell is glad to be a strategic partner of HPE, providing a wide variety of Marvell® FastLinQ® adapters that are fully qualified and supported by HPE on the HPE ProLiant Gen10 Plus servers. Below are the current offerings from Marvell for HPE Gen10 Plus servers.

    HPE Part Number | Model Name  | Product Description
    P08437-B21      | QL41132HLRJ | HPE Ethernet 10Gb 2-port BASE-T QL41132HLRJ Adapter
    P10103-B21      | QL41132HQRJ | HPE Ethernet 10Gb 2-port BASE-T QL41132HQRJ OCP3 Adapter
    P21933-B21      | QL41132HLCU | HPE Ethernet 10Gb 2-port SFP+ QL41132HLCU Adapter
    P08452-B21      | QL41132HQCU | HPE Ethernet 10Gb 2-port SFP+ QL41132HQCU OCP3 Adapter
    P10094-B21      | QL41134HLCU | HPE Ethernet 10GbE 4-port SFP+ QL41134HLCU Adapter
    P22702-B21      | QL41232HLCU | HPE Ethernet 10/25Gb 2-port SFP28 QL41232HLCU Adapter
    P10118-B21      | QL41232HQCU | HPE Ethernet 10/25Gb 2-port SFP28 QL41232HQCU OCP3 Adapter

    With the new approach, HPE Gen10 Plus customers can see the Marvell model numbers in the HPE product description and identify the Marvell vendor and product IDs at server boot. They will then be directed to Marvell for detailed specifications, user guides, technical briefs and even firmware and/or driver downloads. The support model will not change; HPE will continue to provide level 1-3 support. The new approach will benefit HPE customers in a number of ways:

    • Transparency of the specific type of product and model information – eliminates having to use complex decoder aids or other documentation to determine the real manufacturer.
    • Consistency in product/model naming on Operating System HCLs that saves time cross-checking I/O and operating system compatibility.
    • Faster time-to-market for firmware/driver enhancements and updates to get fixes when they are available from the manufacturer.
    • Ability to streamline maintenance process across multi-vendor server environments and get firmware/drivers direct from manufacturer websites.

    For those HPE ProLiant customers who prefer utilizing HPE-specific deployment software and utilities like HPE System Insight Manager (SIM), System Update Manager (SUM) or Service Pack for ProLiant (SPP), HPE will also make firmware and drivers available through their normal quarterly processes.  

    The Marvell portfolio for HPE ProLiant Gen10 Plus includes PCIe and OCP 3.0 form factor adapters in 1/10GBASE-T, 10Gb SFP+ and 10/25GbE SFP28 variants. All these adapters support Marvell’s industry-leading list of features and capabilities, including:

    • Universal RDMA: allows customers to run RDMA over Converged Ethernet (RoCE) or iWARP RDMA concurrently.
    • SmartAN™ technology: automatically sets bandwidth and forward error correction settings of the adapter to match the switch it is connected to and the cable type being used.
    • Redfish PLDM: provides integrated remote monitoring and management of thermal environment and other adapter data.
    • Marvell FastLinQ QCS Management Utility: available as GUI, CLI, VMware plug-in or PowerShell Kit for maximum flexibility.
    • DPDK Offload: up to 68Mpps bi-directional small packet acceleration.
    • SR-IOV offload for virtual server environments.
    • Tunnel Offloads: VXLAN, NVGRE, GENEVE.
    • NVMe over fabric: for high performance SDS/HCI or shared storage connectivity.
    • IEEE 1588 and Energy Efficient Ethernet
    • PCIe 3.0 x8 and OCP 3.0 form factors

    For more details on Marvell’s FastLinQ adapters, download the family product brief here.

    For more information on Universal RDMA, SmartAN technology and other unique Marvell FastLinQ capabilities, visit our Follow the Wire video library here: https://connect.marvell.com/hpe-videos.

  • November 05, 2019

    Marvell Completes Acquisition of Avera Semi

    By Stacey Keegan, Vice President, Corporate Marketing, Marvell


    Today marks the close of the acquisition of Avera Semi.

    Avera brings over two decades of expertise developing custom ASIC solutions for the infrastructure market, further enabling Marvell to offer a full suite of leading semiconductor solutions. With this acquisition, Marvell will provide the complete spectrum of product architectures, spanning standard and semi-custom to full ASIC solutions. We are proud to offer world class custom ASIC design services to our OEM partners.

    To learn more, read our latest press release:

    https://www.marvell.com/company/news/pressDetail.do?releaseID=11497.

  • September 19, 2019

    Marvell Completes Acquisition of Aquantia

    By Stacey Keegan, Vice President, Corporate Marketing, Marvell

    Marvell today announced that it has successfully completed its acquisition of Aquantia.

    Aquantia pioneered Multi-Gig technology – now the basis for high speed networking in a broad range of applications from enterprise campuses to autonomous cars.  Their portfolio complements Marvell’s industry-leading PHYs, switches and processors, creating an unparalleled networking platform and enabling customers to develop systems that span megabits to terabits per second. To learn more, read our latest press release https://www.marvell.com/company/news/pressDetail.do?releaseID=11257.

  • September 16, 2019

    Marvell’s Advanced Wireless Technology Among First to be Wi-Fi CERTIFIED 6™

    By Prabhu Loganathan, Senior Director of Marketing for Connectivity Business Unit, Marvell

    Wi-Fi Alliance®, the industry alliance responsible for driving certification efforts worldwide to ensure interoperability and standards for Wi-Fi® devices, today announced Wi-Fi CERTIFIED 6™, the industry certification program based on the IEEE 802.11ax standard. Marvell’s 88W9064 (4x4) and 88W9068 (8x8) Wi-Fi 6 solutions are among the first to be Wi-Fi 6 certified and have been selected for inclusion in the Wi-Fi Alliance interoperability test bed.

    Wi-Fi CERTIFIED 6™ ensures interoperability and an improved user experience across all devices running IEEE 802.11ax technology. Wi-Fi 6 benefits both the 5 and 2.4 GHz bands, incorporating major fundamental enhancements like Multi-User MIMO, OFDMA, 1024-QAM, BSS coloring and Target Wake Time. Wi-Fi CERTIFIED 6 delivers faster speeds with low latency, high network utilization, and power saving technologies that provide substantial benefits spanning all the way from high density enterprises to battery operated low power IoT devices.

    Marvell played a leading role in shaping Wi-Fi 6 and enabling Wi-Fi CERTIFIED 6 to ensure seamless interoperability and drive rapid adoption in the marketplace. Wi-Fi Alliance forecasts that over 1.6 billion devices supporting Wi-Fi 6 will ship worldwide by 2020. Marvell is at the forefront of this wave, enabling our Wi-Fi CERTIFIED 6 products to be designed into exciting new offerings spanning the infrastructure access, premium client and automotive markets.

    For more information, you can visit www.marvell.com/wireless.

  • August 28, 2019

    Women Leaders Assemble in Silicon Valley to Talk About Gender Equality in Business

    By Stacey Keegan, Vice President, Corporate Marketing, Marvell

    Monday, August 26, marked the anniversary of the Nineteenth Amendment being signed into law, granting American women the constitutional right to vote. To commemorate the impending 100-year celebration, the Silicon Valley Leadership Group kicked off a series of organized events meant to honor women trailblazers and issue a call to women leaders from all walks of life. Empowering women to be leaders is built into Marvell’s core values around diversity and equality, and the company was a proud sponsor of Monday’s event.

    The day-long series of panel discussions celebrating the achievements of women since the suffrage movement took place at NASA’s Ames Research Center in Mountain View and was filled with some of the most accomplished women in Silicon Valley and the country. 

    Monday’s event, featuring House Speaker Nancy Pelosi, the highest-ranking woman in U.S. politics and second in line to the presidency; Democratic Congresswoman Anna Eshoo, whose district includes Moffett Field; and NASA astronaut Megan McArthur, focused more broadly on women's leadership and the state of gender equality in business. 

    The day included a panel called Women in Innovation and another named Women Leaders Across Generations, each moderated and filled with some of the top women at local and national companies like Silicon Valley Bank, Ripple, eBay, AT&T, Genentech, NASA, Marvell, Micron, and the San Francisco 49ers. Marvell’s Chief Compliance Officer, Regan MacPherson, moderated the latter during which science and tech executives used their own experiences to frame a conversation on how far women have come and how far they still must go.

  • July 25, 2019

    Marvell Recognized for Arm-based Server CPU Leadership

    By Kumar Sankaran, Senior Director, Product Management, Marvell

    Marvell has been selected as the leader in Arm®-based server CPUs in IT Brand Pulse's 2019 IT-professional vote. In a clean sweep across all categories, Marvell’s ThunderX® was voted the leader in market, price, performance, reliability, service and support, and innovation. The results are based on an independent, non-sponsored survey of IT professionals on server products. The survey is conducted once a year by IT Brand Pulse, a trusted source for research, data and analysis about data center infrastructure.

    Marvell® ThunderX® processors, based on the Armv8-A architecture, bring industry-leading compute and memory performance as well as technology innovation backed by a rich ecosystem of more than 70 partners. As the most widely supported and deployed Arm-based server processor in the world, Marvell ThunderX processors power High-Performance Computing, Cloud and Edge applications. 


  • June 25, 2019

    Marvell Powers HPE Servers with Next-Generation Ethernet Technology

    By Todd Owens, Field Marketing Director, Marvell

    Data is the new currency for many businesses today.  The ability to access, analyze and act on data has become a competitive advantage for many companies.  While much attention is paid to the storage devices and compute required to optimize data processing, the I/O infrastructure is often overlooked.  The reality is that I/O technologies are just as important as the core count of the CPU or the capacity and latency of the storage array.  Now is the time to future-proof your network, and Marvell is here to help. 

    Marvell has been a long-time Hewlett Packard Enterprise (HPE) supplier of I/O technology used in the HPE ProLiant, Apollo, HPE Synergy, and HPE Storage offerings.  Over the past year, a new generation of Ethernet I/O has begun making its way into these HPE platforms.  Based on Marvell® FastLinQ® QL41000 and QL45000 Ethernet technology, these new adapters allow HPE customers to future-proof their network connectivity for the data center demands ahead. 

    The QL41000 and QL45000 adapter technology provides several new capabilities not found in other I/O offerings for HPE.  Advancements include:

    • Universal RDMA – provides customers with flexibility through concurrent support for both RDMA over Converged Ethernet (RoCE) and iWARP RDMA protocols on the same adapter.
    • SmartAN™ Technology – enables seamless auto-negotiation between 10GbE and 25GbE connections. SmartAN automatically configures the adapter's link speed and error correction based on the cabling configuration and the switch port settings it is connected to.
    • Storage Offload – Converge Network Adapter offerings include full hardware offload for iSCSI and FCoE protocols which greatly reduces the CPU resources required for transmitting storage traffic compared to the use of software initiators.

      These capabilities come in addition to enhanced DPDK performance (up to 36Mpps bi-directional) and support for SR-IOV, TCP/IP stateless offloads, IEEE 1588 time stamping and more.
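    The SmartAN behavior described above, selecting a speed and error-correction mode from the cable and switch-port conditions, can be illustrated with a toy decision function. The cable categories, FEC names and mapping here are simplified assumptions for illustration, not Marvell's actual negotiation logic:

```python
# Simplified sketch of SmartAN-style link configuration: pick link speed and
# forward error correction (FEC) from the detected cable type and the speeds
# the switch port advertises. This is an illustrative toy, NOT the real
# firmware algorithm; cable names and FEC mapping are assumptions.

def configure_link(cable, partner_speeds):
    """Return (speed_gbe, fec) for a cable type and advertised speeds."""
    if 25 in partner_speeds:
        # Longer DAC copper at 25GbE typically needs stronger RS-FEC;
        # short DAC can use lighter FC-FEC; clean optical links may need none.
        fec = {"dac_5m": "rs-fec", "dac_3m": "fc-fec", "optical": "none"}.get(cable, "rs-fec")
        return (25, fec)
    # Fall back to 10GbE, which does not require FEC
    return (10, "none")

print(configure_link("dac_3m", {10, 25}))  # (25, 'fc-fec')
print(configure_link("dac_5m", {10}))      # (10, 'none')
```

    The point of the real feature is that this decision happens automatically in the adapter, so administrators do not have to match FEC settings by hand on both ends of the link.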

    The FastLinQ 41000 series technology can be found in next-generation Flexible LOM Rack (FLR) and standup PCIe adapters for HPE ProLiant and Apollo servers. Models include:

    • HPE Ethernet 10Gb 521-T Adapter
    • HPE Ethernet 10Gb 524SFP+ Adapters
    • HPE Ethernet 10/25Gb 621SFP28 Adapter
    • HPE Ethernet 10/25Gb 622FLR CNA
    • HPE StoreFabric 10Gb CN1200-T CNA
    • HPE StoreFabric 10/25Gb CN1300R CNA


    These adapters allow HPE Server customers to future-proof their Rack and Tower servers with RDMA for use in Hyper-Converged Infrastructure (HCI) and Software Defined Storage (SDS) solutions; and make the transition from 10GbE to 25GbE connectivity seamless at the server.  These I/O devices are ideal for customers considering Microsoft Azure Stack HCI or VMware vSAN environments, or the deployment of any latency sensitive application. 

    The FastLinQ 45000 series technology can be found in next-generation mezzanine adapters for HPE Synergy, including:

    • HPE Synergy 10/20/25Gb 4820C Adapter
    • HPE Synergy 25/50GbE 6810C Adapter


    With Universal RDMA support, improved DPDK performance and high-bandwidth capability, these adapters are ideal for customers with VMware ESXi or Microsoft Hyper-V deployments, and for Telco or high-frequency trading applications. 

    Since many applications today will start to require more I/O performance and low latency RDMA, HPE’s next-gen Ethernet adapters will go a long way in future proofing networking connectivity for server customers. 

    For a complete list of Marvell FastLinQ Ethernet adapters for HPE Servers and the features they support, download our HPE FastLinQ Ethernet Quick Reference guide.  If you would like to discuss I/O technology or customer needs in more detail, contact our HPE team.  You can also visit the Marvell HPE microsite at www.marvell.com/hpe.

  • June 20, 2019

    Marvell’s ThunderX2 Server Ecosystem Expands with NVIDIA GPU and Software Support

    By Larry Wikelius, Vice President, Ecosystem and Partner Enabling, Marvell


    ISC High Performance, which just wrapped up today in Frankfurt, Germany, is one of the most significant server events of the year and is often a catalyst for major industry announcements.  This year’s event was no exception, with NVIDIA announcing its support for servers based on the Arm architecture.  With this move, NVIDIA will make its full stack of AI and high-performance computing software available to the Arm ecosystem by the end of 2019.  The stack includes all NVIDIA CUDA-X AI and HPC libraries, GPU-accelerated AI frameworks and software development tools such as PGI compilers with OpenACC support and profilers. NVIDIA’s full software suite support will enable the acceleration of more than 600 HPC applications and AI frameworks on Marvell® ThunderX2® systems.

    NVIDIA’s support for Arm CPUs marks continued growth of the Arm-based server ecosystem.  Marvell has been a leading driver in the establishment of a standard, complete and competitive ecosystem around the Arm architecture, ranging from low-level firmware through system software to commercial ISV applications.  The Marvell ThunderX2 processor is the most widely deployed Arm server in the market today and the only Arm server on the prestigious TOP500 supercomputer list, with the Astra system at Sandia National Laboratories. 

    NVIDIA’s announcement underscores the growing momentum of Marvell ThunderX2 in both high-performance computing and cloud deployments.  The entire industry is very excited about the ability to combine the computational performance and memory bandwidth of ThunderX2 with the parallel processing capabilities of the GPU.  NVIDIA’s commitment to the complete software stack is particularly important and is yet another high value solution option in the broadly supported software offering on ThunderX2.  Most ThunderX2 systems have been designed with GPU support in mind from the beginning which enables a simple upgrade for today’s installed base. 

    Marvell welcomes NVIDIA to the ThunderX2 ecosystem, and we look forward to working with customers on this exciting server solution.  See the press release here.

    Read more about what the industry is saying about the announcement, and what this means to high-performance computing:

    Forbes – NVIDIA Gives Arm a Boost In AI And HPC 

    The Next Platform - Nvidia Makes Arm A Peer To X86 And Power For GPU Acceleration 

    HPCwire - Nvidia Embraces Arm, Declares Intent to Accelerate All CPU Architectures

  • June 11, 2019

    Matt Murphy Talks Marvell Transformation, Four Deals and 5G with Jim Cramer of CNBC’s Mad Money

    By Stacey Keegan, Vice President, Corporate Marketing, Marvell

    Marvell President and CEO Matt Murphy joined Jim Cramer, host of the widely acclaimed finance television program, Mad Money, for an engaging discussion on where Marvell has been and where the company is headed. 

    Jim was interested in learning more about the four deals that Marvell announced over the last month including the acquisitions of Aquantia and Avera, and the 5G growth opportunities moving forward. 

    In his interview with Jim, Matt highlighted the key technologies needed to play in the infrastructure market—processors, networking, storage and security—all of which Marvell has.  Aquantia helps Marvell strengthen its move to the connected car where a best-in-class network is needed to address the trends in autonomous vehicles, electrification, and safety/security, and the shift from analog interfaces to Ethernet technology. 

    With the acquisition of Avera, Marvell essentially doubles down on 5G, Matt explains.  He emphasizes that the 5G cycle is just beginning —“not even in the first inning yet”— with the build-out of this infrastructure starting now and continuing robustly through 2020.  Avera’s biggest end-market exposure is base stations, and Avera will enable a new custom chip design business for Marvell. 

    To hear more from Matt and Jim’s discussion, and learn about how Marvell is at the infrastructure epicenter to address 5G, the cloud, AI, enterprise hardware and the connected car, click the below video.

     

     Marvell Technology CEO: We are 'extremely well positioned' for 5G from CNBC.

  • June 10, 2019

    Marvell Teams with Deloitte to Give Back to the Community

    By Stacey Keegan, Vice President, Corporate Marketing, Marvell

    The Marvell Finance team, in partnership with Deloitte, has undergone a significant transformation over the past three years to ensure the delivery of error-free financial reporting and analytics for the business. After successfully completing another 10Q filing on June 7, the Marvell Finance team and Deloitte celebrated by giving back to the local community and volunteering at the Salvation Army. Proceeds from donated goods sold in Salvation Army Family Stores fund the Salvation Army’s substance abuse program in downtown San Jose. The program provides housing, work preparedness and rehabilitation free of charge to registered participants. 

    This event demonstrates Marvell’s commitment to enriching the communities where we live and work. Well done, team. 

    Marvell Teams with Deloitte

  • June 06, 2019

    Marvell Supports GSA Women’s Leadership Initiative – Join Us!

    By Regan MacPherson, Chief Compliance Officer

    Women today comprise 47% of the overall workforce; however, only 15% choose engineering.  As part of Marvell’s commitment to diversity and inclusion in the workplace, the company is proud to support the GSA Women’s Leadership Initiative (WLI) to make an impact for women in STEM moving forward.

    The GSA WLI seeks to significantly grow the number of women entering the semiconductor industry and increase the number of women on boards and in leadership positions.

    As part of the initiative, which was announced yesterday, the GSA has established the WLI Council, which will create and implement programs and projects to meet the WLI objectives.  The WLI Council harnesses the leadership of women who have risen to the top ranks of the semiconductor industry.  Marvell’s own chief financial officer, Jean Hu, alongside 16 other women executives, will use their experiences to provide inspiration for and sponsorship of the next generation of female leaders.  

     “I am honored to be amongst a highly talented and diverse group of women at GSA WLI Council to help ensure that women are an integral part of the leadership of the semiconductor industry,” said Jean Hu, CFO of Marvell. “Marvell and GSA share a vision to elevate the women in STEM and support female entrepreneurs in their efforts to succeed in the tech industry.”  

     For more information on the GSA WLI, please visit https://www.gsaglobal.org/womens-leadership/. You can also join the Leadership group on LinkedIn to get involved.    

  • May 13, 2019

    FastLinQ® NICs + RedHat SDN

    By Nishant Lodha, Director of Product Marketing – Emerging Technologies, Marvell

    A bit of validation once in a while is good for all of us - that’s true whether you are the one providing it or, conversely, the one receiving it.  Most of the time it seems to be me that is giving out validation rather than getting it.  Like the other day when my wife tried on a new dress and asked me, “How do I look?”  Now, of course, we all know there is only one way to answer a question like that - at least if you want to avoid sleeping on the couch. 

    Recently, the Marvell team received some well-deserved validation for its efforts.  The FastLinQ 45000/41000 series of high-performance Ethernet Network Interface Controllers (NICs) that we supply to the industry, which supports 10/25/50/100GbE operation, is now fully qualified by Red Hat for Fast Data Path (FDP) 19.B.

    Figure 1: The FastLinQ 45000 and 41000 Ethernet Adapter Series from Marvell

    Red Hat FDP is employed in an extensive array of the products found within the Red Hat portfolio - such as the Red Hat OpenStack Platform (RHOSP), as well as the Red Hat OpenShift Container Platform and Red Hat Virtualization (RHV).  FDP qualification means that FastLinQ can now address a far broader scope of open-source Software Defined Networking (SDN) use cases - including Open vSwitch (OVS), Open vSwitch with the Data Plane Development Kit (OVS-DPDK), Single Root Input/Output Virtualization (SR-IOV) and Network Functions Virtualization (NFV).

    The engineers at Marvell worked closely with our counterparts at Red Hat on this project in order to ensure that the FastLinQ feature set would operate in conjunction with the FDP production channel. This involved many hours of complex, in-depth testing.  By being FDP 19.B qualified, Marvell FastLinQ Ethernet Adapters can enable seamless SDN deployments with RHOSP 14, RHEL 8.0, RHEV 4.3 and OpenShift 3.11. 

    Widely recognized as the data networking ‘Swiss Army Knife,’ our FastLinQ 45000/41000 Ethernet adapters benefit from a highly flexible, programmable architecture.  This architecture is capable of delivering up to 68 million small packets per second, provides 240 SR-IOV virtual functions and supports tunneling while maintaining stateless offloads.  As a result, customers have the hardware they need to seamlessly implement and manage even the most challenging network workloads in what is becoming an increasingly virtualized landscape.  Supporting Universal RDMA (concurrent RoCE, RoCEv2 and iWARP operation), unlike most competing NICs, they offer a highly scalable and flexible solution.  Learn more here.


      Validation feels good. Thank you to the Red Hat and Marvell teams!

  • April 01, 2019

    Revolutionizing Data Center Architectures for the New Era in Connected Intelligence

    By George Hervey, Principal Architect, Marvell

    Though established mega-scale cloud data center architectures adequately supported global data demands for many years, a fundamental change is taking place.  Emerging 5G, industrial automation, smart cities and autonomous cars are driving the need for data to be directly accessible at the network edge.  New architectures are needed in the data center to support these new requirements, including reduced power consumption, low latency and smaller footprints, as well as composable infrastructure. 

    Composability provides a disaggregation of data storage resources to bring a more flexible and efficient platform for data center requirements to be met.  But it does, of course, need cutting-edge switch solutions to support it.  Capable of running at 12.8Tbps, the Marvell® Prestera® CX 8500 Ethernet switch portfolio has two key innovations that are set to redefine data center architectures: Forwarding Architecture using Slices of Terabit Ethernet Routers (FASTER) technology and Storage Aware Flow Engine (SAFE) technology. 

    With FASTER and SAFE technologies, the Marvell Prestera CX 8500 family can reduce overall network costs by more than 50%; lower power, space and latency; and determine exactly where congestion issues are occurring by providing complete per flow visibility. 

    View the video below to learn more about how Marvell Prestera CX 8500 devices represent a revolutionary approach to data center architectures.

       

  • April 29, 2019

    RoCE or iWARP for Low Latency?

    By Todd Owens, Field Marketing Director, Marvell

    Today, Remote Direct Memory Access (RDMA) is primarily utilized within high-performance computing or cloud environments to reduce latency across the network.  Enterprise customers will soon require the low-latency networking that RDMA offers so that they can address a variety of different applications, such as Oracle and SAP, and also implement software-defined storage using Windows Storage Spaces Direct (S2D) or VMware vSAN.  There are three protocols that can be used in RDMA deployments: InfiniBand, RDMA over Converged Ethernet (RoCE), and iWARP (RDMA over TCP).  Given that there are several possible routes to go down, how do you ensure you pick the right protocol for your specific tasks? 

    In the enterprise sector, Ethernet is by far the most popular transport technology.  Consequently, we can ignore the InfiniBand option, as it would require a forklift upgrade to the existing I/O infrastructure - thus making it way too costly for the vast majority of enterprise data centers.  So, that just leaves RoCE and iWARP.  Both can provide low-latency connectivity over Ethernet networks.  But which is right for you? 

    Let’s start by looking at the fundamental differences between these two protocols.  RoCE is the most popular of the two and is already being used by many cloud hyper-scale customers worldwide.  RDMA enabled adapters running RoCE are available from a variety of vendors including Marvell. 

    RoCE provides latency at the adapter in the 1-5µs range but requires a lossless Ethernet network to achieve low-latency operation.  This means that the Ethernet switches in the network must support data center bridging and priority flow control mechanisms so that lossless traffic is maintained, and they will likely have to be reconfigured for RoCE.  The challenge with the lossless, or converged, Ethernet environment is that configuration is a complex process and scalability can be very limited in a modern enterprise context.  It is not impossible to use RoCE at scale, but to do so requires the implementation of additional traffic congestion control mechanisms, like Data Center Quantized Congestion Notification (DCQCN), which in turn calls for large, highly experienced teams of network engineers and administrators.  Though this is something hyper-scale customers have access to, not all enterprise customers can say the same; their human resources and financial budgets can be more limited. 
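    The lossless behavior that RoCE depends on comes from priority flow control: rather than dropping frames, a switch asks its link partner to pause a traffic class when that class's buffer fills. A toy model (buffer sizes and thresholds are invented for illustration, and real switches work in hardware per link) sketches the XOFF/XON mechanics:

```python
# Toy model of Priority Flow Control (PFC), the lossless-Ethernet mechanism
# RoCE depends on. When a priority's ingress buffer crosses a high-water mark
# the port sends an XOFF pause frame instead of dropping packets, then an XON
# once the buffer drains. All sizes/thresholds here are made-up illustrations.

class PfcPort:
    def __init__(self, xoff=80, xon=40):
        self.occupancy = {p: 0 for p in range(8)}  # buffer cells queued per priority
        self.paused = set()                        # priorities we asked the peer to pause
        self.xoff, self.xon = xoff, xon

    def enqueue(self, priority, cells):
        self.occupancy[priority] += cells
        if self.occupancy[priority] >= self.xoff:
            self.paused.add(priority)              # emit XOFF pause frame

    def dequeue(self, priority, cells):
        self.occupancy[priority] = max(0, self.occupancy[priority] - cells)
        if priority in self.paused and self.occupancy[priority] <= self.xon:
            self.paused.discard(priority)          # emit XON (resume)

port = PfcPort()
port.enqueue(3, 85)       # RoCE traffic class 3 bursts past the XOFF threshold
print(3 in port.paused)   # True: the peer is paused, nothing is dropped
port.dequeue(3, 50)
print(3 in port.paused)   # False: occupancy fell below XON, traffic resumes
```

    The operational burden the article describes comes from tuning these thresholds consistently across every switch and priority in the fabric; a mis-set XOFF on one hop can stall or drop RoCE traffic fabric-wide.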

    Going back through the history of converged Ethernet environments, one must look no further than Fibre Channel over Converged Ethernet (FCoE) to see the size of the challenge involved.  Five years ago, many analysts and industry experts claimed FCoE would replace Fibre Channel in the data center.  That simply didn’t happen because of the complexity associated with using converged Ethernet networks at scale.  FCoE still survives, but only in closed environments like HPE BladeSystem or HPE Synergy servers, where the network properties and scale are carefully controlled.  These are single-hop environments with only a few connections in each system. 

    Finally, we come to iWARP.  This came on the scene after RoCE and has the advantage of running on today’s standard TCP/IP networks.  It provides latency at the adapter in the range of 10-15µs.  This is higher than what one can achieve by implementing RoCE but is still far below that of standard Ethernet adapters. 

    They say if all you have is a hammer, then everything looks like a nail.  The same is true when it comes to vendors touting their RDMA-enabled adapters.  Most vendors only support one protocol, so of course that is the protocol they will recommend.  Here at Marvell, we are unique in that with our Universal RDMA technology, a customer can use both RoCE and iWARP on the same adapter.  This gives us more credibility when making recommendations and means that we are effectively protocol agnostic.  This really matters from a customer standpoint, as it means we look for the best fit for their application criteria. 

    So which RDMA protocol do you use when?  Well, when latency is the number one criterion and scalability is not a concern, the choice should be RoCE.  You will see RoCE implemented as the back-end network in modern disk arrays, between the controller node and NVMe drives.  You will also find RoCE deployed within a rack, or where there are only one or two top-of-rack switches and subnets to contend with.  Conversely, when latency is a key requirement but ease of use and scalability are also high priorities, iWARP is the best candidate.  It runs on the existing network infrastructure and can easily scale between racks and even long distances across data centers.  A great use case for iWARP is as the network connectivity option for Microsoft Storage Spaces Direct implementations. 
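    The guidance above boils down to a simple decision rule. This helper is just an informal paraphrase of the article's criteria, not an API from any product:

```python
# Informal paraphrase of the article's RoCE-vs-iWARP selection guidance.
# Parameter names are invented for illustration, not a real product API.

def choose_rdma_protocol(latency_critical, needs_scale_or_simplicity):
    """Pick RoCE when latency is paramount and the scope is small (in-rack,
    array back-end networks); pick iWARP when scalability and ease of use on
    the existing TCP/IP network also matter."""
    if latency_critical and not needs_scale_or_simplicity:
        return "RoCE"
    return "iWARP"

# Back-end network between an array controller and its NVMe drives:
print(choose_rdma_protocol(latency_critical=True, needs_scale_or_simplicity=False))  # RoCE
# Storage Spaces Direct spanning racks over the existing network:
print(choose_rdma_protocol(latency_critical=True, needs_scale_or_simplicity=True))   # iWARP
```

    With a Universal RDMA adapter, the same hardware serves either branch of this decision, which is the flexibility being described.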

    The good news for enterprise customers is that several Marvell® FastLinQ® Ethernet Adapters from HPE support Universal RDMA, so they can take advantage of low-latency RDMA in the way that best suits them.  Here’s a list of HPE Ethernet adapters that currently support both RoCE and iWARP RDMA.

    With RDMA-enabled adapters for HPE ProLiant, Apollo, HPE Synergy and HPE Cloudline servers, Marvell has a strong portfolio of 10GbE and 25GbE connectivity solutions for data centers.  In addition to supporting low-latency RDMA, these adapters are also NVMe-ready.  This means they can accommodate NVMe over Ethernet fabrics running RoCE or iWARP, as well as supporting NVMe over TCP (with no RDMA).  They are a great choice for future-proofing the data center today for the workloads of tomorrow. 

    For more information on these and other Marvell I/O technologies for HPE, go to www.marvell.com/hpe.

    If you’d like to talk with one of our I/O experts in the field, you’ll find contact info here.

  • March 06, 2019

    Composable Infrastructure: An Exciting New Prospect for Ethernet Switching

    By George Hervey, Principal Architect, Marvell

    The data center networking landscape is set to change dramatically.  More adaptive and operationally efficient composable infrastructure will soon start to see significant uptake, supplanting the traditional inflexible, siloed data center arrangements of the past and ultimately leading to universal adoption. 

    Composable infrastructure takes a modern software-defined approach to data center implementations.  This means that rather than having to build dedicated storage area networks (SANs), a more versatile architecture can be employed through utilization of NVMe and NVMe-over-Fabrics protocols. 

    Whereas previously data centers had separate resources for each key task, composable infrastructure enables compute, storage and networking capacity to be pooled together, with each function being accessible via a single unified fabric.  This brings far greater operational efficiency levels, with better allocation of available resources and less risk of over provisioning --- critical as edge data centers are introduced to the network, offering solutions for different workload demands. 

    Composable infrastructure will be highly advantageous to the next wave of data center implementations, though the increased degree of abstraction it introduces presents certain challenges, mainly in dealing with acute network congestion, especially in multiple-host scenarios. Serious congestion issues can occur, for example, when several hosts attempt to retrieve data from a particular part of the storage resource simultaneously.  Such problems will be exacerbated in larger-scale deployments, where there are several network layers to consider and the degree of visibility is thus more restricted. 

    There is a pressing need for a more innovative approach to data center orchestration.  A major streamlining of the network architecture will be required to support the move to composable infrastructure, with fewer network layers involved, thereby enabling greater transparency and resulting in less congestion. 

    This new approach will simplify data center implementations, thus requiring less investment in expensive hardware, while at the same time offering greatly reduced latency levels and power consumption. 

    Further, the integration of advanced analytical mechanisms is certain to be of huge value here as well, helping with more effective network management and facilitating network diagnostic activities.  Storage and compute resources will be better allocated to where there is the greatest need, and stranded capacity will no longer be a heavy financial burden. 

    Through the application of a more optimized architecture, data centers will be able to fully embrace the migration to composable infrastructure.  Network managers will have a much better understanding of what is happening right down at the flow level, so that appropriate responses can be deployed in a timely manner.  Future investments will be directed to the right locations, optimizing system utilization.

  • February 20, 2019

    NVMe/TCP - Simplicity is the Key to Innovation

    By Nishant Lodha, Director of Product Marketing – Emerging Technologies, Marvell

    Whether it is the aesthetics of the iPhone or a work of art like Monet’s ‘Water Lilies’, simplicity is often a very attractive trait. I hear this resonate in everyday examples from my own life - with my boss at work, whose mantra is “make it simple”, and my wife of 15 years telling my teenage daughter “beauty lies in simplicity”. For the record, both of these statements generally fall upon deaf ears. 

    The Non-Volatile Memory Express (NVMe) technology that is now driving the progression of data storage is another place where the value of simplicity is starting to be recognized, in particular with the advent of the NVMe-over-Fabrics (NVMe-oF) topology that is just about to start seeing deployment. The simplest and most trusted of fabrics, namely Transmission Control Protocol (TCP), has now been confirmed as an approved NVMe-oF standard by the NVMe Group[1].

    Figure 1: All the NVMe fabrics currently available

    Just to give a bit of background information here, NVMe basically enables the efficient utilization of flash-based Solid State Drives (SSDs) by accessing it over a high-speed interface, like PCIe, and using a streamlined command set that is specifically designed for flash implementations. Now, by definition, NVMe is limited to the confines of a single server, which presents a challenge when looking to scale out NVMe and access it from any element within the data center. This is where NVMe-oF comes in. All Flash Arrays (AFAs), Just a Bunch of Flash (JBOF) or Fabric-Attached Bunch of Flash (FBOF) and Software Defined Storage (SDS) architectures will each be able to incorporate a front end that has NVMe-oF connectivity as its foundation. As a result, the effectiveness with which servers, clients and applications are able to access external storage resources will be significantly enhanced. 

    A series of ‘fabrics’ have now emerged for scaling out NVMe.  The first of these was Ethernet Remote Direct Memory Access (RDMA), in both its RDMA over Converged Ethernet (RoCE) and Internet Wide-Area RDMA Protocol (iWARP) derivatives.  It was followed soon after by NVMe-over-Fibre-Channel (FC-NVMe), and then by fabrics based on FCoE, InfiniBand and Omni-Path. 

    But with so many fabric options already out there, why is it necessary to come up with another one? Do we really need NVMe-over-TCP (NVMe/TCP) too? Well, RDMA-based NVMe fabrics (whether RoCE or iWARP) are supposed to deliver the extremely low latency that NVMe requires via a myriad of different technologies - like zero copy and kernel bypass - driven by specialized Network Interface Controllers (NICs). However, there are several factors which hamper this, and these need to be taken into account.

    • Firstly, most of the earlier fabrics (like RoCE/iWARP) have no existing install base for storage networking to speak of (Fibre Channel is the only notable exception). For example, of the 12 million 10GbE+ NIC ports currently in operation within enterprise data centers, less than 5% have any RDMA capability (according to my quick back-of-the-envelope calculations).
    • The most popular RDMA protocol (RoCE) mandates a lossless network on which to run (and this in turn requires highly skilled network engineers that command higher salaries). Even then, this protocol is prone to congestion problems, adding to further frustration.
    • Finally, and perhaps most telling, the two RDMA protocols (RoCE and iWARP) are mutually incompatible.

    Unlike any other NVMe fabric, the pervasiveness of TCP is huge - it is absolutely everywhere. TCP/IP is the fundamental foundation of the Internet, and every single Ethernet NIC and network out there supports the TCP protocol. With TCP, availability and reliability are simply not issues that need to be worried about. Extending the scale of NVMe over a TCP fabric seems like the logical thing to do. 

    NVMe/TCP is fast (especially when using Marvell FastLinQ 10/25/50/100GbE NICs, as they have a built-in full offload for NVMe/TCP), it leverages existing infrastructure, and it keeps things inherently simple. That is a beautiful prospect for any technologist, and it is attractive to company CIOs worried about budgets too. 
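    The underlying idea is straightforward: NVMe command capsules become length-delimited records carried on an ordinary TCP byte stream, so any NIC and network that speak TCP can carry them. The sketch below illustrates that framing concept with a deliberately toy header; it is not the actual NVMe/TCP PDU layout defined by the specification:

```python
# Conceptual sketch of NVMe/TCP's simplicity: command capsules are framed as
# length-prefixed records on a TCP byte stream. The header below is a TOY
# format for illustration only, NOT the real NVMe/TCP PDU layout.

import struct

HDR = struct.Struct("!BI")   # 1-byte opcode, 4-byte payload length (network order)

def encapsulate(opcode, payload):
    """Frame a capsule for transmission on the stream."""
    return HDR.pack(opcode, len(payload)) + payload

def decapsulate(stream):
    """Recover (opcode, payload) from the head of the stream."""
    opcode, length = HDR.unpack(stream[:HDR.size])
    return opcode, stream[HDR.size:HDR.size + length]

READ = 0x02                              # toy opcode, not a real NVMe opcode
pdu = encapsulate(READ, b"LBA=0,count=8")
op, data = decapsulate(pdu)
print(op == READ, data)                  # True b'LBA=0,count=8'
```

    Because TCP already guarantees ordered, reliable delivery, no lossless fabric configuration is needed underneath this framing, which is exactly the operational simplicity being argued for.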

    So, once again, simplicity wins in the long run! 

    [1] https://nvmexpress.org/welcome-nvme-tcp-to-the-nvme-of-family-of-transports/

  • November 06, 2018

    Enhanced Wireless Microcontroller Enables Affordable Design

    By Sree Durbha

    Today, we are in the peak season for technology products, with the releases of the new iPhone models, Alexa-enabled devices and more. In the coming days, numerous international consumer OEMs will be preparing new offerings as we approach the holiday selling season. Along with smartphones, voice-assistant-enabled smart speakers and deep learning wireless security cameras, many devices and appliances are increasingly geared toward automating the home, the office and the factory. These devices are powered by application microcontroller units (MCUs) with embedded wireless connectivity that lets users remotely control and operate them via phone apps, voice or even mere presence. This is part of an industry trend of pushing intelligence into everyday things. According to analyst firm Techno Systems Research1, this chipset market grew by more than 60% over the course of the last year and is likely to continue this high rate of growth. The democratization of wireless connectivity intellectual property and the continuing shift of semiconductor design and development to low-cost regions is helping give rise to new industry players.

    To help customers differentiate in this highly competitive market, Marvell has announced the 88MW320/322 low-power Wi-Fi microcontroller SoC. This chipset is 100% pin-compatible and software-compatible with existing 88MW300/302-based designs. Although the new microcontroller is cost-optimized, it brings several key hardware and software enhancements. Support for extended industrial temperature operation, from -40°C to +105°C, has been added, so unlike its predecessor, the 88MW320/322 can be deployed in more challenging application areas such as LED lighting and industrial automation. No RF-specific changes have been made within the silicon, so the minimum and maximum RF performance parameters remain the same as before. However, other fixes have improved typical RF performance, as reported by some of our customers when evaluating samples. Since there was no change in form, fit or function, the external RF interface also remains the same, enabling customers to leverage existing 88MW300/302 module- and device-level regulatory certifications on the 88MW320/322.

    A hardware security feature has also been incorporated that allows customers to uniquely tie the chipset to the firmware running on it. This helps prevent counterfeit software from running on the chipset.

    The chipset is supported by the industry-leading Marvell EZ-Connect SDK for Apple’s new Advanced Development Kit (ADK) and Release 13 HomeKit Accessory Protocol SDK (R13 HAPSDK) with software-based authentication (SoftAuth), Amazon’s AWS IoT and other third-party cloud platforms. The Apple SoftAuth support now allows customers to avoid the cost and hassle of adding the MFi authentication chip that was previously required for HomeKit certification. On the applications side, we have added support for the Alexa Voice Services library. With MP3 decoder and OAuth2 modules integrated into our SDK, our solution now allows customers to add an external audio codec chipset to offer native voice command translation for basic product control functions. As previously announced, we continue to partner with Dialog Semiconductor to offer BLE support on the 88MW320/322, with shared-antenna managed-coexistence software alongside our Wi-Fi. Several of our module vendor partners have announced support for this chipset in standalone and Wi-Fi + BLE combo configurations; you can find a complete list of supporting modules on the Marvell Wireless Microcontrollers page.

    The 88MW320/322 has been sampling to customers for a few months and is now shipping. The product comes in 68-pin QFN (88MW320) and 88-pin QFN (88MW322) packages, and is available in commercial, extended, industrial and extended industrial temperature ranges, in both tray and tape-and-reel configurations.

    Watch this space for future announcements as we extend the availability of Marvell’s solutions for the smart home, office and factory to our customers through our catalog partners. The goal is to enable our wireless microcontroller solutions with easy-to-install, one-click software that allows smaller customers to use our partner reference designs to develop their own form-factor proof-of-concept designs, with hardware, firmware, middleware, cloud connectivity software, collateral and application support from a single source. This frees up their resources to focus on what matters most to them: application software and differentiation.

    The best is yet to come. As the industry demands solutions with higher levels of integration at ever lower power, enabling wireless products with months or even years of battery life, you can count on Marvell to innovate to meet customer needs. For example, the 802.11ax standard is not just for high-efficiency, high-throughput designs; it also offers provisions for low-power, long-battery-life designs. 20MHz-only channel operation in the 5GHz band and features such as target wake time (TWT), which extends device sleep cycles; dual sub-carrier modulation (DCM), which extends wireless range; and uplink and downlink OFDMA all contribute to making the next generation of devices worth waiting for.

      1. 2017 Wireless Connectivity Market Analysis, August, 2018

  • October 23, 2018

    Marvell Highlights Leadership in Infrastructure Semiconductor Solutions at Investor Day and Nasdaq

    By Marvell PR Team

    Marvell shared its mission and focus on driving the core technology to enable the global network infrastructure at its recent investor day. This was followed up with an appearance at Nasdaq, where Matt Murphy, president and CEO of the company, rang the bell to open the stock exchange. 

    Matt Murphy

    At both of these events in New York City, Marvell shared how far the company has come, where it was going, and reaffirmed its mission: To provide semiconductor solutions that process, move, store and secure the world’s data faster and more reliably than anyone else. 

    The world has become more connected and intelligent than ever, and the global network has also evolved at an astonishing rate. It’s imperative that the semiconductor industry advances even quicker to keep up with these new technology trends and stay relevant. Marvell recognizes that its customers, at the core or on the edge, face the daunting challenge of delivering solutions for this ever-changing world – today.

    With both the breadth and depth of technology expertise, Marvell offers the critical technology elements — storage, Ethernet, Arm® processors, security processors and wireless connectivity — to drive innovation in the industry.  With the Cavium acquisition, the company retains its strong and stable foothold while competing more aggressively and innovating faster to serve customers better.

     Nasdaq Tower

    For Marvell the future isn’t a distant challenge: it is here with us now, evolving at an accelerated pace. Marvell is enabling new technologies such as 5G, disrupting new Flash platform solutions for the data center, revolutionizing the in-car network, and developing new compute architectures for artificial intelligence, to name a few.

    NASDAQ

    Bringing the most complete infrastructure portfolio of any semiconductor company, Marvell is more than ready to continue on its amazing journey, and have its customers and partners alongside it on the cutting-edge—today, tomorrow and beyond.  

  • October 18, 2018

    Looking to Converge? HPE Launches Next Gen Marvell FastLinQ CNAs

    By Todd Owens, Field Marketing Director, Marvell

    Converging network and storage I/O onto a single wire can drive significant cost reductions in the small to mid-size data center by reducing the number of connections required. Fewer adapter ports mean fewer cables, optics and switch ports consumed, all of which reduce OPEX in the data center. Customers can take advantage of converged I/O by deploying Converged Network Adapters (CNAs) that provide not only networking connectivity but also storage offloads for iSCSI and FCoE.

    HPE recently introduced two new CNAs based on Marvell® FastLinQ® 41000 Series technology. The HPE StoreFabric CN1200R 10GBASE-T Converged Network Adapter and HPE StoreFabric CN1300R 10/25Gb Converged Network Adapter are the latest additions to HPE’s CNA portfolio. These are the only HPE StoreFabric CNAs to also support Remote Direct Memory Access (RDMA) technology (concurrently with storage offloads).

    As we all know, the amount of data being generated continues to increase and that data needs to be stored somewhere. Recently, we are seeing an increase in the number of iSCSI connected storage devices in mid-market, branch and campus environments. iSCSI is great for these environments because it is easy to deploy, it can run on standard Ethernet, and there are a variety of new iSCSI storage offerings available, like Nimble and MSA all flash storage arrays (AFAs).

    One challenge with iSCSI is the load it puts on the Server CPU for storage traffic processing when using software initiators –  a common approach to storage connectivity. To combat this, Storage Administrators can turn to CNAs with full iSCSI protocol offload. Offloading transfers the burden of processing the storage I/O from the CPU to the adapter.

    Figure 1: Benefits of Adapter Offloads

    As Figure 1 shows, Marvell-driven testing found that hardware offload in FastLinQ 10/25GbE adapters can reduce CPU utilization by as much as 50% compared to an Ethernet NIC with software initiators. This means less burden on the CPU, allowing you to add more virtual machines per server and potentially reducing the number of physical servers required. A small item like an intelligent I/O adapter from Marvell can provide significant TCO savings.

    Another challenge has been the latency associated with Ethernet connectivity. This can now be addressed with RDMA technology. iWARP, RDMA over Converged Ethernet (RoCE) and iSCSI Extensions for RDMA (iSER) all allow I/O transactions to be performed directly between application memory and the adapter, bypassing the software kernel of the OS. This speeds transactions and reduces overall I/O latency. The result is better performance and faster applications.

    The new HPE StoreFabric CNAs become the ideal devices for converging network and iSCSI storage traffic for HPE ProLiant and Apollo customers. The HPE StoreFabric CN1300R 10/25GbE CNA supports plenty of bandwidth that can be allocated to both the network and storage traffic. In addition, with support for Universal RDMA (support for both iWARP and RoCE) as well as iSER, this adapter provides significantly lower latency than standard network adapters for both the network and storage traffic. 

    The HPE StoreFabric CN1300R also supports a technology Marvell calls SmartAN™, which allows the adapter to automatically configure itself when transitioning between 10GbE and 25GbE networks. This is key because at 25GbE speeds, Forward Error Correction (FEC) can be required, depending on the cabling used. To make things more complex, there are two different types of FEC that can be implemented. To eliminate all the complexity, SmartAN automatically configures the adapter to match the FEC, cabling and switch settings for either 10GbE or 25GbE connections, with no user intervention required.
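    SmartAN's internal negotiation logic is proprietary and runs in the adapter, but the kind of decision table it resolves automatically can be sketched. The sketch below is purely illustrative, following common 25GbE practice (per IEEE 802.3by, longer DAC cables need the stronger Reed-Solomon FEC), not Marvell's actual rules:

```python
# Illustrative only - SmartAN's real negotiation logic is proprietary. This
# models the kind of speed/cable-to-FEC decision an administrator would
# otherwise resolve by hand at 25GbE (longer DACs need the stronger RS-FEC).
def select_fec(speed_gbe, cable_type, dac_length_m=0):
    if speed_gbe == 10:
        return "none"              # FEC is not required at 10GbE
    if cable_type == "dac" and dac_length_m > 3:
        return "rs-fec"            # long 25GbE DACs: Reed-Solomon FEC
    if cable_type == "dac":
        return "fc-fec"            # short 25GbE DACs: lighter BASE-R FEC
    return "rs-fec"                # assumed default for optics

print(select_fec(10, "dac", 5))    # none  - 10GbE link needs no FEC
print(select_fec(25, "dac", 5))    # rs-fec - long 25GbE DAC
```

    The value of SmartAN is that both link partners arrive at a consistent row of this table without anyone typing switch commands.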

    When budget is the key concern, the HPE StoreFabric CN1200R is the perfect choice. Supporting 10GBASE-T connectivity, this adapter connects to existing CAT6A copper cabling using RJ-45 connections. This eliminates the need for more expensive DAC cables or optical transceivers. The StoreFabric CN1200R also supports RDMA protocols (iWARP, RoCE and iSER) for lower overall latency.

    Why converge though? It’s all about a tradeoff between cost and performance. If we do the math to compare the cost of deploying separate LAN and storage networks versus a converged network, we can see that converging I/O greatly reduces the complexity of the infrastructure and can reduce acquisition costs by half. There are also long-term cost savings associated with managing one network instead of two.

    Figure 2: Eight Server Network Infrastructure Comparison

    In this pricing scenario, we are looking at eight servers connecting to separate LAN and SAN environments versus connecting to a single converged environment as shown in Figure 2.

    Table 1: LAN/SAN versus Converged Infrastructure Price Comparison

    The converged environment price is 55% lower than the separate network approach. The downside is the potential storage performance impact of moving from a Fibre Channel SAN in the separate network environment to a converged iSCSI environment. The iSCSI performance can be increased by implementing a lossless Ethernet environment using Data Center Bridging and Priority Flow Control along with RoCE RDMA. This does add significant networking complexity but will improve the iSCSI performance by reducing the number of interrupts for storage traffic.
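    The savings arithmetic can be sketched with hypothetical prices. The figures below are illustrative placeholders, not HPE list prices; the point is how eliminating a second fabric and its per-server ports drives the total down:

```python
# Hypothetical prices (illustrative placeholders, not HPE list prices) to
# show how the port math drives the savings for eight servers: the separate
# design needs a LAN fabric plus an FC SAN fabric; the converged design runs
# one 10/25GbE fabric carrying both traffic types.
def fabric_cost(n_servers, ports_per_server, port_cost, switch_cost):
    return n_servers * ports_per_server * port_cost + switch_cost

separate = fabric_cost(8, 2, 400, 6000) + fabric_cost(8, 2, 900, 20000)  # LAN + FC SAN
converged = fabric_cost(8, 2, 700, 9800)                                 # single converged fabric
print(f"savings: {1 - converged / separate:.0%}")  # → savings: 55%
```

    With these assumed numbers the converged design comes out about 55% cheaper, in line with the comparison in Table 1.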

    One additional scenario for these new adapters is in Hyper-Converged Infrastructure (HCI) implementations. With HCI, software defined storage is used. This means storage within the servers is shared across the network. Common implementations include Windows Storage Spaces Direct (S2D) and VMware vSAN Ready Node deployments. Both the HPE StoreFabric CN1200R BASE-T and CN1300R 10/25GbE CNAs are certified for use in either of these HCI implementations. FastLinQ Technology Certified for Microsoft WSSD and VMware vSAN Ready Node Figure 3: FastLinQ Technology Certified for Microsoft WSSD and VMware vSAN Ready Node 

    In summary, the new CNAs from the HPE StoreFabric group provide high-performance, low-cost connectivity for converged environments. With support for 10Gb and 25Gb Ethernet bandwidths, iWARP and RoCE RDMA, and the ability to automatically negotiate changes between 10GbE and 25GbE connections with SmartAN™ technology, these are the ideal I/O connectivity options for small to mid-size server and storage networks. To get the most out of your server investments, choose Marvell FastLinQ Ethernet I/O technology, which is engineered from the start with performance, total cost of ownership, flexibility and scalability in mind.

    For more information on converged networking, contact one of our HPE experts in the field to talk through your requirements. Just use the HPE Contact Information link on our HPE Microsite at www.marvell.com/hpe.

  • October 17, 2018

    Marvell Demonstrates Edge Computing Powered by AWS Greengrass at Arm TechCon 2018

    By Maen Suleiman, Senior Software Product Line Manager, Marvell and Gorka Garcia, Senior Lead Engineer, Marvell Semiconductor, Inc.

    Thanks to the respective merits of its ARMADA® and OCTEON TX® multi-core processor offerings, Marvell is in a prime position to address a broad spectrum of demanding applications situated at the edge of the network. These applications can serve a multitude of markets that include small business, industrial and enterprise, and will require special technologies like efficient packet processing, machine learning and connectivity to the cloud. As part of its collaboration with Amazon Web Services® (AWS), Marvell will be illustrating the capabilities of edge computing applications through an exciting new demo that will be shown to attendees at Arm TechCon - which is being held at the San Jose Convention Center, October 16th-18th. 

    This demo takes the form of an automated parking lot. An ARMADA processor-based Marvell MACCHIATObin® community board, which integrates the AWS Greengrass® software, is used to serve as an edge compute node. The Marvell edge compute node receives video streams from two cameras that are placed at the entry gate and exit of the parking lot. The ARMADA processor-based compute node runs AWS Greengrass Core; executes two Lambda functions to process the incoming video streams and identify the vehicles entering the garage through their license plates; and subsequently checks whether the vehicles are authorized or unauthorized to enter the parking lot. 

    The first Lambda function runs Automatic License Plate Recognition (OpenALPR) software; it obtains the license plate number and delivers it, together with the gate ID (entry/exit), to a Lambda function running in the AWS® cloud that accesses a DynamoDB® database. The cloud Lambda function reads the DynamoDB whitelist database and determines whether the license plate belongs to an authorized car. This information is sent back to a second Lambda function at the edge of the network, on the MACCHIATObin board, which is responsible for managing the parking lot capacity and opening or closing the gate. This second Lambda function logs edge activity to the AWS Cloud Elasticsearch® service, which works as a backend for Kibana®, an open source data visualization engine. Kibana enables a remote operator to have direct access to information concerning parking lot occupancy, entry gate status and exit gate status. Furthermore, the AWS Cognito service authenticates users for access to Kibana.
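    As a hedged sketch (the demo's actual source has not been published here), the cloud-side verdict logic might look like the following, with a plain Python set standing in for the DynamoDB whitelist table and the verdict simply returned rather than published back over AWS IoT:

```python
# Hedged sketch of the cloud Lambda's decision logic - not the demo's actual
# code. A set of plates stands in for the DynamoDB whitelist table.
WHITELIST = {"7ABC123", "5XYZ789"}   # hypothetical authorized plates

def check_plate(event, context=None):
    """event carries the recognized plate and the gate ID (entry/exit)."""
    plate = event["plate"]
    verdict = "allowed" if plate in WHITELIST else "denied"
    # The real function would publish this back to the edge Lambda on the
    # MACCHIATObin board; here we just return it.
    return {"plate": plate, "gate": event["gate"], "verdict": verdict}

print(check_plate({"plate": "7ABC123", "gate": "entry"}))   # allowed
print(check_plate({"plate": "9BAD000", "gate": "entry"}))   # denied
```

    The edge-side Lambda would consume this verdict and drive the ESPRESSObin-based gate controller accordingly.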

    After the AWS Cloud Lambda function sends the verdict (allowed/denied) to the second Lambda function running on the MACCHIATObin board, this MACCHIATObin Lambda function will be responsible for communicating with the gate controller, which is comprised of a Marvell ESPRESSObin® board, and is used to open/close the gateway as required.

    The ESPRESSObin board runs as an AWS Greengrass IoT device that will be responsible for opening the gate according to the information received from the MACCHIATObin board’s second Lambda function. 

    This demo showcases the capability to run a machine learning algorithm using AWS Lambda at the edge, making the identification process extremely fast. This is possible through the high performance, low-power Marvell OCTEON TX and ARMADA multi-core processors. Marvell infrastructure processors’ capabilities have the potential to cover a range of higher-end networking and security applications that can benefit from the maturity of the Arm® ecosystem and the ability to run machine learning in a multi-core environment at the edge of the network.

    Those visiting the Arm Infrastructure Pavilion (Booth# 216) at Arm TechCon (San Jose Convention Center, October 16th-18th) will be able to see the Marvell Edge Computing demo powered by AWS Greengrass. 

    For information on how to enable AWS Greengrass on Marvell MACCHIATObin and Marvell ESPRESSObin community boards, please visit http://wiki.macchiatobin.net/tiki-index.php?page=AWS+Greengrass+on+MACCHIATObin and http://wiki.espressobin.net/tiki-index.php?page=AWS+Greengrass+on+ESPRESSObin.    

  • August 03, 2018

    Infrastructure Powerhouse: Marvell and Cavium become one!

    By Todd Owens, Field Marketing Director, Marvell

    Marvell and Cavium

    Marvell’s acquisition of Cavium closed on July 6th, 2018 and the integration is well under way. Cavium becomes a wholly-owned subsidiary of Marvell. Our combined mission as Marvell is to develop and deliver semiconductor solutions that process, move, store and secure the world’s data faster and more reliably than anyone else. The combination of the two companies makes for an infrastructure powerhouse, serving a variety of customers in the Cloud/Data Center, Enterprise/Campus, Service Providers, SMB/SOHO, Industrial and Automotive industries.

    infrastructure powerhouse

    For our business with HPE, the first thing you need to know is it is business as usual. The folks you engaged with on I/O and processor technology we provided to HPE before the acquisition are the same people you engage with now. Marvell is a leading provider of storage technologies, including ultra-fast read channels, high performance processors and transceivers that are found in the vast majority of hard disk drive (HDD) and solid-state drive (SSD) modules used in HPE ProLiant and HPE Storage products today.

    Our industry leading QLogic® 8/16/32Gb Fibre Channel and FastLinQ® 10/20/25/50Gb Ethernet I/O technology will continue to provide connectivity for HPE Server and Storage solutions. The focus for these products will continue to be the intelligent I/O of choice for HPE, with the performance, flexibility, and reliability we are known for.

       

    Marvell’s Portfolio of FastLinQ Ethernet and QLogic Fibre Channel I/O Adapters 

    We will continue to provide ThunderX2® Arm® processor technology for HPC servers like the HPE Apollo 70 for high-performance compute applications. We will also continue to provide Ethernet networking technology that is embedded into HPE Servers and Storage today and Marvell ASIC technology used for the iLO5 baseboard management controller (BMC) in all HPE ProLiant and HPE Synergy Gen10 servers.

      iLO 5 for HPE ProLiant Gen10 is deployed on Marvell SoCs


    That sounds great, but what’s going to change over time?

    The combined company now has a much broader portfolio of technology to help HPE deliver best-in-class solutions at the edge, in the network and in the data center. 

    Marvell has industry-leading switching technology from 1GbE to 100GbE and beyond. This enables us to deliver connectivity from the IoT edge, to the data center and the cloud. Our Intelligent NIC technology provides compression, encryption and more to enable customers to analyze network traffic faster and more intelligently than ever before. Our security solutions and enhanced SoC and Processor capabilities will help our HPE design-in team collaborate with HPE to innovate next-generation server and storage solutions.

    Down the road, you’ll see a shift in our branding and where you access info over time as well. While our product-specific brands, like ThunderX2 for Arm, or QLogic for Fibre Channel and FastLinQ for Ethernet will remain, many things will transition from Cavium to Marvell. Our web-based resources will start to change as will our email addresses. For example, you can now access our HPE Microsite at www.marvell.com/hpe . Soon, you’ll be able to contact us at “hpesolutions@marvell.com” as well. The collateral you leverage today will be updated over time. In fact, this has already started with updates to our HPE-specific Line Card, our HPE Ethernet Quick Reference Guide, our Fibre Channel Quick Reference Guides and our presentation materials. Updates will continue over the next few months.

    In summary, we are bigger and better. We are one team that is more focused than ever to help HPE, their partners and customers thrive with world-class technology we can bring to bear. If you want to learn more, engage with us today. Our field contact details are here. We are all excited for this new beginning to make “I/O and Infrastructure Matter!” each and every day.

  • August 03, 2018

    IOPs and Latency

    By Marvell PR Team

    Shared storage performance has significant impact on overall system performance. That’s why system administrators try to understand its performance and plan accordingly. Shared storage subsystems have three components: storage system software (host), storage network (switches and HBAs) and the storage array. 

    Storage performance can be measured at all three levels and aggregated to get to the subsystem performance. This can get quite complicated. Fortunately, storage performance can effectively be represented using two simple metrics: Input/Output operations per Second (IOPS) and Latency. Knowing these two values for a target workload, a user can optimize the performance of a storage system. 

    Let’s understand what these key factors are and how to use them to optimize storage performance.

    What is IOPS?

    IOPS is a standard unit of measurement for the maximum number of reads and writes to a storage device in a given unit of time (e.g. one second). IOPS represents the number of transactions that can be performed, not bytes of data; to calculate throughput, multiply the IOPS number by the block size used in the I/O. IOPS is a neutral measure of performance and can be used in benchmarks where two systems are compared using the same block sizes and read/write mix.
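    That multiplication is simple enough to show directly (a generic illustration, not tied to any particular array or adapter):

```python
# Throughput follows directly from IOPS and block size:
#   throughput (MB/s) = IOPS x block size (KB) / 1024
def throughput_mb_s(iops, block_size_kb):
    return iops * block_size_kb / 1024

# The same 10,000 IOPS means very different throughput at different block sizes:
print(throughput_mb_s(10_000, 4))    # 4 KB blocks  -> 39.0625 MB/s
print(throughput_mb_s(10_000, 64))   # 64 KB blocks -> 625.0 MB/s
```

    This is why an IOPS comparison is only meaningful when both systems are measured at the same block size and read/write mix.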

    What is Latency?

    Latency is the total time between a requested operation completing and the requestor receiving a response. It includes the time spent in all subsystems, and is a good indicator of congestion in the system.


    Find more about Marvell’s QLogic Fibre Channel adapter technology at:

    https://www.marvell.com/fibre-channel-adapters-and-controllers/qlogic-fibre-channel-adapters/

  • June 29, 2018

    Marvell Helps to Bring HD Video Capabilities to New Entry-Level Drone

    By Sree Durbha, Head of Smart-Connected Business, Marvell

    The consumer drone market has expanded greatly over the last few years, with almost 3 million units shipped during 2017. This upward trend is likely to continue. Analyst firm Statista forecasts that the commercial drone business will be worth $6.4 billion annually by 2020, while Global Market Insights has predicted that the worldwide drone market will grow to $17 billion (with the consumer category accounting for $9 billion of that). With new products continually being introduced into what is already an acutely overcrowded marketplace, a differentiated offering is critical to a successful product.

    One of the newest and most exciting entrants into this crowded drone market, Tello, features functionality that sets it apart from rival offerings. Tello is manufactured by Shenzhen-based start-up Ryze Tech, a subsidiary of well-known brand DJI, which is the world’s largest producer of drones and unmanned aerial vehicles (UAVs). With a 13-minute flight time, plus a flight distance of up to 100 meters, this is an extremely maneuverable and compact quadcopter drone. It weighs just 80 grams and can fit into the palm of a typical teenager’s hand (with dimensions of 98 x 92.5 x 41 millimeters). The two main goals of the Tello are fun and education. To that end, smartphone app-based control provides a fun user interface for everyone, including young people. The educational goal is met through an easy-to-program visual layout that lets users write their own code using the comprehensive software development kit (SDK) included in the package. What really distinguishes Tello from other drones, however, is the breadth of its imaging capabilities - and this is where engaging with Marvell has proven pivotal.

    Tello’s original design requirement called for livestreaming 720p MP4-format video from its 5-megapixel image sensor back to the user’s smartphone or tablet, even while traveling at the drone’s maximum speed of 8 meters/second. This called for interoperability testing with a broad array of smartphone and tablet models. Because of the drone’s small size, conserving battery life was a key requirement, which meant the Wi-Fi® subsystem had to offer ultra-low power consumption. Underlying all of this was the singular requirement for a strong wireless connection to be maintained at all times. Finally, as is always the case, Wi-Fi would need to fit within the product’s low bill of materials.

    Initial discussions between the technical teams at Ryze and Marvell revealed a perfect match between the features offered by the Marvell® 1x1 802.11n single-band Wi-Fi system-on-chip (SoC) and the Wi-Fi requirements of the Tello drone project. This chip had already been widely adopted in the market and had established itself as a proven solution for various customer applications, including video transmission in IP cameras, mobile routers and IoT gateways. Ryze chose this chipset, banking on its reliable transmission of high-definition video over the air, exceptional RF performance over range and ultra-low power operation, all at a competitive price point.

    Marvell’s Wi-Fi SoC is a highly integrated, single-band (2.4GHz) IC that delivers IEEE® 802.11b/g/n operation in a single spatial stream (1 SS) configuration. It incorporates a power amplifier (PA), a low noise amplifier (LNA) and a transmit/receive switch. Quality of Service (QoS) is guaranteed through the 802.11e standard implementation. The Wi-Fi SoC’s compliance with the 802.11i security protocol, plus built-in wired equivalent privacy (WEP) algorithms, enables 128-bit encryption of transmitted data, protecting it from interception by third parties. All of these hardware features are supported by Marvell’s robust Wi-Fi software, which includes small-footprint, full-featured Wi-Fi firmware tied to the hardware-level features. Specific features such as infrastructure mode operation were developed to enable the functionality Ryze desired for the Tello.

    Marvell’s industry-leading Wi-Fi technology has enabled an exciting new user experience in the Tello, at a level of sophistication that previously would only have been seen in expensive, professional-grade equipment. Bringing this professional-quality experience to an entry-level drone meant overcoming significant power, performance and cost barriers. As we enter the 802.11ax era of Wi-Fi industry transition, expect Marvell to launch first-to-market, ever more envelope-pushing technological advances such as uplink OFDMA.

  • June 07, 2018

    Versatile New Ethernet Switch Simultaneously Addresses Multiple Industry Sectors

    By Ran Gu, Marketing Director of Switching Product Line, Marvell

    Due to ongoing technological progression and underlying market dynamics, Gigabit Ethernet (GbE) technology with 10 Gigabit uplink speeds is starting to proliferate into the networking infrastructure across a multitude of different applications where elevated levels of connectivity are needed: SMB switch hardware, industrial switching hardware, SOHO routers, enterprise gateways and uCPEs, to name a few. The new Marvell® Link Street™ 88E6393X, which combines a broad array of functionality with scalability and cost-effectiveness, provides a compelling switch IC solution with the scope to serve multiple industry sectors.

    The 88E6393X switch IC incorporates both 1000BASE-T PHY and 10 Gbps fiber port capabilities, while requiring only 60% of the power budget necessitated by competing solutions. Despite its compact package, this new switch IC offers 8 triple-speed (10/100/1000) Ethernet ports plus 3 XFI/SFI ports, and has a built-in 200 MHz microprocessor. Its SFI support means the switch can connect to a fiber module without the need for an external PHY, saving space and bill-of-materials (BoM) cost as well as simplifying the design. It complies with the IEEE 802.1BR port extension standard and can also play a pivotal role in lowering management overhead and keeping operational expenditure (OPEX) in check. In addition, it includes L3 routing support for IP forwarding purposes.

    Adherence to the latest time sensitive networking (TSN) protocols (such as 802.1AS, 802.1Qat, 802.1Qav and 802.1Qbv) enables delivery of the low-latency operation mandated by industrial networks. The 256-entry ternary content-addressable memory (TCAM) allows for real-time deep packet inspection (DPI) and policing of the data content being transported over the network (with access control and policy control lists being referenced). The denial of service (DoS) prevention mechanism is able to detect illegal packets and mitigate the security threat of DoS attacks.
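    To make the TCAM's role concrete, the snippet below models only the matching semantics of such a table in software; the 88E6393X implements this in hardware at line rate, and the entry and default action shown are hypothetical:

```python
# Software model of TCAM matching semantics only - the real 88E6393X does
# this in hardware at line rate. Each entry is (value, mask, action); a
# header field matches when every bit selected by the mask agrees, and the
# first (highest-priority) matching entry wins.
def tcam_lookup(entries, header):
    for value, mask, action in entries:
        if header & mask == value & mask:
            return action
    return "permit"                  # assumed default when nothing matches

# One hypothetical entry: drop packets whose top header byte is 0xFF
entries = [(0xFF000000, 0xFF000000, "drop")]
print(tcam_lookup(entries, 0xFF123456))   # drop
print(tcam_lookup(entries, 0x0A123456))   # permit
```

    The "don't care" bits provided by the mask are what let 256 entries express whole ranges of addresses or protocol fields rather than individual values.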

    The 88E6393X device, working in conjunction with a high performance ARMADA® network processing system-on-chip (SoC), can offload some of the packet processing activities so that the CPU’s bandwidth can be better focused on higher level activities. Data integrity is upheld, thanks to the quality of service (QoS) support across 8 traffic classes. In addition, the switch IC presents a scalable solution. The 10 Gbps interfaces provide non-blocking uplink to make it possible to cascade several units together, thus creating higher port count switches (16, 24, etc.). 

    This new product release features a combination of small footprint, lower power consumption, extensive security and inherent flexibility to bring a highly effective switch IC solution for the SMB, enterprise, industrial and uCPE space.  

  • May 31, 2018

    Why is 802.11ax a “must have” for the connected car?

    By Avinash Ghirnikar

    Imagine motoring along through busy urban traffic in your new connected car, which is learning as it drives, getting smarter, safer and more reliable. Such a car is constantly gathering and generating all kinds of data that is intermittently and opportunistically being uploaded to the cloud. As more cars on the road feature advanced wireless connectivity, this exciting future will become commonplace. However, each car will need to share the network with potentially hundreds of other cars that might be in its vicinity. While such a use case could potentially rely on LTE/5G cellular technology, the costs associated with employing such a “licensed pipe” would be prohibitively expensive. In such situations, the new Wi-Fi® standard 802.11ax, also known as high efficiency wireless (HEW), will be a life saver for the automotive industry.

    The zettabytes of data that cars equipped with a slew of sensors will create in the years to come will all need to be uploaded to the cloud and data centers, enabling next-generation machine learning in order to make driving increasingly safe and predictable in the future. Uploading this data will, of course, need to be done both securely and reliably.

    The car - as an 802.11ax station (STA) - will also be able to upload data to an 802.11ax access point (AP) in the most challenging of wireless environments while sharing the network with other clients. The 802.11ax system will be able to do this via technologies like MU-MIMO and OFDMA (allowing for spatial, frequency and time reuse), which are new innovations that are part of this emerging standard. Today, STAs compete rather than effectively share the network and have to deal with the dreaded “circle of death” while awaiting connectivity. This is because today’s wireless standard can often be in an all-or-nothing binary mode of operation due to constant competition.
When coupled with other upcoming standards like 802.11ai, specifically fast initial link setup (FILS), this vision of cars uploading data to the cloud over Wi-Fi becomes a true reality, even in environments where the car is moving and likely hopping from one AP to another. While this “under the hood” upload use case is greatly enhanced by the 802.11ax standard from an infrastructure perspective, download of software and firmware into connected cars can also be transformed by this same standard. It is well known that the number of processors and electronic control units (ECUs) in car models is expected to increase dramatically. This, in turn, implies that the software/firmware content in these cars will likewise grow at exponential rates. Periodic firmware over-the-air (FOTA) updates will be required and, therefore, having a reliable and robust mechanism to support this will be vital for automobile manufacturers – potentially saving them millions of dollars in relation to servicing costs, etc.  Such is the pace of innovation and technological change these days that this can sometimes happen almost immediately after cars come off the assembly line. Take the example of a parking lot outside an auto plant containing hundreds of brand new cars requiring some of their software to be updated.  Here, too, 802.11ax can come to the rescue by making a mass update more efficient and reliable. This advantage will then carry forward for the rest of the lifespan of each car, since it can never be predicted what sort of wireless connectivity environment these cars will encounter. These could be challenging environments like garages, driveways, and maybe even parking decks. The modulation enhancements that 802.11ax delivers, coupled with MU-MIMO and OFDMA features, will ensure that the most efficient and reliable Wi-Fi pipe is always available for such a critical function. 
Given that a car can easily be on the road for close to a decade, having this functionality built in from day one would be a tremendous advantage and could enable significant cost savings. Again, accompanying technologies like Wake on Bluetooth® Low Energy and Bluetooth Low Energy Long Range will play a pivotal role in ensuring this use case is realized from an overall end-to-end system standpoint. These two infrastructure use cases are likely to be tremendous value-adds for the connected car and can justify the presence of 802.11ax, especially from an automobile manufacturer’s point of view. Consumers are also likely to see significant benefits in their vehicle dashboards, where the mobile APs in their infotainment systems will be able to seamlessly connect to the latest smartphone handsets (which will themselves be 802.11ax capable within the 2019 timeframe). Use cases like Wireless Apple CarPlay®, Wireless Android Auto™ Projection, rear-seat entertainment, wireless cameras, etc. will all be a breeze given the additional 30-40% throughput enhancement in 802.11ax (and the backward compatibility this standard has with previous Wi-Fi standards, allowing such use cases to coexist cooperatively). Just as in homes, the number of Wi-Fi endpoints in cars is proliferating, and 802.11ax is the only well-designed path to supporting an increasing number of endpoints while still providing the best user experience. 802.11ax Release 1 (aka Wave 1) is well on its way to a concrete launch by the Wi-Fi Alliance in the second half of 2019. Silicon vendors are already sampling products - both on the AP and STA/mobile AP side - and interoperability testing is well underway. For wireless system designers at OEMs and their Tier 1 suppliers, the 802.11ax Wi-Fi standard should be a design goal, especially for any product launch set for 2020 and beyond. The time has come to begin future-proofing for the impending arrival of 802.11ax infrastructure.
The days of the wireless technology in your smartphone, home or enterprise and in your car belonging to different generations are long gone. Consumers now demand that their cars be an extension of their home and work environments, and that all of these living spaces function as one. 802.11ax is destined to be one of the key technology pillars that makes such a vision a reality. Marvell has been a pioneer in designing Wi-Fi/Bluetooth combo devices for the automotive market since such devices debuted in cars in 2011. With development beginning almost a decade ago, Marvell’s automotive wireless portfolio has been honed to address key use cases over five generations of products, through close work with OEMs, Tier 1s and Tier 2s. All the technologies needed to achieve the use cases described above have been incorporated into Marvell’s fifth-generation device. Coupled with Marvell’s offering of enterprise-class, high-performance APs, Marvell remains committed to providing the automobile industry and car buyers with the best wireless connectivity experience -- encompassing use cases inside and outside of the car today, and well into the future.
  • May 02, 2018

    Cavium FastLinQ Ethernet Adapters Available for HPE Cloudline Servers

    By Todd Owens, Field Marketing Director, Marvell

    Are you considering deploying HPE Cloudline servers in your hyper-scale environment? If you are, be aware that HPE now offers select Cavium™ FastLinQ® 10GbE and 10/25GbE adapters as options for HPE Cloudline CL2100, CL2200 and CL3150 Gen 10 servers. The adapters supported on the HPE Cloudline servers are shown in Table 1 below.

    Table 1: Cavium FastLinQ 10GbE and 10/25GbE Adapters for HPE Cloudline Servers 

    As today’s hyper-scale environments grow, Ethernet I/O needs go well beyond basic L2 NIC connectivity. Faster processors, increasing scale, high-performance NVMe and SSD storage, and the need for better performance and lower latency have started to shift some of the performance bottlenecks from servers and storage to the network itself. That means architects of these environments need to rethink their connectivity options.

    While HPE already has some good I/O offerings for Cloudline from other vendors, having Cavium FastLinQ adapters in the portfolio increases the I/O features and capabilities available. Advanced features like Universal RDMA, SmartAN™, DPDK, NPAR and SR-IOV from Cavium allow architects to design more flexible and scalable hyper-scale environments.

    Cavium’s advanced feature set provides offload technologies that shift the burden of managing the I/O from the O/S and CPU to the adapter itself. Some of the benefits of offloading I/O tasks include:

    • Lower CPU utilization, freeing up resources for applications or greater VM scalability
    • Accelerated processing of small-packet I/O with DPDK
    • Time saved by automating adapter connectivity between 10GbE and 25GbE
    • Reduced latency through direct memory access for I/O transactions, increasing performance
    • Network isolation and QoS at the VM level, improving VM application performance
    • Reduced TCO through heterogeneous management

    Cavium FastLinQ Adapter and HPE Cloudline Gen10 Server

    To deliver these benefits, customers can take advantage of some or all the advanced features in the Cavium FastLinQ Ethernet adapters for HPE Cloudline. Here’s a list of some of the technologies available in these adapters.

    Advanced Features in Cavium FastLinQ Adapters for HPE Cloudline

    * Source: Demartek findings
    Table 2: Advanced Features in Cavium FastLinQ Adapters for HPE Cloudline

    Network Partitioning (NPAR) virtualizes the physical port into eight virtual functions on the PCIe bus. This makes a dual port adapter appear to the host O/S as if it were eight individual NICs. Furthermore, the bandwidth of each virtual function can be fine-tuned in increments of 500Mbps, providing full Quality of Service on each connection. SR-IOV is an additional virtualization offload these adapters support that moves management of VM to VM traffic from the host hypervisor to the adapter. This frees up CPU resources and reduces VM to VM latency. 
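    As a concrete illustration, here is a minimal sketch of validating a partition plan against the rules just described. This is illustrative Python with invented names, not a vendor API; in practice NPAR is configured through the adapter's firmware and management utilities. The eight-partition limit and 500 Mbps granularity come from the text.

```python
# Illustrative NPAR plan checker (hypothetical helper, not a Cavium/HPE API).

MAX_PARTITIONS = 8   # a dual port adapter appears to the host as eight NICs
STEP_MBPS = 500      # per-partition bandwidth is tuned in 500 Mbps increments

def validate_npar_plan(port_speed_mbps, partition_caps_mbps):
    """Return True if every partition cap is a legal 500 Mbps multiple
    that fits within the physical port speed."""
    if not (1 <= len(partition_caps_mbps) <= MAX_PARTITIONS):
        raise ValueError("an adapter exposes between 1 and 8 partitions")
    for cap in partition_caps_mbps:
        if cap % STEP_MBPS:
            raise ValueError(f"{cap} Mbps is not a {STEP_MBPS} Mbps multiple")
        if cap > port_speed_mbps:
            raise ValueError(f"{cap} Mbps exceeds the {port_speed_mbps} Mbps port")
    return True

# e.g. a 10GbE port split into 2 Gbps, 2 Gbps, 500 Mbps and 500 Mbps partitions
validate_npar_plan(10_000, [2_000, 2_000, 500, 500])
```

    The same kind of per-partition cap is what provides the Quality of Service guarantee on each virtual connection.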

    Remote Direct Memory Access (RDMA) is an offload that routes I/O traffic directly from the adapter to the host memory. This bypasses the O/S kernel and can improve performance by reducing latency. The Cavium adapters support what is called Universal RDMA, which is the ability to support both RoCEv2 and iWARP protocols concurrently. This provides network administrators more flexibility and choice for low latency solutions built with HPE Cloudline servers. 

    SmartAN is a Cavium technology available on the 10/25GbE adapters that addresses issues related to bandwidth matching and the need for Forward Error Correction (FEC) when switching between 10GbE and 25GbE connections. For 25GbE connections, either Reed Solomon FEC (RS-FEC) or Fire Code FEC (FC-FEC) is required to correct the bit errors that occur at higher bandwidths. For the details behind SmartAN technology, you can refer to the Marvell technology brief here.
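    A toy sketch of the decision SmartAN automates for you: at 25GbE some form of FEC must be enabled, while 10GbE links run without it. The function and parameter names below are invented for illustration; the RS-FEC/FC-FEC requirement at 25GbE follows the text.

```python
# Hypothetical sketch of FEC selection when a port flips between 10GbE and 25GbE.

def required_fec(speed_gbps, peer_supports_rs_fec=True):
    """Return the FEC mode a link at this speed needs, or None."""
    if speed_gbps >= 25:
        # Higher signaling rates produce bit errors that FEC must correct;
        # prefer the stronger Reed Solomon FEC when both ends support it.
        return "RS-FEC" if peer_supports_rs_fec else "FC-FEC"
    return None  # 10GbE links do not require FEC
```

    SmartAN performs this matching (plus the underlying auto-negotiation) in hardware, so administrators do not have to reconfigure FEC by hand when moving between switch generations.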

    Support for Data Plane Development Kit (DPDK) offloads accelerates the processing of small-packet I/O transmissions. This is especially important for applications in Telco NFV and high-frequency trading environments.

    For simplified management, Cavium provides a suite of utilities that allow configuration and monitoring of the adapters across all the popular O/S environments, including Microsoft Windows Server, VMware and Linux. Cavium’s unified management suite includes QCC GUI, CLI and vCenter plugins, as well as PowerShell Cmdlets for scripting configuration commands across multiple servers. Cavium’s unified management utilities can be downloaded from www.cavium.com.

    Each of the Cavium adapters shown in Table 1 supports all of the capabilities noted above and is available in stand-up PCIe or OCP 2.0 form factors for use in the HPE Cloudline Gen10 servers. One question you may have is how these adapters compare to other offerings for Cloudline and those offered in HPE ProLiant servers. For that, we can look at the comparison chart in Table 3.

       

    Table 3: Comparison of I/O Features by Ethernet Supplier 

    Given that Cloudline is targeted for hyper-scale service provider customers with large and complex networks, the Cavium FastLinQ Ethernet adapters for HPE Cloudline offer administrators much more capability and flexibility than other I/O offerings. If you are considering HPE Cloudline servers, then you should also consider Cavium FastLinQ as your I/O of choice.

  • April 30, 2018

    ARMADA 3720 SoC Enables Ground-Breaking Modular Router from CZ.NIC

    By Maen Suleiman, Senior Software Product Line Manager, Marvell

    Marvell ARMADA® embedded processors are part of another exciting networking solution for a crowdfunding project and are helping “power” the global open hardware and software engineering community as innovative new products are developed. CZ.NIC, an open source networking research team based in the Czech Republic, just placed its Turris MOX modular networking appliance on the Indiegogo® platform and has already obtained over $110,000 in financial backing. 

    MOX has a highly flexible modular arrangement. Central to this is a network processing module featuring a Marvell® ARMADA 3720 network processing system-on-chip (SoC). This powerful yet energy efficient 64-bit device includes dual Cortex®-A53 ARM® processor cores and an extensive array of high speed IOs (PCIe 2.0, 2.5 GbE, USB 3.0, etc.).

    Figure 1: The MOX Solution from CZ.NIC 

    The MOX concept is simple to understand. Rather than having to procure a router with excessive features and resources that all add to the cost but actually prove to be superfluous, users can just buy a single MOX that can subsequently be extended into whatever form of network appliance a user needs. Attachment of additional modules means that specific functionality can be provided to meet exact user expectations. There is an Ethernet module that adds 4 GbE ports, a fiber module that adds fiber optic SFP connectivity, and an extension module that adds a mini PCIe connection. At a later stage, if requirements change, it is possible for that same MOX to be repurposed into a completely different appliance by adding appropriate modules.
     Figure 2: The MOX Add-On Modules - Base, Extension, Ethernet and SFP 

    The MOX units run on Turris OS, an open source operating system built on top of the extremely popular OpenWrt® embedded Linux® distribution (as supported by Marvell’s ARMADA processors). This gives the appliance a great deal of flexibility, allowing it to execute a wide variety of different networking functions that enable it to operate as an email server, web server, firewall, etc. Additional MOX modules are already under development and will be available soon. 

    This project follows on CZ.NIC’s previous crowdfunding campaign using Marvell’s ARMADA SoC processing capabilities for the Turris Omnia high performance open source router - which gained huge public interest and raised 9x its original investment target. Turris MOX underlines the validity of the open source software ecosystem that has been built up around the ARMADA SoC to help customers bring their ideas to life. 

    Click here to learn more on this truly unique Indiegogo campaign.

  • April 05, 2018

    VMware vSAN ReadyNode Recipes Can Use Substitutions

    By Todd Owens, Field Marketing Director, Marvell

    When you are baking a cake, you sometimes substitute in different ingredients to make the result better. The same can be done with VMware vSAN ReadyNode configurations, or recipes. Some changes to the documented configurations can make the end solution much more flexible and scalable. VMware allows certain elements within a vSAN ReadyNode bill of materials (BOM) to be substituted. In this VMware blog, the author outlines the server elements in the BOM that can change, including:
    • CPU
    • Memory
    • Caching Tier
    • Capacity Tier
    • NIC
    • Boot Device
    However, changes can only be made with devices that are certified as supported by VMware. The list of certified I/O devices can be found in the VMware vSAN Compatibility Guide, and the full portfolio of NICs, FlexFabric Adapters and Converged Network Adapters from HPE and Cavium is supported. If we zero in on the HPE recipes for vSAN ReadyNode configurations, here are the substitutions you can make for I/O adapters. So, now we know what substitutions we can make in these vSAN storage solutions. What are the benefits to the customer of making this change? There are several benefits to the HPE/Cavium technology compared to the other adapter offerings.
    • HPE 520/620 Series adapters support Universal RDMA – the ability to support both RoCE and iWARP RDMA protocols with the same adapter.
      • Why does this matter? Universal RDMA offers flexibility of choice when low latency is a requirement. RoCE works great if customers have already deployed a lossless Ethernet infrastructure. iWARP is a great choice for greenfield environments, as it works on existing networks, doesn’t require the complexity of lossless Ethernet and thus scales far better.
    • Concurrent Network Partitioning (NPAR) and SR-IOV
      • NPAR (Network Partitioning) allows for virtualization of the physical adapter port. SR-IOV offload moves management of the VM network from the hypervisor (CPU) to the adapter. With HPE/Cavium adapters, these two technologies can work together to optimize connectivity for virtual server environments and offload the hypervisor (and thus the CPU) from managing VM traffic, while providing full Quality of Service at the same time.
    • Storage Offload
      • Ability to reduce CPU utilization by offering iSCSI or FCoE Storage offload on the adapter itself. The freed-up CPU resources can then be used for other, more critical tasks and applications. This also reduces the need for dedicated storage adapters, connectivity hardware and switches, lowering overall TCO for storage connectivity.
    • Offloads in general – In addition to the RDMA, storage and SR-IOV offloads mentioned above, HPE/Cavium Ethernet adapters also support TCP/IP stateless offloads and DPDK small-packet acceleration offloads. Each of these offloads moves work from the CPU to the adapter, reducing the CPU utilization associated with I/O activity. As mentioned in my previous blog, because these offloads bypass tasks in the O/S kernel, they also mitigate performance issues associated with Spectre/Meltdown vulnerability fixes on x86 systems.
    • Adapter management integration with vCenter – All HPE/Cavium Ethernet adapters are managed by Cavium’s QCC utility, which can be fully integrated into VMware vCenter. This provides a much simpler approach to I/O management in vSAN configurations.
    In summary, if you are looking to deploy vSAN ReadyNode, you might want to fit in a substitution or two on the I/O front to take advantage of all the intelligent capabilities available in Ethernet I/O adapters from HPE/Cavium. Sure, the standard ingredients work, but the right substitution will make things more flexible, scalable and deliver an overall better experience for your client.
  • April 02, 2018

    Understanding Today’s Network Telemetry Requirements

    By Tal Mizrahi, Feature Definition Architect, Marvell

    There have, in recent years, been fundamental changes to the way in which networks are implemented, as data demands have necessitated a wider breadth of functionality and elevated degrees of operational performance. Accompanying all this is a greater need for accurate measurement of such performance benchmarks in real time, plus in-depth analysis in order to identify and subsequently resolve any underlying issues before they escalate. 

    The rapidly accelerating speeds and rising levels of complexity exhibited by today’s data networks mean that monitoring activities of this kind are becoming increasingly difficult to execute. Consequently, more sophisticated and inherently flexible telemetry mechanisms are now being mandated, particularly for data center and enterprise networks.

    A broad spectrum of options is available when looking to extract telemetry material, whether that be passive monitoring, active measurement, or a hybrid approach. An increasingly common practice is the piggy-backing of telemetry information onto the data packets that are passing through the network. This tactic is being utilized within both in-situ OAM (IOAM) and in-band network telemetry (INT), as well as in an alternate marking performance measurement (AM-PM) context.
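    The piggy-backing idea can be sketched in a few lines: each hop appends its own metadata (node identity, timestamps) to a record carried along with the packet, so a collector at the end of the path can reconstruct the route and per-hop delay. The record layout below is invented for illustration and is not the IOAM or INT wire format.

```python
# Toy model of per-hop telemetry piggy-backing (illustrative, not a wire format).

def append_hop_metadata(telemetry, node_id, rx_ns, tx_ns):
    """Each transit node appends its identity and residence time (ns)."""
    telemetry.append({"node": node_id, "latency_ns": tx_ns - rx_ns})
    return telemetry

# Simulate a packet crossing two switches, each recording rx/tx timestamps.
packet_telemetry = []
for node, (rx, tx) in [("leaf1", (0, 1200)), ("spine1", (5000, 5900))]:
    append_hop_metadata(packet_telemetry, node, rx, tx)
# packet_telemetry now holds the per-hop latency for the path leaf1 -> spine1
```

    The appeal of this hybrid approach is that measurements ride on real traffic, so they reflect exactly what production packets experience.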

    At Marvell, our approach is to provide a diverse and versatile toolset through which a wide variety of telemetry approaches can be implemented, rather than being confined to a specific measurement protocol. To learn more about this subject, including longstanding passive and active measurement protocols, and the latest hybrid-based telemetry methodologies, please view the video below and download our white paper.

    WHITE PAPER, Network Telemetry Solutions for Data Center and Enterprise Networks

  • March 02, 2018

    Connecting Shared Storage – iSCSI or Fibre Channel

    By Todd Owens, Field Marketing Director, Marvell

    At Cavium, we provide adapters that support a variety of protocols for connecting servers to shared storage including iSCSI, Fibre Channel over Ethernet (FCoE) and native Fibre Channel (FC). One of the questions we get quite often is which protocol is best for connecting servers to shared storage? The answer is, it depends. 

    We can simplify the answer by eliminating FCoE, as it has proven to be a great protocol for converging the edge of the network (server to top-of-rack switch), but not really effective for multi-hop connectivity, taking servers through a network to shared storage targets. That leaves us with iSCSI and FC. 

    Typically, people equate iSCSI with lower cost and ease of deployment because it works on the same kind of Ethernet network that servers and clients are already running on. These same folks equate FC as expensive and complex, requiring special adapters, switches and a “SAN Administrator” to make it all work. 

    This may have been the case in the early days of shared storage, but things have changed as the intelligence and performance of the storage network environment has evolved. What customers need to do is look at the reality of what they need from a shared storage environment and make a decision based on cost, performance and manageability. For this blog, I’m going to focus on these three criteria and compare 10Gb Ethernet (10GbE) with iSCSI hardware offload and 16Gb Fibre Channel (16GFC). 

    Before we crunch numbers, let me start by saying that shared storage requires a dedicated network, regardless of the protocol. The idea that iSCSI can be run on the same network as the server and client network traffic may be feasible for small or medium environments with just a couple of servers, but for any environment with mission-critical applications or with say four or more servers connecting to a shared storage device, a dedicated storage network is strongly advised to increase reliability and eliminate performance problems related to network issues. 

    Now that we have that out of the way, let’s start by looking at the cost difference between iSCSI and FC. We have to take into account the costs of the adapters, optics/cables and switch infrastructure. Here’s the list of Hewlett Packard Enterprise (HPE) components I will use in the analysis. All prices are based on published HPE list prices.

    Table: Hewlett Packard Enterprise (HPE) component list prices

    Notes:
    1. An optical transceiver is needed at both the adapter and switch ports for 10GbE networks; the cost per port is therefore twice the transceiver cost.
    2. FC switch pricing includes full-featured management software and licenses.
    3. FC Host Bus Adapters (HBAs) ship with transceivers, so only one additional transceiver is needed for the switch port.

    So if we do the math, the cost per port looks like this: 

    10GbE iSCSI with SFP+ optics = $437 + $2,734 + $300 = $3,471

    10GbE iSCSI with 3-meter Direct Attach Cable (DAC) = $437 + $269 + $300 = $1,006

    16GFC with SFP+ optics = $773 + $405 + $1,400 = $2,578

    So iSCSI is the lowest price if DAC cables are used. Note that in my example I chose a 3-meter cable length, but even if you choose shorter or longer cables (HPE supports 0.65- to 7-meter cable lengths), this is still the lowest-cost connection option. Surprisingly, the cost of the 10GbE optics makes the iSCSI solution with optical connections the most expensive configuration. When using fiber optic cables, the 16GFC configuration is lower cost.
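    The per-port arithmetic above can be double-checked in a few lines. The prices are the HPE list prices quoted earlier (adapter + optics/cable + switch port); the option labels are mine.

```python
# Per-port connection cost for each option, from the quoted HPE list prices.
cost_per_port = {
    "10GbE iSCSI, SFP+ optics": 437 + 2734 + 300,  # two transceivers per link
    "10GbE iSCSI, 3m DAC":      437 + 269 + 300,
    "16GFC, SFP+ optics":       773 + 405 + 1400,  # HBA ships with its optic
}

# The DAC-cabled iSCSI option comes out cheapest, at $1,006 per port.
cheapest = min(cost_per_port, key=cost_per_port.get)
```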

    So what are the trade-offs with DAC versus SFP+ options? It really comes down to distance and the number of connections required. DAC cables can only span up to 7 meters or so, which gives customers only limited reach within or across racks. If customers have multiple racks or distance requirements of more than 7 meters, FC becomes the more attractive option from a cost perspective. Also, DAC cables are bulky, and when cabling 10 or more ports the cable bundles can become unwieldy to deal with.

    On the performance side, let’s look at the differences. iSCSI adapters have impressive specifications of 10Gbps bandwidth and 1.5 million IOPS, which offers very good performance. For FC, we have 16Gbps of bandwidth and 1.3 million IOPS. So FC has more bandwidth and iSCSI can deliver slightly more transactions - that is, if you take the specifications at face value. If you dig a little deeper, here are some things we learn:

    • 16GFC delivers full line-rate performance for block storage data transfers. Today’s 10GbE iSCSI runs on Ethernet with Data Center Bridging (DCB), which makes it a lossless transmission protocol like FC. However, the iSCSI commands are transferred via Transmission Control Protocol (TCP)/IP, which adds significant overhead to the headers of each packet. Because of this inefficiency, the actual bandwidth for iSCSI traffic is usually well below the stated line rate. This gives 16GFC the clear advantage in terms of bandwidth performance.
    • iSCSI provides the best IOPS performance for block sizes below 2K. Figure 1 shows IOPS performance of Cavium iSCSI with hardware offload. Figure 2 shows IOPS performance of Cavium’s QLogic 16GFC adapter and you can see better IOPS performance for 4K and above, when compared to iSCSI.
    • Latency is an order of magnitude lower for FC compared to iSCSI. Latency of Brocade Gen 5 (16Gb) FC switching (using cut-through switch architecture) is in the 700 nanoseconds range and for 10GbE it is in the range of 5 to 50 microseconds. The impact of latency gets compounded with iSCSI should the user implement 10GBASE-T connections in the iSCSI adapters. This adds another significant hit to the latency equation for iSCSI.
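    To put a rough number on the TCP/IP header penalty described in the first bullet, the calculation below estimates the payload ("goodput") a 10GbE iSCSI link can carry with standard 1500-byte frames. It is a simplification: it counts only Ethernet framing and TCP/IP headers, ignoring iSCSI PDU overhead and TCP dynamics, so real-world numbers are lower still.

```python
# Rough goodput estimate for iSCSI over standard Ethernet frames (illustrative).

def iscsi_goodput_gbps(line_rate_gbps=10.0, mtu=1500):
    # Per-frame wire cost: preamble (8) + Ethernet header (14) + FCS (4)
    # + inter-frame gap (12) = 38 bytes on top of the MTU.
    wire_bytes = mtu + 38
    # TCP (20) + IP (20) headers consume 40 bytes of the MTU.
    payload_bytes = mtu - 40
    return line_rate_gbps * payload_bytes / wire_bytes

# A 10GbE link yields roughly 9.5 Gbps of payload before any
# iSCSI-level overhead is counted; FC block transfers avoid this tax.
```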

    Figure 1: Cavium’s iSCSI Hardware Offload IOPS Performance  

    Figure 2: Cavium’s QLogic 16Gb FC IOPS performance 

    If we look at manageability, this is where things have probably changed the most. Keep in mind, Ethernet network management hasn’t really changed much. Network administrators create virtual LANs (vLANs) to separate network traffic and reduce congestion. These network administrators have a variety of tools and processes that allow them to monitor network traffic, run diagnostics and make changes on the fly when congestion starts to impact application performance. The same management approach applies to the iSCSI network and can be done by the same network administrators. 

    On the FC side, companies like Cavium and HPE have made significant improvements on the software side of things to simplify SAN deployment, orchestration and management. Technologies like fabric-assigned port worldwide name (FA-WWN) from Cavium and Brocade enable the SAN administrator to configure the SAN without having HBAs available, and allow a failed server to be replaced without having to reconfigure the SAN fabric. Cavium and Brocade have also teamed up to improve the FC SAN diagnostics capability of Gen 5 (16Gb) Fibre Channel fabrics by implementing features such as Brocade ClearLink™ diagnostics, Fibre Channel ping (FC ping) and Fibre Channel traceroute (FC traceroute), link cable beacon (LCB) technology and more. HPE’s Smart SAN for HPE 3PAR gives the storage administrator the ability to zone the fabric and map the servers and LUNs to an HPE 3PAR StoreServ array from the HPE 3PAR StoreServ management console.

    Another way to look at manageability is in the number of administrators on staff. In many enterprise environments, there are typically dozens of network administrators; in those same environments, there may be less than a handful of SAN administrators. Yes, there are lots of LAN-connected devices that need to be managed and monitored, and far fewer SAN-connected devices. The point is that it doesn’t take an army to manage a SAN with today’s FC management software from vendors like Brocade.

    So what is the right answer between FC and iSCSI? Well, it depends. If application performance is the biggest criteria, it’s hard to beat the combination of bandwidth, IOPS and latency of the 16GFC SAN. If compatibility and commonality with existing infrastructures is a critical requirement, 10GbE iSCSI is a good option (assuming the 10GbE infrastructure exists in the first place). If security is a key concern, FC is the best choice. When is the last time you heard of a FC network being hacked into? And if cost is the key criteria, iSCSI with DAC or 10GBASE-T connection is a good choice, understanding the tradeoff in latency and bandwidth performance. 

    So in very general terms, FC is the best choice for enterprise customers who need high-performance, mission-critical capability, high reliability and scalable shared storage connectivity. For smaller customers who are more cost sensitive, iSCSI is a great alternative. iSCSI is also a good protocol for pre-configured systems like hyper-converged storage solutions, providing simple connectivity to existing infrastructure.

    As a wise manager once told me many years ago, “If you start with the customer and work backwards, you can’t go wrong.” So the real answer is understand what the customer needs and design the best solution to meet those needs based on the information above.

  • February 22, 2018

    Marvell to Demonstrate CyberTAN White Box Solution Incorporating the Marvell ARMADA 8040 SoC Running Telco Systems NFVTime Universal CPE OS at Mobile World Congress 2018

    By Maen Suleiman, Senior Software Product Line Manager, Marvell

    As more workloads are moving to the edge of the network, Marvell continues to advance technology that will enable the communication industry to benefit from the huge potential that network function virtualization (NFV) holds. At this year’s Mobile World Congress (Barcelona, 26th Feb to 1st Mar 2018), Marvell, along with some of its key technology collaborators, will be demonstrating a universal CPE (uCPE) solution that will enable telecom operators, service providers and enterprises to deploy needed virtual network functions (VNFs) to support their customers’ demands. 

    The ARMADA® 8040 uCPE solution, one of several ARMADA edge computing solutions to be introduced to the market, will be located at the Arm booth (Hall 6, Stand 6E30) and will run Telco Systems NFVTime uCPE operating system (OS) with two deployed off-the-shelf VNFs provided by 6WIND and Trend Micro, respectively, that enable virtual routing and security functionalities.  The CyberTAN white box solution is designed to bring significant improvements in both cost effectiveness and system power efficiency compared to traditional offerings while also maintaining the highest degrees of security.

      CyberTAN white box solution incorporating Marvell ARMADA 8040 SoC

    The CyberTAN white box platform comprises several key Marvell technologies that together bring an integrated solution designed to enable significant hardware cost savings. The platform incorporates the power-efficient Marvell® ARMADA 8040 system-on-chip (SoC) based on the Arm Cortex®-A72 quad-core processor, with up to 2GHz CPU clock speed, and a Marvell E6390x Link Street® Ethernet switch on-board. The Marvell Ethernet switch supports a 10G uplink and 8 x 1GbE ports along with integrated PHYs, four of which are auto-media GbE ports (combo ports).

     The CyberTAN white box benefits from the Marvell ARMADA 8040 processor’s rich feature set and robust software ecosystem, including:

    • both commercial and industrial grade offerings
    • dual 10G connectivity, 10G Crypto and IPSEC support
    • SBSA compliance
    • Arm TrustZone support
    • broad software support from the following: UEFI, Linux, DPDK, ODP, OPTEE, Yocto, OpenWrt, CentOS and more

    In addition, the uCPE platform supports Mini PCI Express (mPCIe) expansion slots that can enable advanced Marvell 11ac/11ax Wi-Fi or additional wired/wireless connectivity; up to 16GB DDR4 DIMM; 2 x M.2 SATA, one SATA and eMMC options for storage; and SD and USB expansion slots for additional storage or other wired/wireless connectivity such as LTE.

    At the Arm booth, Telco Systems will demonstrate its NFVTime uCPE operating system on the CyberTAN white box, with zero-touch provisioning (ZTP) feature. NFVTime is an intuitive NFVi-OS that facilitates the entire process of deploying VNFs onto the uCPE, and avoids the complex and frustrating management and orchestration activities normally associated with putting NFV-based services into action. The demonstration will include two main VNFs:

    • A 6WIND virtual router VNF based on 6WIND Turbo Router which provides high performance, ready-to-use virtual routing and firewall functionality; and
    • A Trend Micro security VNF based on Trend Micro’s Virtual Function Network Suite (VNFS) that offers elastic and high-performance network security functions which provide threat defense and enable more effective and faster protection.

    Please contact your Marvell sales representative to arrange a meeting at Mobile World Congress or drop by the Arm booth (Hall 6, Stand 6E30) during the conference to see the uCPE solution in action.

  • February 20, 2018

    If You're Not Using Intelligent I/O Adapters, You Should Be!

    By Todd Owens, Field Marketing Director, Marvell

    Like a kid in a candy store, choose I/O wisely. 

    Remember, as a child, a quick stop at the convenience store: standing in front of the candy aisle, your parents saying, “hurry and pick one.” But with so many choices, the decision was often confusing. With time running out, you’d usually just grab the name-brand candy you were familiar with. But what were you missing out on? Perhaps now you realize there were more delicious or healthy offerings you could have chosen. 

    I use this as an analogy to discuss the choice of I/O technology for use in server configurations. There are lots of choices, and it takes time to understand all the differences. As a result, system architects in many cases just fall back to the legacy name-brand adapter they have become familiar with. Is this the best option for their client, though? Not always. Here are some reasons why. 

    Some of today’s Ethernet adapters provide added capabilities that I refer to as “Intelligent I/O”. These adapters utilize a variety of offload technology and other capabilities to take on tasks associated with I/O processing that are typically done in software by the CPU when using a basic “standard” Ethernet adapter. Intelligent offloads include things like SR-IOV, RDMA, iSCSI, FCoE or DPDK. Each of these offloads the work to the adapter and, in many cases, bypasses the O/S kernel, speeding up I/O transactions and increasing performance.

    As servers become more powerful and get packed with more virtual machines, running more applications, CPU utilizations of 70-80% are now commonplace. By using adapters with intelligent offloads, CPU utilization for I/O transactions can be reduced significantly, giving server administrators more CPU headroom. This means more CPU resources for applications or to increase the VM density per server.
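    The kind of headroom gain described above can be sketched with a toy calculation (the utilization figures below are illustrative assumptions, not measured benchmarks):

```python
# Toy model of CPU headroom recovered by offloading I/O processing to an
# intelligent adapter. All percentages are illustrative assumptions.

def cpu_headroom(total_util, io_share, offload_fraction):
    """Return overall CPU utilization after offloading I/O work.

    total_util       -- current overall CPU utilization (0..1)
    io_share         -- portion of that utilization spent on I/O processing (0..1)
    offload_fraction -- portion of the I/O work the adapter takes over (0..1)
    """
    io_util = total_util * io_share
    return total_util - io_util * offload_fraction

# Example: a server at 80% utilization, a quarter of which is I/O processing;
# an intelligent adapter takes over 90% of that I/O work.
new_util = cpu_headroom(0.80, 0.25, 0.90)
print(f"Utilization drops from 80% to {new_util:.0%}")  # prints "... to 62%"
```

    The freed 18 points of utilization become headroom for applications or additional VMs.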

    Another reason is to mitigate the performance impact of the Spectre and Meltdown fixes now required for x86 server processors. These side-channel vulnerabilities in x86 processors required kernel patches, and those patches can significantly reduce CPU performance. For example, Red Hat reported the impact could be as much as a 19% performance degradation. That’s a big performance hit.

    Storage offloads and offloads like SR-IOV, RDMA and DPDK all bypass the O/S kernel. Because they bypass the kernel, the performance impacts of the Spectre and Meltdown fixes are bypassed as well. This means I/O transactions with intelligent I/O adapters are not impacted by these fixes, and I/O performance is maximized.

    Offloads reduce the impact of Meltdown patches

    Finally, intelligent I/O can play a role in reducing cost and complexity and optimizing performance in virtual server environments. Some intelligent I/O adapters have port virtualization capabilities. Cavium Fibre Channel HBAs implement N-port ID Virtualization, or NPIV, to allow a single Fibre Channel port to appear as multiple virtual Fibre Channel adapters to the hypervisor. For Cavium FastLinQ Ethernet Adapters, Network Partitioning, or NPAR, is utilized to provide similar capability for Ethernet connections. Up to eight independent connections can be presented to the host O/S per port, making a single dual-port adapter look like 16 NICs to the operating system. Each virtual connection can be set to specific bandwidth and priority settings, providing full quality of service per connection.
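    The partitioning idea can be sketched as a simple weighted split of a physical link (a toy model; the partition counts match the text, but the traffic classes and weights are made-up examples, not a real adapter's configuration):

```python
# Illustrative model of NPAR-style port partitioning: one physical port is
# presented to the OS as several virtual NICs, each with a bandwidth weight.
# Traffic classes and weights below are invented for illustration.

def partition_port(link_speed_gbps, weights):
    """Split a physical link among partitions in proportion to their weights."""
    total = sum(weights)
    return [round(link_speed_gbps * w / total, 2) for w in weights]

# A 10GbE port split among four partitions (e.g. management, migration,
# storage, VM data) with 1:2:3:4 relative weights:
print(partition_port(10, [1, 2, 3, 4]))  # [1.0, 2.0, 3.0, 4.0]

# A dual-port adapter with eight partitions per port appears as 16 NICs:
ports, partitions_per_port = 2, 8
print(ports * partitions_per_port)  # 16
```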

    The advantage of this port virtualization capability is two-fold. First, the number of cables and connections to a server can be reduced. In the case of storage, four 8Gb Fibre Channel connections can be replaced by a single 32Gb Fibre Channel connection. For Ethernet, eight 1GbE connections can easily be replaced by a single 10GbE connection and two 10GbE connections can be replaced with a single 25GbE connection, with 20% additional bandwidth to spare.
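    The consolidation arithmetic above can be checked directly (a sketch using nominal line rates, ignoring encoding overhead):

```python
# Check the cable-consolidation arithmetic using nominal link rates.

def spare_fraction(new_gbps, old_links_gbps):
    """Fraction of the new link left spare after absorbing the old links."""
    used = sum(old_links_gbps)
    return (new_gbps - used) / new_gbps

# Eight 1GbE links consolidated onto one 10GbE link:
print(spare_fraction(10, [1] * 8))   # 0.2 -> 20% spare
# Two 10GbE links consolidated onto one 25GbE link:
print(spare_fraction(25, [10, 10]))  # 0.2 -> 20% spare
```

    In both cases the replacement link absorbs the old connections with 20% of its bandwidth left over, matching the figure in the text.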

    At HPE, there are more than fifty 10Gb-100GbE Ethernet adapters to choose from across the HPE ProLiant, Apollo, BladeSystem and HPE Synergy server portfolios. That’s a lot of documentation to read and compare. Cavium is proud to be a supplier of eighteen of these Ethernet adapters, and we’ve created a handy quick reference guide to highlight which of these offloads and virtualization features are supported on which adapters. View the complete Cavium HPE Ethernet Adapter Quick Reference guide here.

    For Fibre Channel HBAs, there are fewer choices (only nineteen), but we make a quick reference document available for our HBA offerings at HPE as well. You can view the Fibre Channel HBA Quick Reference here.

    In summary, when configuring HPE servers, think twice before selecting your I/O device. Choose an Intelligent I/O Adapter like those from HPE and Cavium. Cavium provides the broadest portfolio of intelligent Ethernet and Fibre Channel adapters for HPE Gen9 and Gen10 Servers and they support most, if not all, of the features mentioned in this blog. The best news is that the HPE/Cavium adapters are offered at the same or lower price than other products with fewer features. That means with HPE and Cavium I/O, you get more for less, and it just works too!

  • February 05, 2018

    Marvell SoC Technology Underpins Powerful pfSense Secure Gateway

    By Maen Suleiman, Senior Software Product Line Manager, Marvell

    Marvell’s ground-breaking ARMADA® 38x processor series continues to see momentum in integration into new network and security designs. Most recently, the ARMADA 385 processor has been incorporated into Netgate’s new SG-3100 product offering. 

    Netgate’s objective with the SG-3100 was to bring to market an entry-level secure gateway solution that offered substantially more horsepower than competing products in the same price range. The target criteria for the new design were:

    • Significantly greater performance
    • A broader range of functionality
    • Flexible configuration to suit customers’ particular needs

    Marvell’s engineering team was pleased to collaborate with Netgate on this ambitious project.


                                                  Figure 1: Netgate SG-3100 powered by Marvell ARMADA 385 Processor

    The SG-3100 exhibits a high degree of flexibility and can be employed as a security firewall, LAN or WAN router, or VPN solution. It can also act as a DHCP server or DNS server, as well as providing intrusion detection system (IDS) and intrusion prevention system (IPS) capabilities. This extremely configurable unit comes equipped with 8GB eMMC Flash data storage or two M.2 SATA-based solid-state drives (SSDs), and also supports LTE. Thanks to its Marvell® 88E6141 4-port switched LAN interface, the compact, cost-effective product easily facilitates bridging multiple wired and wireless networks. 

    Several factors drove Netgate’s decision to use Marvell’s ARMADA 385, starting with the ARMADA 38x ecosystem, which includes the ARMADA 38x ClearFog community board from SolidRun, and the ARMADA 38x FreeBSD port developed by Semihalf. Additionally, an increasing number of pfSense users had requested access to a board that provided three Ethernet ports, especially for dual-WAN operation. The ARMADA 385’s extensive embedded connectivity satisfies this need. 

    Based on the Arm® Cortex®-A9 topology, the ARMADA 385 system-on-chip (SoC) at the heart of the SG-3100 provides highly effective, dual-core processing capabilities. The SoC has a total of three Ethernet ports - two that support 1 Gbps data rates and a third capable of supporting either 2.5 Gbps or 1 Gbps. In the SG-3100 design, the ARMADA 385 is accompanied by Marvell’s 88E6141 multi-port Ethernet switch, which also supports 2.5Gbps operation through one of its ports. 

    The Netgate SG-3100 runs at 1.6GHz and is ideal for small offices and domestic environments. And thanks to the constituent IC technology, this solution packs serious throughput at a very compelling price.  

  • January 29, 2018

    Marvell Named a Top 100 Global Innovator for the Sixth Consecutive Year

    By Kelvin Vivian, Director of Intellectual Property, Marvell

    The term ‘innovation’ is frequently used in business today. For many, it means ideas that appear out of the blue and lead to mind-blowing discoveries and achievements.

    While this might be the perceived outcome of innovation, the reality is that true innovation can arise in many different forms and at any scale, and a healthy dose of creativity and idea sharing must be encouraged if businesses are to effectively harness the innovative potential of their employees. 

    At Marvell, we pride ourselves on working collaboratively and creatively. This enables employees to be the most innovative versions of themselves, and it is a large part of what has contributed to our sixth consecutive year of inclusion in the Clarivate Analytics Top 100 Global Innovators list.

    Placement on the list has become the standard measure for innovation across the world and is recognized as a significant achievement. The award itself is based in part on global reach — we hold more than 9,000 patents worldwide — grant success rates and influence of patented technology, and it serves as a testament to Marvell’s culture of innovation and commitment to providing differentiated, breakthrough technology solutions. 

    While inclusion on this list provides a celebratory point of reflection for all of us at Marvell, it is also a reminder of the work that lies ahead of us and our colleagues across the industry who, while competing, also share a common passion and goal, which, simply put, is to make technology that makes life better. And in today’s market especially, it’s more important than ever that we and our partners continue to push the boundaries of innovation at every turn. As the physicist Albert Einstein said, “You can’t solve a problem on the same level that it was created. You have to rise above it to the next level.” 

    So, while we extend our congratulations to colleagues and competitors alike (without whom there would be no yardstick to measure ourselves by, and no goal to aim for), we can’t wait to see what new innovations and types of critical and creative thinking this year will bring. 

    See you all on the other side!

  • January 23, 2018

    New Brew: Latest MACCHIATObin Community Boards are Able to Address Much Wider Scope of Developer Requirements

    By Maen Suleiman, Senior Software Product Line Manager, Marvell

    Following the success of the MACCHIATObin® development platform, which was released back in the spring, Marvell and technology partner SolidRun have now announced the next stage in the progression of this hardware offering. Drawing on customer feedback, a series of enhancements have been made to the original concept, so that these mini-ITX boards are better optimized to meet the requirements of engineers.

    Marvell and SolidRun announce the availability of two new MACCHIATObin products that will supersede the previous release. They are the MACCHIATObin Single Shot and the MACCHIATObin Double Shot boards.

    As before, these mini-ITX format networking community boards both feature the powerful processing capabilities of Marvell’s ARMADA® 8040 system-on-chip (SoC) and stay true to the original objective of bringing an affordable Arm-based development resource with elevated performance to the market. However, engineers now have a choice in terms of how much supporting functionality comes with it - thus making the platform even more attractive and helping it reach a much wider audience. 

    Figure 1: MACCHIATObin Single Shot (left) and MACCHIATObin Double Shot (right) 

    The more streamlined MACCHIATObin Single Shot option presents an entry-level board that should appeal to engineers with budgetary constraints, with a much lower price tag than the original board, coming in at just $199. It comes with two 10G SFP+ connectors (without the option of two 10G copper connectors) and, unlike its predecessor, does not include a DDR4 DIMM by default, but still has a robust 1.6GHz processing speed. 

    This is complemented by the higher performance MACCHIATObin Double Shot. This unleashes the full 2GHz of processing capacity that can be derived from the ARMADA 8040, which relies on a 64-bit quad-core Arm Cortex-A72 processor core. 4GB of DDR4 DIMM is included. At only $399 it represents great value for money - costing only slightly more than the original, but with extra features and stronger operational capabilities being delivered. It comes with additional accessories that are not in the Single Shot package - including a power cable and a microUSB-to-USB cable. 

    Both the Single Shot and Double Shot versions incorporate heatsink and fan mechanisms in order to ensure that better reliability is maintained through more effective thermal management. The fan has an airflow of 6.7 cubic feet per minute (CFM) with low noise operation. A number of layout changes have been implemented upon the original design to better utilize the available space and to make the board more convenient for those using it. For example, the SD card slot has been moved to make it more accessible and likewise the SATA connectors are now better positioned, allowing easier connection of multiple cables. The micro USB socket has also been relocated to aid engineers. 

    A 3-pin UART header has been added to the console UART (working in parallel with the FTDI USB-to-UART interface IC). This means that developers now have an additional connectivity option, making the MACCHIATObin community board more suitable for deployment in remote locations or where it needs to interface with legacy equipment that does not have a USB port. The DIP switches have been replaced with jumpers, which again gives the boards greater versatility. The JTAG connector is no longer assembled by default, and the PCI Express (PCIe) x4 slot has been replaced with an open-ended x4 slot so that it can accommodate a wider variety of cards (x8 and x16, as well as x4 PCIe), such as graphics processor cards. Furthermore, the fixed LED emitter has been replaced by one that is general purpose input/output (GPIO) controlled, thereby enabling operational activity to be indicated.

    The fact that these units have the same form factor as the original means that they offer a like-for-like replacement for the previous model of the MACCHIATObin board. Existing designs that already use this board can therefore be upgraded to the higher performance MACCHIATObin Double Shot version or, conversely, scaled down to the MACCHIATObin Single Shot in order to reduce the associated costs. 

    Together, the MACCHIATObin Double Shot and Single Shot boards show that the team at Marvell is always listening to our customer base and responding to their needs. Learning from the first MACCHIATObin release, we have been able to make significant refinements and consequently develop two very distinct product offerings: one for engineers working to a tight budget, for whom the previous board would not have been viable, and another for engineers who want to boost performance levels.

  • January 11, 2018

    Storing the World’s Data

    By Marvell PR Team

    Storage is the foundation for a data-centric world, but how tomorrow’s data will be stored is the subject of much debate. What is clear is that data growth will continue to rise significantly. According to a report compiled by IDC titled ‘Data Age 2025’, the amount of data created will grow at an almost exponential rate. This amount is predicted to surpass 163 Zettabytes by the middle of the next decade (which is almost 8 times what it is today, and nearly 100 times what it was back in 2010). Increasing use of cloud-based services, the widespread roll-out of Internet of Things (IoT) nodes, virtual/augmented reality applications, autonomous vehicles, machine learning and the whole ‘Big Data’ phenomenon will all play a part in the new data-driven era that lies ahead. 
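    The multiples cited from the IDC report imply steep annual growth rates, which a quick calculation can sanity-check (assuming the "today" baseline is roughly 2017, i.e. 8 years before 2025; that baseline year is an assumption, not stated in the text):

```python
# Sanity-check the growth multiples cited from IDC's 'Data Age 2025' report:
# ~163 ZB by 2025, described as roughly 8x the current (~2017) level and
# nearly 100x the 2010 level. The 2017 baseline year is an assumption.

def implied_cagr(multiple, years):
    """Compound annual growth rate implied by growing `multiple`x over `years`."""
    return multiple ** (1 / years) - 1

# 8x over the 8 years from 2017 to 2025:
print(f"{implied_cagr(8, 8):.1%}")     # ~29.7% per year
# 100x over the 15 years from 2010 to 2025:
print(f"{implied_cagr(100, 15):.1%}")  # ~35.9% per year
```

    Both figures land around 30% compound annual growth, consistent with the report's "almost exponential" characterization.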

    Further down the line, the building of smart cities will lead to an additional ramp up in data levels, with highly sophisticated infrastructure being deployed in order to alleviate traffic congestion, make utilities more efficient, and improve the environment, to name a few. A very large proportion of the data of the future will need to be accessed in real-time. This will have implications on the technology utilized and also where the stored data is situated within the network. Additionally, there are serious security considerations that need to be factored in, too. 

    So that data centers and commercial enterprises can keep overhead under control and make operations as efficient as possible, they will look to follow a tiered storage approach, using the most appropriate storage media so as to lower the related costs. Decisions on the media utilized will be based on how frequently the stored data needs to be accessed and the acceptable degree of latency. This will require the use of numerous different technologies to make it fully economically viable - with cost and performance being important factors. 
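    A tiering policy of the kind described might be sketched as below (a hypothetical illustration: the tier names echo the media discussed in this post, but the thresholds are invented, not any real product's policy):

```python
# Sketch of a tiered-storage placement policy. The thresholds below are
# illustrative assumptions; real policies weigh cost, capacity and SLAs.

def choose_tier(accesses_per_day, max_latency_ms):
    """Pick a storage tier from access frequency and latency tolerance."""
    if max_latency_ms < 1:
        return "NVMe SSD"   # hot, latency-critical data
    if accesses_per_day > 100:
        return "SATA SSD"   # warm, frequently read data
    if accesses_per_day > 1:
        return "HDD"        # cool bulk data
    return "archive"        # cold data, rarely touched

print(choose_tier(5000, 0.2))  # NVMe SSD
print(choose_tier(500, 10))    # SATA SSD
print(choose_tier(0.1, 100))   # archive
```

    The point of such a policy is exactly the cost/performance trade-off the paragraph describes: only data that genuinely needs low latency pays for the fastest media.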

    There are now a wide variety of different storage media options out there. In some cases these are long established while in others they are still in the process of emerging. Hard disk drives (HDDs) in certain applications are being replaced by solid state drives (SSDs), and with the migration from SATA to NVMe in the SSD space, NVMe is enabling the full performance capabilities of SSD technology. HDD capacities are continuing to increase substantially and their overall cost effectiveness also adds to their appeal. The immense data storage requirements that are being warranted by the cloud mean that HDD is witnessing considerable traction in this space.

    There are other forms of memory on the horizon that will help to address the challenges that increasing storage demands will set. These range from higher capacity 3D stacked flash to completely new technologies, such as phase-change with its rapid write times and extensive operational lifespan. The advent of NVMe over fabrics (NVMf) based interfaces offers the prospect of high bandwidth, ultra-low latency SSD data storage that is at the same time extremely scalable. 

    Marvell was quick to recognize the ever-growing importance of data storage, has continued to make this sector a major focus, and has established itself as the industry’s leading supplier of both HDD controllers and merchant SSD controllers.

    Within a period of only 18 months after its release, Marvell managed to ship over 50 million of its 88SS1074 SATA SSD controllers with NANDEdge™ error-correction technology. Thanks to its award-winning 88NV11xx series of small form factor DRAM-less SSD controllers (based on a 28nm CMOS semiconductor process), the company is able to offer the market high performance NVMe memory controller solutions that are optimized for incorporation into compact, streamlined handheld computing equipment, such as tablet PCs and ultra-books. These controllers are capable of supporting read speeds of 1600MB/s, while only drawing minimal power from the available battery reserves. Marvell offers solutions like its 88SS1092 NVMe SSD controller designed for new compute models that enable the data center to share storage data to further maximize cost and performance efficiencies. 

    The unprecedented growth in data means that more storage will be required. Emerging applications and innovative technologies will drive new ways of increasing storage capacity, improving latency and ensuring security. Marvell is in a position to offer the industry a wide range of technologies to support data storage requirements, addressing both SSD and HDD implementations and covering all accompanying interface types from SAS and SATA through to PCIe and NVMe. Check out www.marvell.com to learn more about how Marvell is storing the world’s data.

  • January 11, 2018

    Ethernet Set to Bring About Radical Shift in How Automotive Networks are Implemented

    By Christopher Mash, Senior Director of Automotive Applications & Architecture, Marvell

    The in-vehicle networks currently used in automobiles are based on a combination of several different data networking protocols, some of which have been in place for decades. There is the controller area network (CAN), which takes care of the powertrain and related functions; the local interconnect network (LIN), which is predominantly used for passenger/driver comfort purposes that are not time sensitive (such as climate control, ambient lighting, seat adjustment, etc.); the media oriented system transport (MOST), developed for infotainment; and FlexRay™ for anti-lock braking (ABS), electronic power steering (EPS) and vehicle stability functions. 

    As a result of using different protocols, gateways are needed to transfer data within the infrastructure. The resulting complexity is costly for car manufacturers. It also affects vehicle fuel economy, since the wire harnessing needed for each respective network adds extra weight to the vehicle. The wire harness represents the third heaviest element of the vehicle (after the engine and chassis) and the third most expensive, too. Furthermore, these gateways have latency issues, something that will impact safety-critical applications where rapid response is required. 

    The number of electronic control units (ECUs) incorporated into cars is continuously increasing, with luxury models now often having 150 or more ECUs, and even standard models are now approaching 80-90 ECUs. At the same time, data intensive applications are emerging to support advanced driver assistance system (ADAS) implementation, as we move toward greater levels of vehicle autonomy. All this is causing a significant ramp in data rates and overall bandwidth, with the increasing deployment of HD cameras and LiDAR technology on the horizon. 

    As a consequence, the entire approach in which in-vehicle networking is deployed needs to fundamentally change, first in terms of the topology used and, second, with regard to the underlying technology on which it relies. 

    Currently, the networking infrastructure found inside a car is a domain-based architecture. There are different domains for each key function - one for body control, one for infotainment, one for telematics, one for powertrain, and so on. Often these domains employ a mix of different network protocols (e.g., with CAN, LIN and others being involved). 

    As network complexity increases, it is now becoming clear to automotive engineers that this domain-based approach is becoming less and less efficient. Consequently, in the coming years, there will need to be a migration away from the current domain-based architecture to a zonal one.

     A zonal arrangement means data from different traditional domains is connected to the same ECU, based on the location (zone) of that ECU in the vehicle. This arrangement will greatly reduce the wire harnessing required, thereby lowering weight and cost - which in turn will translate into better fuel efficiency. Ethernet technology will be pivotal in moving to zonal-based, in-vehicle networks. 

    In addition to the high data rates that Ethernet technology can support, Ethernet adheres to the universally recognized OSI communication model. It is a stable, long-established and well-understood technology that has already seen widespread deployment in the data communication and industrial automation sectors. Ethernet also has a well-defined development roadmap targeting additional speed grades, whereas other in-vehicle networking protocols, like CAN and LIN, are already reaching a stage where applications are starting to exceed their capabilities, with no clear upgrade path to alleviate the problem. 

    Future expectations are that Ethernet will form the foundation upon which all data transfer around the car will occur, providing a common protocol stack that reduces the need for gateways between different protocols (along with the hardware costs and the accompanying software overhead). The result will be a single homogeneous network throughout the vehicle in which all the protocols and data formats are consistent. It will mean that the in-vehicle network will be scalable, allowing functions that require higher speeds (10G for example) and ultra-low latency to be attended to, while also addressing the needs of lower speed functions. Ethernet PHYs will be selected according to the particular application and bandwidth demands - whether it is a 1Gbps device for transporting imaging sensing data, or one for 10Mbps operation, as required for the new class of low data rate sensors that will be used in autonomous driving. 
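    The PHY-selection step described above can be sketched as picking the lowest speed grade that covers a link's bandwidth requirement (an illustrative sketch: the grade list mixes the 10Mbps, 1Gbps and 10G speeds named in the text with an assumed 100Mbps intermediate grade, and the example bandwidth figures are invented):

```python
# Sketch: pick the lowest Ethernet speed grade that satisfies a link's
# bandwidth need. 100 Mbps is an assumed intermediate grade; the other
# speeds are those named in the text.

SPEED_GRADES_MBPS = [10, 100, 1000, 10000]

def select_phy(required_mbps):
    """Return the smallest speed grade >= the required bandwidth."""
    for grade in SPEED_GRADES_MBPS:
        if grade >= required_mbps:
            return grade
    raise ValueError("no single link fast enough")

print(select_phy(2))     # 10    -- low data rate sensor
print(select_phy(800))   # 1000  -- imaging sensor data
print(select_phy(4000))  # 10000 -- high-bandwidth backbone
```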

    Each Ethernet switch in a zonal architecture will be able to carry data for all the different domain activities. All the different data domains would be connected to local switches and the Ethernet backbone would then aggregate the data, resulting in a more effective use of the available resources and allowing different speeds to be supported, as required, while using the same core protocols. This homogenous network will provide ‘any data, anywhere’ in the car, supporting new applications through combining data from different domains available through the network. 

    Marvell is leading the way when it comes to the progression of Ethernet-based, in-vehicle networking and zonal architectures by launching, back in the summer of 2017, the AEC-Q100-compliant 88Q5050 secure Gigabit Ethernet switch for use in automobiles. This device not only deals with the OSI Layer 1-2 (physical layer and data layer) functions associated with standard Ethernet implementations, it also has functions located at OSI Layers 3, 4 and beyond (the network layer, transport layer and higher), such as deep packet inspection (DPI). This, in combination with Trusted Boot functionality, provides automotive network architects with key features vital in ensuring network security.

  • January 10, 2018

    Marvell Demonstrates Edge Computing by Extending Google Cloud to the Network Edge with Pixeom Edge Platform at CES 2018

    By Maen Suleiman, Senior Software Product Line Manager, Marvell

    The adoption of multi-gigabit networks and planned roll-out of next generation 5G networks will continue to create greater available network bandwidth as more and more computing and storage services get funneled to the cloud. Increasingly, applications running on IoT and mobile devices connected to the network are becoming more intelligent and compute-intensive. However, with so many resources being channeled to the cloud, there is strain on today’s networks. 

    Instead of following a conventional cloud centralized model, next generation architecture will require a much greater proportion of its intelligence to be distributed throughout the network infrastructure. High performance computing hardware (accompanied by the relevant software), will need to be located at the edge of the network. A distributed model of operation should provide the needed compute and security functionality required for edge devices, enable compelling real-time services and overcome inherent latency issues for applications like automotive, virtual reality and industrial computing. With these applications, analytics of high resolution video and audio content is also needed. 

    Through use of its high performance ARMADA® embedded processors, Marvell is able to demonstrate a highly effective solution that will facilitate edge computing implementation on the Marvell MACCHIATObin™ community board using the ARMADA 8040 system on chip (SoC). At CES® 2018, Marvell and Pixeom teams will be demonstrating a fully effective, but not costly, edge computing system using the Marvell MACCHIATObin community board in conjunction with the Pixeom Edge Platform to extend functionality of Google Cloud Platform™ services at the edge of the network. The Marvell MACCHIATObin community board will run Pixeom Edge Platform software that is able to extend the cloud capabilities by orchestrating and running Docker container-based micro-services on the Marvell MACCHIATObin community board. 

    Currently, the transmission of data-heavy, high resolution video content to the cloud for analysis purposes places a lot of strain on network infrastructure, proving to be both resource-intensive and also expensive. Using Marvell’s MACCHIATObin hardware as a basis, Pixeom will demonstrate its container-based edge computing solution which provides video analytics capabilities at the network edge. This unique combination of hardware and software provides a highly optimized and straightforward way to enable more processing and storage resources to be situated at the edge of the network. The technology can significantly increase operational efficiency levels and reduce latency. 

    The Marvell and Pixeom demonstration deploys Google TensorFlow™ micro-services at the network edge to enable a variety of different key functions, including object detection, facial recognition, text reading (for name badges, license plates, etc.) and intelligent notifications (for security/safety alerts). This technology encompasses the full scope of potential applications, covering everything from video surveillance and autonomous vehicles right through to smart retail and artificial intelligence.

    Pixeom offers a complete edge computing solution, enabling cloud service providers to package, deploy, and orchestrate containerized applications at scale, running on premise “Edge IoT Cores.” To accelerate development, Cores come with built-in machine learning, FaaS, data processing, messaging, API management, analytics, offloading capabilities to Google Cloud, and more.

    The MACCHIATObin community board uses Marvell’s ARMADA 8040 processor, which has a 64-bit ARMv8 quad-core processor (running at up to 2.0GHz) and supports up to 16GB of DDR4 memory and a wide array of different I/Os. Through use of Linux® on the Marvell MACCHIATObin board, the multifaceted Pixeom Edge IoT platform can facilitate implementation of edge computing servers (or cloudlets) at the periphery of the cloud network. Marvell will be able to show the power of this popular hardware platform to run advanced machine learning, data processing, and IoT functions as part of Pixeom’s demo. The role-based access features of the Pixeom Edge IoT platform also mean that developers situated in different locations can collaborate with one another in order to create compelling edge computing implementations. Pixeom supplies all the edge computing support needed to allow users of Marvell embedded processors to establish their own edge-based applications, thus offloading operations from the center of the network. 
Marvell will also be demonstrating the compatibility of its technology with the Google Cloud platform, which enables the management and analysis of deployed edge computing resources at scale. Here, once again, the MACCHIATObin board provides the hardware foundation needed by engineers, supplying them with all the processing, memory and connectivity required. 

    Those visiting Marvell’s suite at CES (Venetian, Level 3 - Murano 3304, 9th-12th January 2018, Las Vegas) will be able to see a series of different demonstrations of the MACCHIATObin community board running cloud workloads at the network edge. Make sure you come by!

  • January 10, 2018

    Moving the World’s Data

    By Marvell, PR Team

    The way in which data is moved via wireline and wireless connectivity is going through major transformations. The dynamics that are causing these changes are being seen across a broad cross section of different sectors. 

    Within our cars, the new features and functionality that are being incorporated mean that the traditional CAN and LIN based communication technology is no longer adequate. More advanced in-vehicle networking needs to be implemented which is capable of supporting multi-Gigabit data rates, in order to cope with the large quantities of data that high resolution cameras, more sophisticated infotainment, automotive radar and LiDAR will produce. With CAN, LIN and other automotive networking technologies not offering viable upgrade paths, it is clear that Ethernet will be the basis of future in-vehicle network infrastructure - offering the headroom needed as automobile design progresses towards the long term goal of fully autonomous vehicles. Marvell is already proving itself to be ahead of the game here, following the announcement of the industry’s first secure automotive gigabit Ethernet switch, which delivers the speeds now being required by today’s data-heavy automotive designs, while also ensuring secure operation is maintained and the threat of hacking or denial of service (DoS) attacks is mitigated. 

    Within the context of modern factories and processing facilities, the arrival of Industry 4.0 will allow greater levels of automation, through use of machine-to-machine (M2M) communication. This communication can enable the access of data — data that is provided by a multitude of different sensor nodes distributed throughout the site. The ongoing in-depth analysis of this data is designed to ultimately bring improvements in efficiency and productivity for the modern factory environment. Ethernet capable of supporting Gigabit data rates has shown itself to be the prime candidate and it is already experiencing extensive implementation. Not only will this meet the speed and bandwidth requirements needed, but it also has the robustness that is mandatory in such settings (dealing with high temperatures, ESD strikes, exposure to vibrations, etc.) and the low latency characteristics that are essential for real-time control/analysis. Marvell has developed highly sophisticated Gigabit Ethernet transceivers with elevated performance that are targeted at such applications. 

    Within data centers things are changing too, but in this case the criteria involved are somewhat different. Here it is more about how to deal with the large volumes of data involved, while keeping the associated capital and operational expenses in check. Marvell has been championing a more cost-effective and streamlined approach through its Prestera® PX Passive Intelligent Port Extender (PIPE) products. These give data center engineers a modular way to deploy network infrastructure that meets their specific requirements, rather than adding unnecessary layers of complexity that only raise cost and power consumption. The result is a fully scalable, more economical and energy-efficient solution. 

    In the wireless domain, ever greater pressure is being placed upon WLAN hardware - in the home, office, municipal and retail environments. As well as increasing user densities and overall data capacity, network operators and service providers need to address the changes now occurring in user behavior. Wi-Fi connectivity is no longer just about downloading data; increasingly, the uploading of data will be an important consideration too. This will be needed for a range of applications including augmented reality gaming, the sharing of HD video content and cloud-based creative activities. In order to address this, Wi-Fi technology will need to exhibit enhanced bandwidth capabilities on its uplink as well as its downlink. 

    The introduction of the much anticipated 802.11ax protocol is set to radically change how Wi-Fi is implemented. Not only will this allow far greater user densities to be supported (thereby meeting the coverage demands of places where large numbers of people are in need of Internet access, such as airports, sports stadia and concert venues), it also offers greater uplink/downlink data capacity - supporting multi-Gigabit operation in both directions. Marvell is looking to drive things forward via its portfolio of recently unveiled multi-Gigabit 802.11ax Wi-Fi system-on-chips (SoCs), which are the first in the industry to have orthogonal frequency-division multiple access (OFDMA) and multi-user MIMO operation on both the downlink and the uplink.  Check out www.marvell.com to learn more about how Marvell is moving the world’s data.

  • January 09, 2018

    Processing the World’s Data

    By Marvell, PR Team

    The data requirements of modern society are escalating at a relentless pace, with new paradigms changing the way data is processed. The rapidly rising volume of data now being uploaded to and downloaded from the cloud (such as HD video or equally data-intensive immersive gaming content) is putting incredible strain on existing network infrastructure - testing both the bandwidth and the data rates that can be supported. 

    The onset of augmented reality (AR) and virtual reality (VR) will require access to considerable processing power, but at the same time mandate extremely low latency levels, to prevent lag effects. The widespread roll-out of IoT infrastructure, connected cars, robotics and industrial automation systems, to name a few, will also have uncompromising processing and latency demands that are simply not in line with current network architectures. 

    Transporting data from the network edge back to centralized servers (and vice versa) takes time, and hence adds an unacceptable level of latency for certain applications. All this means that fundamental changes need to be made. Rather than having all processing resources located at the center of the network, a more distributed model is going to be needed in the future. Though the role of centralized servers will unquestionably still be important, they will be complemented by remote servers located at the edge of the network - closer to the users themselves, thereby mitigating latency, which is critical for time-sensitive data. 
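The latency argument can be made concrete with a back-of-the-envelope calculation; the distances and the 20 ms application budget below are assumed, illustrative numbers.

```python
# Propagation-delay-only round-trip estimate; queuing and processing
# delays are ignored, so real latencies would be higher still.
SPEED_IN_FIBER_KM_PER_MS = 200  # light covers roughly 200 km/ms in fiber

def min_rtt_ms(distance_km: float) -> float:
    # Round trip: there and back again.
    return 2 * distance_km / SPEED_IN_FIBER_KM_PER_MS

cloud_rtt = min_rtt_ms(2000)  # distant centralized data center (assumed)
edge_rtt = min_rtt_ms(10)     # server at the network edge (assumed)

print(f"cloud: {cloud_rtt:.1f} ms, edge: {edge_rtt:.1f} ms")
# An AR/VR application with a ~20 ms motion-to-photon budget cannot
# spend its entire budget on propagation alone.
assert cloud_rtt >= 20 > edge_rtt
```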

    The figures on this speak for themselves. It is estimated that by 2020, approximately 45% of fog computing-generated data will be stored, processed, analyzed and subsequently acted upon either close to or at the edge of the network. Running in tandem with this, data centers will look to start utilizing in-storage processing. Here, in order to alleviate CPU congestion levels and mitigate network latency, data processing resources are going to start being placed closer to the storage drive. This, as a result, will dispense with the need to continuously transfer large quantities of data to and from storage reserves so that it can be processed, with processing tasks instead taking place inside the storage controller. 
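The data-movement saving behind in-storage processing can be illustrated with a toy filter query; all the sizes and the match rate below are assumed numbers chosen for illustration.

```python
# Toy comparison of bytes crossing the storage interface when a filter
# runs on the host CPU versus inside the storage controller.
RECORD_SIZE = 4096          # bytes per record (assumed)
TOTAL_RECORDS = 1_000_000   # records scanned by the query (assumed)
MATCH_RATE = 0.01           # fraction of records the query keeps (assumed)

# Host-side processing: every record crosses the interface, then is filtered.
host_bytes_moved = TOTAL_RECORDS * RECORD_SIZE

# In-storage processing: the controller filters first and ships only matches.
in_storage_bytes_moved = int(TOTAL_RECORDS * MATCH_RATE) * RECORD_SIZE

print(host_bytes_moved // in_storage_bytes_moved)  # prints 100
```

Under these assumptions, filtering inside the storage controller moves one hundredth of the data across the interface, which is exactly the congestion relief the paragraph above describes.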

    The transition from traditional data centers to edge-based computing, along with the onset of in-storage processing, will call for a new breed of processor devices. In addition to delivering the operational performance that high throughput, low latency applications will require, these devices will also need to meet the power, cost and space constraints that are going to characterize edge deployment. 

    Through the highly advanced portfolio of ARMADA® Arm-based multi-core embedded processors, Marvell has been able to supply the industry with processing solutions that can help engineers in facing the challenges that have just been outlined. These ICs combine high levels of integration, elevated performance and low power operation. Using ARMADA as a basis, the company has worked with technology partners to co-develop the MACCHIATObin™ and ESPRESSObin® community boards. The Marvell community boards, which each use 64-bit ARMADA processors, bring together a high-performance single-board computing platform and open source software for developers and designers working with a host of networking, storage and connectivity applications. They give users both the raw processing capabilities and the extensive array of connectivity options needed to develop proprietary edge computing applications from the ground up. 

    Incorporating a total of 6 MACCHIATObin boards plus a Marvell high density Prestera DX 14 port, 10 Gigabit Ethernet switch IC, the NFV PicoPod from PicoCluster is another prime example of ARMADA technology in action. This ultra-compact unit provides engineers with a highly cost effective and energy efficient platform upon which they can implement their own virtualized network applications. Fully compliant with the OPNFV Pharos specification, it opens up the benefits of NFV technology to a much broader cross section of potential customers, allowing everyone from the engineering teams in large enterprises all the way down to engineers who are working solo to rapidly develop, verify and deploy virtual network functions (VNFs) - effectively providing them with their own ‘datacenter on desktop’. 

    The combination of Marvell IoT enterprise edge gateway technology with the Google Cloud IoT Core platform is another way via which greater intelligence is being placed at the network periphery. The upshot of this will be that the estimated tens of billions of connected IoT nodes that will be installed over the course of the coming years can be managed in the most operationally efficient manner, offloading much of the workload from the core network’s processing capabilities and only utilizing them when it is completely necessary. Check out www.marvell.com to learn more about how Marvell is processing the world’s data.  

  • December 13, 2017

    The Marvell NVMe DRAM-less SSD Controller Proves Victorious at the 2017 ACE Awards

    By Sander Arts, Interim VP of Marketing, Marvell

    Key representatives of the global technology sector gathered at the San Jose Convention Center last week to hear the recipients of this year’s Annual Creativity in Electronics (ACE) Awards announced. This prestigious awards event, organized in conjunction with leading electronics engineering magazines EDN and EE Times, highlights the most innovative products announced in the last 12 months, as well as recognizing visionary executives and the most promising new start-ups. A panel made up of the editorial teams of these magazines, plus several highly respected independent judges, was involved in selecting the winner in each category.

    The 88NV1160 high performance controller for non-volatile memory express (NVMe), which was introduced by Marvell earlier this year, fought off tough competition from companies like Diodes Inc. and Cypress Semiconductor to win the coveted Logic/Interface/Memory category. Marvell gained two further nominations at the awards - with the 98PX1012 Prestera PX Passive Intelligent Port Extender (PIPE) also featured in the Logic/Interface/Memory category, while the 88W8987xA automotive wireless combo SoC was among those cited in the Automotive category. 

    Designed for inclusion in the next generation of streamlined portable computing devices (such as high-end tablets and ultra-books), the 88NV1160 NVMe solid-state drive (SSD) controllers are able to deliver 1600MB/s read speeds while simultaneously keeping the power consumption required for such operations extremely low (<1.2W). Based on a 28nm low power CMOS process, each of these controller ICs has a dual core 400MHz Arm® Cortex®-R5 processor embedded into it. 

    Through incorporation of a host memory buffer, the 88NV1160 exhibits far lower latency than competing devices. It is this that is responsible for accelerating the read speeds supported. By utilizing its embedded SRAM, the controller does not need to rely on an external DRAM memory - thereby simplifying the memory controller implementation. As a result, there is a significant reduction in the board space required, as well as a lowering of the overall bill-of-materials costs involved. 

    The 88NV1160’s proprietary NANDEdge™ low density parity check error-correction functionality raises SSD endurance and makes sure that long term system reliability is upheld throughout the end product’s entire operational lifespan. The controller’s built-in 256-bit AES encryption engine ensures that stored metadata is safeguarded from potential security breaches. Furthermore, these DRAM-less ICs are very compact, thus enabling multiple-chip package integration to be benefitted from. 

    Consumers now expect their portable electronics equipment to possess far more computing resources, so that they can access the exciting array of new software apps becoming available - making use of cloud-based services, enjoying augmented reality and gaming. At the same time, such equipment needs to support longer periods between battery recharges, so as to further enhance the user experience. This calls for advanced ICs combining strong processing capabilities with improved power efficiency, and that is where the 88NV1160 comes in.

    "We're excited to honor this robust group for their dedication to their craft and efforts in bettering the industry for years to come," said Nina Brown, Vice President of Events at UBM Americas. "The judging panel was given the difficult task of selecting winners from an incredibly talented group of finalists and we'd like to thank all of those participants for their amazing work and also honor their achievements. These awards aim to shine a light on the best in today's electronics realm and this group is the perfect example of excellence within both an important and complex industry."  

  • November 28, 2017

    Keeping it Real: Innovative New Product Based on Marvell ESPRESSObin Platform Enables Physical Ports to be Added to Modern Virtual Networks

    By Maen Suleiman, Senior Software Product Line Manager, Marvell

    A number of emerging companies that serve the networking and data storage sectors are increasingly using Marvell’s popular community board – the Marvell ESPRESSObin® platform – in their product offerings. ZeroTier Edge is the latest appliance to be added to what is an ever growing list of such product offerings. 

    With this new product, Irvine-based start-up ZeroTier is looking to make the wide area network (WAN) much more local. According to ZeroTier, by using ZeroTier Edge it is possible to create secure and robust LANs that can connect a broad array of different devices across multiple locations. This means that a broader range of equipment will now be able to gain access to virtual network infrastructure as it continues to be rolled out, without the associated software element needing to be installed. This feature overcomes current obstacles that are holding back more widespread use of such connectivity. For example, for some legacy equipment (office peripherals, building automation systems, surveillance cameras, industrial control mechanisms, etc.), installing this software simply isn’t an option; in other cases (such as where a large number of computers are involved), it is just impractical. Furthermore, using ZeroTier Edge mitigates the serious security issues that installing software onto a multitude of connected devices could potentially raise. 

    Relying on Marvell’s ARMADA® system-on-chip (SoC) technology and open source software, the ZeroTier Edge is a compact and highly versatile unit that can be located on a desktop and addresses a plethora of software-defined networking applications. This unit delivers enterprise-grade VPN, SD-WAN and network virtualization functionality. 

    ZeroTier Edge basically acts as a pre-configured layer 2 bridge that provides the physical ports (both wired and wireless) needed to enable hardware (like the examples set forth above) to connect with virtualized networks. Its ease of use means that this unit can even be installed by non-IT staff. As a result, ZeroTier is able to offer enterprise customers a unique plug-and-play solution such that they can get the full benefit of software-defined networking without needing to implement the complex and costly bridging arrangements that would otherwise be required. 

    Each ZeroTier Edge unit incorporates a Marvell ESPRESSObin single board computing platform that has been purpose built for supporting open source development activity of this kind within the networking space. The board features a high performance ARMADA 3700 dual core 64-bit ARM®-based processor that is capable of running at speeds of 1.2GHz. This IC allows the ZeroTier Edge to deal with up to 1Gbps of incoming/outgoing encrypted data traffic. 

    Through the Marvell ESPRESSObin board, ZeroTier Edge can also take advantage of extensive I/O capabilities, with 3x Gigabit Ethernet ports, a USB 3.0 SuperSpeed interface, plus dual band 802.11ac Wi-Fi®, SATA (for connection to network data storage resources) and mini PCIe.  1GByte of on-board DRAM memory and 4GBytes of flash memory are supported, too, with provision for attaching additional memory capacity using the SD card slot. There are also ample GPIO pins available. 

    Thanks to the Marvell ESPRESSObin board’s ability to provide strong operational performance at an attractive price point, implementing ZeroTier Edge into customers’ networks doesn’t require a heavy investment. The product is currently going through the crowdfunding process and has already gained over 90% of its target figure. The initial units are expected to start shipping in early 2018. 

    For more information on ZeroTier Edge and the opportunity to support the project, visit: https://www.indiegogo.com/projects/zerotier-edge-open-source-enterprise-vpn-sd-wan#/

  • November 08, 2017

    Redefining the Connected Home

    By Sree Durbha, Head of Smart-Connected Business, Marvell

    The concept of a fully ‘connected home’ has been discussed for more than 20 years. However, widespread proliferation has taken far longer than anyone could have originally imagined. For a long time, deployment activity seemed to be limited to a relatively small number of high value installations. These installations were generally complicated to implement and their operation was not very user-friendly. Most importantly, they were composed of an amalgamation of isolated subsystems from different suppliers rather than a single universal system. 

    Even as home automation started to become accessible from smartphones and tablets, market fragmentation meant that each aspect of the automation technology installed within a home was still based on its own proprietary mechanism that needed a separate app to control it. As a result, home automation systems have often proven inconvenient and frustrating for those operating them, which has unquestionably held back their adoption by consumers. The industry fragmentation and lack of interoperability between different vendor ecosystems meant that the consumer couldn’t really take advantage of the connected capabilities of all the various platforms. 

    The industry is innovating with solutions that seem finally likely to help broaden the appeal of home automation and accelerate its future progression. Through its HomeKit™ technology, Apple is looking to consolidate all the various verticals under a single, comprehensive home automation ecosystem that works together easily and securely. The HomeKit Accessory Protocol (HAP) is enabling hardware from different suppliers involved in home automation to communicate with Apple products (iPhone, iPad, Apple Watch) via a single, consistent, complete platform. This is done via wireless technologies like Bluetooth® Low Energy technology, as well as IP connectivity. The list of different ‘behaviors’ covered by the HomeKit hardware and software technology is extensive. Selecting a playlist for the audio system, turning on the lights in a particular room, remotely starting up home appliances (such as a washer/dryer), adjusting the heating and cooling, and activating the door entry system are just a few examples. But, because all of these functions are controlled via the Apple Home app or by asking Siri (rather than multiple apps), they can now work in tandem. For instance, settings can be configured so that if the curtains in a room were drawn, then the lighting would simultaneously turn on, or the ambient lighting could be changed to fit a certain music playlist. 

    Marvell is placing itself at the forefront of next generation smart home development through its support of Apple HomeKit. Our family of wireless SoC devices was the first in the industry to secure certification for the original HAP specification three years ago and has remained at the forefront ever since, as evidenced by our support for the latest HomeKit Accessory Protocol Release 9 (HAP R9) specification. The low power 88MW30x ICs each possess an integrated microcontroller with a Cortex®-M4 processing core, plus single-band IEEE 802.11n Wi-Fi® functionality. The truly transformational change this time is our SoCs’ certification for iCloud implementation, which enables remote control of HomeKit-compliant devices by voice as well as through the Apple Home app, using iCloud® remote access. This means that OEMs serving the home automation market will be able to make their systems much more streamlined and convenient to implement through iCloud. As a result, new use cases are now possible. For example, you can remotely start your thermostat to heat or cool your home using the Apple Home app (or Siri® voice control) while you are still on your way home from work, and have the right temperature set for when you arrive. 

    This technology is showcased in the Marvell® EZ-Connect® HAP software development kit (SDK), which is designed to facilitate the implementation of HomeKit-enabled home automation accessories - accelerating our OEM customers’ design cycles and allowing products to be brought to market more quickly. Complementing its 802.11n wireless connectivity, the incorporated bridging functionality also allows interfacing with equipment using other RF protocols like Bluetooth low energy technology. For example, Marvell has partnered with a leading Bluetooth low energy vendor to offer a combo module reference design that is commercially available today through one of our module vendor partners, Azurewave. Our emphasis on security, encryption and memory partitioning allows secure, over-the-air firmware upgrades so that customer applications can run securely from external Flash memory while being encrypted on the fly. Our SDK also supports Amazon’s popular AWS cloud platform and Google’s Weave/Cloud as alternatives. To accompany the SDK, Marvell intends to provide OEMs with all the collateral necessary to get their products through the HomeKit certification process as rapidly and painlessly as possible and into the market quickly. Useful project examples are also provided. 

    Marvell understands how crucially important a robust software solution is to enable a hassle free home automation user experience and has developed industry leading software capabilities in support of Apple HomeKit. This has allowed us to get ahead of the game.

  • November 06, 2017

    The USR-Alliance – Enabling an Open Multi-Chip Module (MCM) Ecosystem

    By Gidi Navon, Senior Principal Architect, Marvell

    The semiconductor industry is witnessing exponential growth and rapid changes to its bandwidth requirements, as well as increasing design complexity, emergence of new processes and integration of multi-disciplinary technologies. All this is happening against a backdrop of shorter development cycles and fierce competition. Other technology-driven industry sectors, such as software and hardware, are addressing similar challenges by creating open alliances and open standards. This blog does not attempt to list all the open alliances that now exist -- the Open Compute Project, Open Data Path and the Linux Foundation are just a few of the most prominent examples. One technological area that still hasn’t embraced such open collaboration is the Multi-Chip Module (MCM), where multiple semiconductor dies are packaged together, thereby creating a combined system in a single package. 

    The MCM concept has been around for a while, generating multiple technological and market benefits, including:

    • Improved yield - Instead of creating large monolithic dies with low yield and higher cost (which sometimes cannot even be fabricated), splitting the silicon into multiple dies can significantly improve the yield of each building block and of the combined solution. Better yield in turn translates into lower costs.
    • Optimized process - The final MCM product mixes and matches units built in different fabrication processes, which enables the process to be optimized for specific IP blocks with similar characteristics.
    • Multiple fabrication plants - Different fabs, each with its own unique capabilities, can be utilized to create a given product.
    • Product variety - New products are easily created by combining different numbers and types of devices to form innovative and cost‑optimized MCMs.
    • Short product cycle time - Dies can be upgraded independently, which promotes ease in the addition of new product capabilities and/or the ability to correct any issues within a given die. For example, integrating a new type of I/O interface can be achieved without having to re-spin other parts of the solution that are stable and don’t require any change (thus avoiding waste of time and money).
    • Economy of scale - Each die can be reused in multiple applications and products, increasing its volume and yield as well as the overall return on the initial investment made in its development.

    Sub-dividing large semiconductor devices and mounting them on an MCM has now become the new printed circuit board (PCB) - providing smaller footprint, lower power, higher performance and expanded functionality. 

    Now, imagine that the benefits listed above are not confined to a single chip vendor, but instead are shared across the industry as a whole. By opening and standardizing the interface between dies, it is possible to introduce a true open platform, wherein design teams in different companies, each specializing in different technological areas, are able to create a variety of new products beyond the scope of any single company in isolation. 

    This is where the USR Alliance comes into action. The alliance has defined an Ultra Short Reach (USR) link, optimized for communication across the very short distances between the components contained in a single package. This link provides high bandwidth with less power and smaller die size than existing very short reach (VSR) PHYs which cross package boundaries and connectors and need to deal with challenges that simply don’t exist inside a package. The USR PHY is based on a multi-wire differential signaling technique optimized for MCM environments. 

    There are many applications in which the USR link can be implemented. Examples include CPUs, switches and routers, FPGAs, DSPs, analog components and a variety of long reach electrical and optical interfaces. Figure 1: Example of a possible MCM layout 

    Marvell is an active promoter member of the USR Alliance and is working to create an ecosystem of interoperable components, interconnects, protocols and software that will help the semiconductor industry bring more value to the market.  The alliance is working on creating PHY, MAC and software standards and interoperability agreements in collaboration with the industry and other standards development organizations, and is promoting the development of a full ecosystem around USR applications (including certification programs) to ensure widespread interoperability. 

    To learn more about the USR Alliance visit: www.usr-alliance.org

  • October 26, 2017

    Marvell Demonstrates Powerful Security Software & Implementation Support at OpenWrt Summit via Collaboration with Sentinel & Sartura

    By Maen Suleiman, Senior Software Product Line Manager, Marvell

    Thanks to its collaboration with leading players in the OpenWrt and security space, Marvell will be able to show those attending the OpenWrt Summit (Prague, Czech Republic, 26-27th October) new beneficial developments with regard to its Marvell ARMADA® multi-core processors. In collaboration with contributors Sartura and Sentinel, these developments will be demonstrated on Marvell’s portfolio of networking community boards that support the 64-bit Arm® based Marvell ARMADA processor devices, by running the increasingly popular and highly versatile OpenWrt operating system, plus the latest advances in security software. We expect these new offerings will assist engineers in mitigating the major challenges they face when constructing next-generation customer-premises equipment (CPE) and uCPE platforms. 

    On display at the event at both the Sentinel and Sartura booths will be examples of the Marvell MACCHIATObin™ board (with a quad-core ARMADA 8040 that can deliver up to 2GHz operation) and the Marvell ESPRESSObin™ board (with a dual-core ARMADA 3700 lower power processor running at 1.2GHz).

    The boards located at the Sartura booth will demonstrate the open source OpenWrt offering for the Marvell MACCHIATObin/ESPRESSObin platforms and will show how engineers can benefit from the company’s OpenWrt integration capabilities. These capabilities have proven invaluable in helping engineers expedite their development projects and fully realize the goals initially set for them. The Sartura team can take engineers’ original CPE designs incorporating ARMADA and provide the production-level software needed for inclusion in end products. 

    Marvell will also have MACCHIATObin/ESPRESSObin boards demonstrated at the Sentinel booth. These will feature highly optimized security software. Using this security software, companies looking to employ ARMADA based hardware in their designs will be able to ensure that they have ample protection against the threat posed by malware and harmful files - like WannaCry and Nyetya ransomware, as well as Petya malware, etc. This protection relies upon Sentinel’s File Validation Service (FVS), which inspects all HTTP, POP and IMAP files as they pass through the device toward the client. Any files deemed to be malicious are then blocked. This security technology is very well suited to CPE networking infrastructure and edge computing, as well as IoT deployments. Sentinel’s FVS technology can also be implemented on vCPE/uCPE as a security virtual network function (VNF), in addition to native implementation over physical CPEs - providing similar protection levels due to its extremely lightweight architecture and very low latency. FVS is responsible for identifying download requests and subsequently analyzing the data being downloaded. This software package can run on all Linux-based embedded operating systems for CPE and NFV devices which meet minimum hardware requirements and offer the necessary features. 
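Conceptually, a file-validation service of this kind fingerprints content in transit and blocks anything on a known-bad list before it reaches the client. The sketch below is a generic hash-blocklist illustration only; it does not represent Sentinel's actual FVS implementation, and the "malware" sample is made up.

```python
# Generic sketch of hash-based file validation at a gateway.
import hashlib

# Blocklist of SHA-256 digests of known-bad files (made-up example entry).
KNOWN_BAD_SHA256 = {
    hashlib.sha256(b"malicious payload").hexdigest(),
}

def validate_download(data: bytes) -> bool:
    # True means the file may be passed through to the client.
    return hashlib.sha256(data).hexdigest() not in KNOWN_BAD_SHA256

assert validate_download(b"harmless document")
assert not validate_download(b"malicious payload")
```

A production service would combine such exact-match checks with richer analysis; the hash lookup here only illustrates the blocking decision point that sits between the download request and the client.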

    Through collaborations such as those described above, Marvell is building an extensive ecosystem around its ARMADA products. As a result, Marvell will be able to support future development of secure, high performance CPE and uCPE/vCPE systems that exhibit much greater differentiation.

  • October 20, 2017

    Long-Term Prospects for Ethernet in the Automotive Sector

    By Tim Lau, Senior Director Automotive Product Management, Marvell

    The automobile is encountering possibly the biggest changes in its technological progression since the invention of the internal combustion engine nearly 150 years ago. Increasing levels of autonomy will reshape how we think about cars and car travel. It won't be just a matter of getting from point A to point B while doing very little else -- we will be able to keep on doing what we want while in the process of getting there. 

    As it is, the modern car already incorporates large quantities of complex electronics - making sure the ride is comfortable, the engine runs smoothly and efficiently, and providing infotainment for the driver and passengers. In addition, the features and functionality being incorporated into vehicles we are now starting to buy are no longer of a fixed nature. It is increasingly common for engine control and infotainment systems to require updates over the course of the vehicle's operational lifespan. 

    Such an update is the one issue that proved instrumental in first bringing Ethernet connectivity into the vehicle domain. Leading automotive brands, such as BMW and VW, found they could dramatically increase the speed of uploads performed by mechanics at service centers by installing small Ethernet networks into the chassis of their vehicle models instead of trying to use the established, but much slower, Controller Area Network (CAN) bus. As a result, transfer times were cut from hours to minutes. 

    As an increasing number of upgradeable Electronic Control Units (ECUs) have appeared (thereby putting greater strain on existing in-vehicle networking technology), the Ethernet network has itself expanded. In response, the semiconductor industry has developed solutions that have made the networking standard, which was initially developed for the relatively electrically clean environment of the office, much more robust and suitable for the stringent requirements of automobile manufacturers. The CAN and Media Oriented Systems Transport (MOST) buses have persisted as the main carriers of real-time information for in-vehicle electronics - although, now, they are beginning to fade as Ethernet evolves into a role as the primary network inside the car, being used for both real-time communications and updating tasks. 

    In an environment where weight savings are crucial to improving fuel economy, the ability to run all communications over a single network (especially one that needs just a pair of relatively light copper cables) is a huge operational advantage. In addition, a small connector footprint is vital in the context of the increasing deployment of sensors (such as cameras, radar and LiDAR transceivers), which are now being mounted all around the car for driver assistance/semi-autonomous driving purposes. This is supported by the adoption of unshielded, twisted-pair cabling. 

    Image sensing, radar and LiDAR functions will all produce copious amounts of data. So data-transfer capacity is going to be a critical element of in-vehicle Ethernet networks, now and into the future. The industry has responded quickly by first delivering 100 Mbit/s transceivers and following up with more capacious standards-compliant 1000 Mbit/s offerings. 

    But providing more bandwidth is simply not enough on its own. So that car manufacturers do not need to sacrifice the real-time behavior necessary for reliable control, the relevant international standards committees have developed protocols to guarantee the timely delivery of data. Time Sensitive Networking (TSN) provides applications with the ability to use reserved bandwidth on virtual channels in order to ensure delivery within a predictable timeframe. Less important traffic can make use of the best-effort service of conventional Ethernet with the remaining unreserved bandwidth. 

    The industry’s more forward-thinking semiconductor vendors, Marvell among them, have further enhanced real-time performance with features such as Deep Packet Inspection (DPI), employing Ternary Content-Addressable Memory (TCAM), in their automotive-optimized Ethernet switches. The DPI mechanism makes it possible for hardware to look deep into each packet as it arrives at a switch input and instantly decide exactly how the message should be handled. The packet inspection supports real-time debugging processes by trapping messages of a certain type, and markedly reduces application latency experienced within the deployment by avoiding processor intervention. 
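    As an illustration of the TCAM matching principle (not Marvell’s actual implementation), a ternary rule pairs a match value with a mask whose zero bits mean "don’t care", so one rule can classify a whole family of packets in a single lookup. A minimal sketch in Python, with entirely hypothetical rules and actions:

```python
# Hypothetical sketch of TCAM-style ternary matching, as used for Deep
# Packet Inspection in a switch. Each rule is (value, mask, action);
# mask bits of 0 are "don't care". Rules and actions are illustrative.

def tcam_match(packet_bytes, rules):
    """Return the action of the first rule whose (value, mask) matches."""
    for value, mask, action in rules:
        if all((b & m) == (v & m) for b, v, m in zip(packet_bytes, value, mask)):
            return action
    return "forward"  # default: normal best-effort forwarding

# Example: trap diagnostic frames whose first byte is 0xAB (second byte
# is "don't care"); drop frames starting with 0xDE 0xAD.
rules = [
    (b"\xAB\x00", b"\xFF\x00", "trap_to_debug"),
    (b"\xDE\xAD", b"\xFF\xFF", "drop"),
]

print(tcam_match(b"\xAB\x42", rules))  # trap_to_debug
print(tcam_match(b"\x01\x02", rules))  # forward
```

    In real hardware all rules are evaluated in parallel in a single clock cycle, which is what lets the switch make these decisions without processor intervention.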

    Support for remote management frames is another significant protocol innovation in automotive Ethernet. These frames make it possible for a system controller to control the switch state directly. For example, a system controller can automatically power down I/O ports when they are not needed - a feature that preserves precious battery life. 

    The result of these adaptations to the core Ethernet standard, as well as the increased resilience it now delivers, is the emergence of an expansive feature set that is well positioned for the ongoing transformation of the car, taking it from just being a mode of transportation into the data-rich, autonomous mobile platform it is envisaged to become in the future.    

  • October 19, 2017

    Celebrating 20 Years of Wi-Fi - Part III

    By Prabhu Loganathan, Senior Director of Marketing for Connectivity Business Unit, Marvell

    Standardized in 1997, Wi-Fi has changed the way that we compute. Today, almost every one of us uses a Wi-Fi connection on a daily basis, whether it's for watching a show on a tablet at home, using our laptops at work, or even transferring photos from a camera. Millions of Wi-Fi-enabled products are being shipped each week, and it seems this technology is constantly finding its way into new device categories. 

    Since its humble beginnings, Wi-Fi has progressed at a rapid pace. While the initial standard allowed for just 2 Mbit/s data rates, today's Wi-Fi implementations allow speeds on the order of Gigabits to be supported. This last installment in our three-part blog series covering the history of Wi-Fi will look at what is next for the wireless standard. 

    Gigabit Wireless


    The latest 802.11 wireless technology to be adopted at scale is 802.11ac. It extends 802.11n, enabling improvements specifically in the 5 GHz band, with 802.11n technology used in the 2.4 GHz band for backwards compatibility. 

    By sticking to the 5 GHz band, 802.11ac is able to benefit from a huge 160 MHz channel bandwidth, which would be impossible in the already crowded 2.4 GHz band. In addition, beamforming and support for up to 8 MIMO streams raise the speeds that can be supported. Depending on configuration, data rates can range from a minimum of 433 Mbit/s to multiple Gigabits in cases where both the router and the end-user device have multiple antennas. 

    If that's not fast enough, the even more cutting edge 802.11ad standard (which is now starting to appear on the market) uses 60 GHz ‘millimeter wave’ frequencies to achieve data rates up to 7 Gbit/s, even without MIMO propagation. The major catch with this is that at 60 GHz frequencies, wireless range and penetration are greatly reduced. 

    Looking Ahead


    Now that we've achieved Gigabit speeds, what's next? Besides high speeds, the IEEE 802.11 working group has recognized that low speed, power efficient communication is in fact also an area with a great deal of potential for growth. While Wi-Fi has traditionally been a relatively power-hungry standard, the upcoming protocols will have attributes that will allow it to target areas like the Internet of Things (IoT) market with much more energy efficient communication. 

    20 Years and Counting


    Although it has been around for two whole decades as a standard, Wi-Fi has managed to constantly evolve and keep up with the times. From the dial-up era to broadband adoption, to smartphones and now as we enter the early stages of IoT, Wi-Fi has kept on developing new technologies to adapt to the needs of the market. If history can be used to give us any indication, then it seems certain that Wi-Fi will remain with us for many years to come.

  • October 17, 2017

    Unleashing the Potential of Flash Storage with NVMe

    By Jeroen Dorgelo, Director of Strategy, Storage Group, Marvell

    The dirty little secret of flash drives today is that many of them are running on yesterday’s interfaces. While SATA and SAS have undergone several iterations since they were first introduced, they are still based on decades-old concepts and were initially designed with rotating disks in mind. These legacy protocols are bottlenecking the potential speeds possible from today’s SSDs. 

    NVMe is the latest storage interface standard designed specifically for SSDs. With its massively parallel architecture, it enables the full performance capabilities of today’s SSDs to be realized. Because of price and compatibility, NVMe has taken a while to see uptake, but now it is finally coming into its own. 

    Serial Attached Legacy 

    Currently, SATA is the most common storage interface. Whether for a hard drive or increasingly common flash storage, chances are it is running through a SATA interface. The latest generation of SATA - SATA III - has a 600 MB/s bandwidth limit. While this is adequate for day-to-day consumer applications, it is not enough for enterprise servers. Even I/O intensive consumer use cases, such as video editing, can run into this limit. 

    The SATA standard was originally released in 2000 as a serial-based successor to the older parallel PATA standard. SATA uses the Advanced Host Controller Interface (AHCI), which has a single command queue with a depth of 32 commands. This command queuing architecture is well-suited to conventional rotating disk storage, though more limiting when used with flash. 

    Whereas SATA is the standard storage interface for consumer drives, SAS is much more common in the enterprise world. Released originally in 2004, SAS is also a serial replacement for an older parallel standard, SCSI. Designed for enterprise applications, SAS storage is usually more expensive to implement than SATA, but it has significant advantages over SATA for data center use - such as longer cable lengths, multipath IO, and better error reporting. SAS also has a higher bandwidth limit of 1200 MB/s. 

    Just like SATA, SAS has a single command queue, although the queue depth of SAS goes to 254 commands instead of 32. While the larger command queue and higher bandwidth limit make it better performing than SATA, SAS is still far from being the ideal flash interface. 

    NVMe - Massive Parallelism 

    Introduced in 2011, NVMe was designed from the ground up for addressing the needs of flash storage. Developed by a consortium of storage companies, its key objective is specifically to overcome the bottlenecks on flash performance imposed by SATA and SAS. 

    Whereas SATA is restricted to 600 MB/s and SAS to 1200 MB/s (as mentioned above), NVMe runs over the PCIe bus and its bandwidth is theoretically limited only by the PCIe bus speed. With current PCIe standards providing 1 GB/s or more per lane, and PCIe connections generally offering multiple lanes, bus speed almost never represents a bottleneck for NVMe-based SSDs. 
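    To make the bandwidth gap concrete, here is a rough calculation using the figures quoted above; the x4 link width is an assumption (a common M.2 NVMe configuration), not something stated in the text:

```python
# Rough bandwidth comparison using the figures quoted in the text:
# SATA III at 600 MB/s, SAS at 1200 MB/s, and NVMe over PCIe at roughly
# 1 GB/s per lane. The x4 link width is an illustrative assumption.

SATA_MBPS = 600
SAS_MBPS = 1200
PCIE_PER_LANE_MBPS = 1000  # ~1 GB/s per lane, per the text
lanes = 4                   # common M.2 NVMe link width (assumed)

nvme_mbps = PCIE_PER_LANE_MBPS * lanes
print(f"NVMe x{lanes}: {nvme_mbps} MB/s "
      f"({nvme_mbps / SATA_MBPS:.1f}x SATA, {nvme_mbps / SAS_MBPS:.1f}x SAS)")
```

    Even with these conservative per-lane numbers, a modest x4 link leaves both legacy interfaces far behind.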

    NVMe is designed to deliver massive parallelism, offering 64,000 command queues, each with a queue depth of 64,000 commands. This parallelism fits in well with the random access nature of flash storage, as well as the multi-core, multi-threaded processors in today’s computers. NVMe’s protocol is streamlined, with an optimized command set that does more in fewer operations compared to AHCI. IO operations often need fewer commands than with SATA or SAS, allowing latency to be reduced. For enterprise customers, NVMe also supports many enterprise storage features, such as multi-path IO and robust error reporting and management. 
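    The scale of this parallelism is easy to quantify from the figures above. A quick back-of-envelope calculation (illustrative only; real drives expose far fewer queues than the protocol maximum):

```python
# Back-of-envelope comparison of maximum outstanding commands per drive,
# using the queue counts and depths given in the text.

interfaces = {
    "SATA (AHCI)": {"queues": 1,      "depth": 32},
    "SAS":         {"queues": 1,      "depth": 254},
    "NVMe":        {"queues": 64_000, "depth": 64_000},
}

for name, cfg in interfaces.items():
    outstanding = cfg["queues"] * cfg["depth"]
    print(f"{name:12s} up to {outstanding:,} commands in flight")
```

    The difference is roughly seven orders of magnitude, which is why NVMe maps so naturally onto multi-core hosts issuing many IOs concurrently.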

    Pure speed and low latency, plus the ability to deal with high IOPs have made NVMe SSDs a hit in enterprise data centers. Companies that particularly value low latency and high IOPs, such as high-frequency trading firms and  database and web application hosting companies, have been some of the first and most avid endorsers of NVMe SSDs. 

    Barriers to Adoption 

    While NVMe is high performance, historically speaking it has also been considered relatively high cost. This cost has negatively affected its popularity in the consumer-class storage sector. Relatively few operating systems supported NVMe when it first came out, and its high price made it less attractive for ordinary consumers, many of whom could not fully take advantage of its faster speeds anyway. 

    However, all this is changing. NVMe prices are coming down and, in some cases, achieving price parity with SATA drives. This is due not only to market forces but also to new innovations, such as DRAM-less NVMe SSDs. 

    As DRAM is a significant bill of materials (BoM) cost for SSDs, DRAM-less SSDs are able to achieve lower, more attractive price points. Since NVMe 1.2, host memory buffer (HMB) support has allowed DRAM-less SSDs to borrow host system memory as the SSD’s DRAM buffer for better performance. DRAM-less SSDs that take advantage of HMB support can achieve performance similar to that of DRAM-based SSDs, while simultaneously saving cost, space and energy. 

    NVMe SSDs are also more power-efficient than ever. While the NVMe protocol itself is already efficient, the PCIe link it runs over can consume significant levels of idle power. Newer NVMe SSDs support highly efficient, autonomous sleep state transitions, which allow them to achieve energy consumption on par or lower than SATA SSDs. 

    All this means that NVMe is more viable than ever for a variety of use cases - from large data centers, which can save on capital expenditures through lower cost SSDs and on operating expenditures through lower power consumption, to power-sensitive mobile/portable applications such as laptops, tablets and smartphones, which can now consider using NVMe. 

    Addressing the Need for Speed 

    While the need for speed is well recognized in enterprise applications, is the speed offered by NVMe actually needed in the consumer world? For anyone who has ever installed more memory, bought a larger hard drive (or SSD), or ordered a faster Internet connection, the answer is obvious. 

    Many of today’s consumer use cases do not yet test the limits of SATA drives - partly because SATA is still the most common interface for consumer storage, so applications are built around its constraints. However, demanding workloads such as video recording and editing, gaming and file serving are already pushing the limits of consumer SSDs, and tomorrow’s use cases are only destined to push them further. With NVMe now achieving price points that are comparable with SATA, there is no reason not to build future-proof storage today.

  • October 11, 2017

    Bringing IoT intelligence to the enterprise edge by supporting Google Cloud IoT Core Public Beta on ESPRESSObin and MACCHIATObin community platforms

    By Aviad Enav Zagha, Sr. Director Embedded Processors Product Line Manager, Networking Group, Marvell

    Though the projections made by market analysts still differ to a considerable degree, there is little doubt about the huge future potential that implementation of Internet of Things (IoT) technology has within an enterprise context. It is destined to lead to billions of connected devices being in operation, all sending captured data back to the cloud, from which analysis can be undertaken or actions initiated. This will make existing business/industrial/metrology processes more streamlined and allow a variety of new services to be delivered. 

    With large numbers of IoT devices to deal with in any given enterprise network, the challenges of efficiently and economically managing them all without any latency issues, and ensuring that elevated levels of security are upheld, are going to prove daunting. In order to put the least possible strain on cloud-based resources, we believe the best approach is to divest some intelligence outside the core and place it at the enterprise edge, rather than following a purely centralized model. This arrangement places computing functionality much nearer to where the data is being acquired and makes a response to it considerably easier. IoT devices will then have a local edge hub that can reduce the overhead of real-time communication over the network. Rather than relying on cloud servers far away from the connected devices to take care of the ‘heavy lifting’, these activities can be done closer to home. Deterministic operation is maintained due to lower latency, bandwidth is conserved (thus saving money), and the likelihood of data corruption or security breaches is dramatically reduced. 

    Sensors and data collectors in the enterprise, industrial and smart city segments are expected to generate more than 1 GB of information per day, some of it needing a response within a matter of seconds. Therefore, in order for the network to accommodate this large amount of data, computing functionality will migrate from the cloud to the network edge, forming a new market of edge computing. 

    In order to accelerate the widespread propagation of IoT technology within the enterprise environment, Marvell now supports the multifaceted Google Cloud IoT Core platform. Cloud IoT Core is a fully managed service mechanism through which the management and secure connection of devices can be accomplished on the large scales that will be characteristic of most IoT deployments. 

    Through its IoT enterprise edge gateway technology, Marvell is able to provide the networking and compute capabilities required (as well as the prospect of localized storage) to act as mediator between the connected devices within the network and the related cloud functions. By providing the control element needed, as well as collecting real-time data from IoT devices, the IoT enterprise gateway technology serves as a key consolidation point for interfacing with the cloud and also has the ability to temporarily control managed devices if an event occurs that makes cloud services unavailable. In addition, the IoT enterprise gateway can perform the role of a proxy manager for lightweight, rudimentary IoT devices that (in order to keep power consumption and unit cost down) may not possess any intelligence. 

    Through the introduction of advanced ARM®-based community platforms, Marvell is able to facilitate enterprise implementations using Cloud IoT Core. The recently announced Marvell MACCHIATObin™ and Marvell ESPRESSObin™ community boards support open source applications, local storage and networking facilities. At the heart of each of these boards is Marvell’s high performance ARMADA® system-on-chip (SoC) that supports Google Cloud IoT Core Public Beta. 

    Via Cloud IoT Core, along with other related Google Cloud services (including Pub/Sub, Dataflow, Bigtable, BigQuery, Data Studio), enterprises can benefit from an all-encompassing IoT solution that addresses the collection, processing, evaluation and visualization of real-time data in a highly efficient manner. Cloud IoT Core features certificate-based authentication and transport layer security (TLS), plus an array of sophisticated analytical functions. 

    Over time, the enterprise edge is going to become more intelligent. Consequently, mediation between IoT devices and the cloud will be needed, as will cost-effective processing and management. With the combination of Marvell’s proprietary IoT gateway technology and Google Cloud IoT Core, it is now possible to migrate a portion of network intelligence to the enterprise edge, leading to various major operational advantages. 

    Please visit MACCHIATObin Wiki and ESPRESSObin Wiki for instructions on how to connect to Google’s Cloud IoT Core Public Beta platform.

  • October 10, 2017

    Celebrating 20 Years of Wi-Fi - Part II

    By Prabhu Loganathan, Senior Director of Marketing for Connectivity Business Unit, Marvell

    This is the second instalment in a series of blogs covering the history of Wi-Fi®. While the first part looked at the origins of Wi-Fi, this part will look at how the technology has progressed to the high speed connection we know today. 

    Wireless Revolution 

    By the early years of the new millennium, Wi-Fi had quickly started to gain widespread popularity, as the benefits of wireless connectivity became clear. Hotspots began popping up at coffee shops, airports and hotels as businesses and consumers started to realize the potential for Wi-Fi to enable early forms of what we now know as mobile computing. Home users, many of whom were starting to get broadband Internet, were able to easily share their connections throughout the house. 

    Thanks to the IEEE® 802.11 working group's efforts, a proprietary wireless protocol that was originally designed simply for connecting cash registers (see previous blog) had become the basis for a wireless networking standard that was changing the whole fabric of society. 

    Improving Speeds 

    The advent of 802.11b, in 1999, set the stage for Wi-Fi mass adoption. Its cheaper price point made it accessible for consumers, and its 11 Mbit/s speeds made it fast enough to replace wired Ethernet connections for enterprise users. Driven by the broadband Internet explosion of the early 2000s, 802.11b became a great success. Both consumers and businesses found wireless was a great way to easily share the newfound high speed connections that DSL, cable and other broadband technologies gave them. 

    As broadband speeds became the norm, consumers' computer usage habits changed accordingly. Higher bandwidth applications such as music/movie sharing and streaming audio started to see increasing popularity within the consumer space. 

    Meanwhile, in the enterprise market, wireless had even greater speed demands to contend with, as it was competing with fast local networking over Ethernet. Business use cases (such as VoIP, file sharing and printer sharing, as well as desktop virtualization) needed to work seamlessly if wireless was to be adopted. 

    Even in the early 2000s, the speed that 802.11b could support was far from cutting edge. On the wired side of things, 10/100 Ethernet was already a widespread standard. At 100 Mbit/s, it was almost 10 times faster than 802.11b's nominal 11 Mbit/s speed. 802.11b's protocol overhead meant that, in fact, the maximum theoretical throughput was 5.9 Mbit/s. In practice, as 802.11b used the increasingly popular 2.4 GHz band, speeds proved lower still. Interference from microwave ovens, cordless phones and other consumer electronics meant that real world speeds often didn't reach the 5.9 Mbit/s mark (sometimes not even close). 

    802.11g

    To address speed concerns, in 2003 the IEEE 802.11 working group came out with 802.11g. Though 802.11g would use the 2.4 GHz frequency band just like 802.11b, it was able to achieve speeds of up to 54 Mbit/s. Even after speed decreases due to protocol overhead, its theoretical maximum of 31.4 Mbit/s was enough bandwidth to accommodate increasingly fast household broadband speeds. 

    Actually, 802.11g was not the first 802.11 wireless standard to achieve 54 Mbit/s. That crown goes to 802.11a, which had done it back in 1999. However, 802.11a used the separate 5 GHz frequency band to achieve its fast speeds. While 5 GHz had the benefit of less radio interference from consumer electronics, it also meant incompatibility with 802.11b. That fact, along with more expensive equipment, meant that 802.11a was only ever popular within the business market segment and never saw proliferation into the higher volume domestic/consumer arena. 

    By using 2.4 GHz to reach 54 Mbit/s, 802.11g was able to achieve high speeds while retaining full backwards compatibility with 802.11b. This was crucial, as 802.11b had already established itself as the main wireless standard for consumer devices by this point. Its backwards compatibility, along with cheaper hardware compared to 802.11a, were big selling points, and 802.11g soon became the new, faster wireless standard for consumer and, increasingly, even business related applications. 

    802.11n 

    Introduced in 2009, 802.11n made further speed improvements upon 802.11g and 802.11a. Operating on either the 2.4 GHz or 5 GHz frequency band (though not simultaneously), 802.11n improved transfer efficiency through frame aggregation, and also introduced optional MIMO and 40 MHz channels - double the channel width of 802.11g. 

    802.11n offered significantly faster network speeds. At the low end, if it was operating in the same type of single antenna, 20 MHz channel width configuration as an 802.11g network, the 802.11n network could achieve 72 Mbit/s. If, in addition, the double width 40 MHz channel was used, with multiple antennas, then data rates could be much faster - up to 600 Mbit/s (for a four antenna configuration). 
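    Those two endpoints follow directly from how 802.11n peak rates scale with channel width and the number of MIMO spatial streams. A minimal sketch, using the standard 802.11n per-stream peak rates (short guard interval, highest modulation and coding scheme):

```python
# How 802.11n peak PHY rates scale with channel width and MIMO streams.
# Per-stream figures are the standard 802.11n short-guard-interval values.

PER_STREAM_MBPS = {20: 72.2, 40: 150.0}  # channel width (MHz) -> Mbit/s per stream

def peak_rate(channel_mhz, streams):
    """Peak PHY rate for a given channel width and spatial stream count."""
    return PER_STREAM_MBPS[channel_mhz] * streams

print(peak_rate(20, 1))  # single antenna, 20 MHz: ~72 Mbit/s
print(peak_rate(40, 4))  # four streams, 40 MHz: 600 Mbit/s
```

    Real-world throughput is of course lower once protocol overhead and interference are accounted for, as with the earlier standards.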

    The third and final blog in this series will take us right up to the modern day and will also look at the potential of Wi-Fi in the future.  

  • October 03, 2017

    Celebrating 20 Years of Wi-Fi - Part I

    By Prabhu Loganathan, Senior Director of Marketing for Connectivity Business Unit, Marvell

    You can't see it, touch it, or hear it - yet Wi-Fi® has had a tremendous impact on the modern world - and will continue to do so. From our home wireless networks, to offices and public spaces, the ubiquity of high speed connectivity without reliance on cables has radically changed the way computing happens. It would not be much of an exaggeration to say that because of ready access to Wi-Fi, we are able to lead better lives - using our laptops, tablets and portable electronics in a far more straightforward manner, with a high degree of mobility, no longer having to worry about a complex tangle of wires tying us down.

    Though it may be hard to believe, it is now two decades since the original 802.11 standard was ratified by the IEEE®. This first in a series of blogs will look at the history of Wi-Fi to see how it has overcome numerous technical challenges and evolved into the ultra-fast, highly convenient wireless standard that we know today. We will then go on to discuss what it may look like tomorrow. 

    Unlicensed Beginnings 

    While we now think of 802.11 wireless technology as predominantly connecting our personal computing devices and smartphones to the Internet, it was in fact initially invented as a means to connect up humble cash registers. In the late 1980s, NCR Corporation, a maker of retail hardware and point-of-sale (PoS) computer systems, had a big problem. Its customers - department stores and supermarkets - didn't want to dig up their floors each time they changed their store layout. 

    A recent ruling that had been made by the FCC, which opened up certain frequency bands as free to use, inspired what would be a game-changing idea. By using wireless connections in the unlicensed spectrum (rather than conventional wireline connections), electronic cash registers and PoS systems could be easily moved around a store without the retailer having to perform major renovation work. 

    Soon after this, NCR allocated the project to an engineering team out of its Netherlands office. They were set the challenge of creating a wireless communication protocol. These engineers succeeded in developing ‘WaveLAN’, which would be recognized as the precursor to Wi-Fi. Rather than preserving this as a purely proprietary protocol, NCR could see that by establishing it as a standard, the company would be able to position itself as a leader in the wireless connectivity market as it emerged. By 1990, the IEEE 802.11 working group had been formed, based on wireless communication in unlicensed spectra. 

    Using what were at the time innovative spread spectrum techniques to reduce interference and improve signal integrity in noisy environments, the original incarnation of Wi-Fi was finally formally standardized in 1997. It operated with a throughput of just 2 Mbits/s, but it set the foundations of what was to come. 

    Wireless Ethernet 

    Though the 802.11 wireless standard was released in 1997, it didn't take off immediately. Slow speeds and expensive hardware hampered its mass market appeal for quite a while - but things were destined to change. 10 Mbit/s Ethernet was the networking standard of the day. The IEEE 802.11 working group knew that if they could equal that, they would have a worthy wireless competitor. In 1999, they succeeded, creating 802.11b. This used the same 2.4 GHz ISM frequency band as the original 802.11 wireless standard, but it raised the throughput supported considerably, reaching 11 Mbits/s. Wireless Ethernet was finally a reality. 

    Soon after 802.11b was established, the IEEE working group also released 802.11a, an even faster standard. Rather than using the increasingly crowded 2.4 GHz band, it ran on the 5 GHz band and offered speeds up to a lofty 54 Mbits/s. 

    Because it occupied the 5 GHz frequency band, away from the popular (and thus congested) 2.4 GHz band, it had better performance in noisy environments; however, the higher carrier frequency also meant it had reduced range compared to 2.4 GHz wireless connectivity. Thanks to cheaper equipment and better nominal ranges, 802.11b proved to be the most popular wireless standard by far. But, while it was more cost effective than 802.11a, 802.11b still wasn't at a low enough price bracket for the average consumer. Routers and network adapters would still cost hundreds of dollars. 

    That all changed following a phone call from Steve Jobs. Apple was launching a new line of computers at that time and wanted to make wireless networking functionality part of it. The terms set were tough - Apple expected to have the cards at a $99 price point, but of course the volumes involved could potentially be huge. Lucent Technologies, which had acquired NCR by this stage, agreed. 

    While it was a difficult pill to swallow initially, the Apple deal finally put Wi-Fi in the hands of consumers and pushed it into the mainstream. PC makers saw Apple computers beating them to the punch and wanted wireless networking as well. Soon, key PC hardware makers including Dell, Toshiba, HP and IBM were all offering Wi-Fi. 

    Microsoft also got on the Wi-Fi bandwagon with Windows XP. Working with engineers from Lucent, Microsoft made Wi-Fi connectivity native to the operating system. Users could get wirelessly connected without having to install third party drivers or software. With the release of Windows XP, Wi-Fi was now natively supported on millions of computers worldwide - it had officially made it into the ‘big time’. 

    This blog post is the first in a series that charts the eventful history of Wi-Fi. The second part, which is coming soon, will bring things up to date and look at current Wi-Fi implementations.  

  • September 18, 2017

    Modular Networks Drive Cost Efficiencies in Data Center Upgrades

    By Yaron Zimmerman, Senior Staff Product Line Manager, Marvell

    Exponential growth in data center usage has been responsible for driving a huge amount of investment in the networking infrastructure used to connect virtualized servers to the multiple services they now need to accommodate. To support the server-to-server traffic that virtualized data centers require, the networking spine will generally rely on high capacity 40 Gbit/s and 100 Gbit/s switch fabrics with aggregate throughputs now hitting 12.8 Tbit/s. But the ‘one size fits all’ approach being employed to develop these switch fabrics quickly leads to a costly misalignment for data center owners. They need to find ways to match the interfaces on individual storage units and server blades that have already been installed with the switches they are buying to support their scale-out plans. 

    The top-of-rack (ToR) switch provides one way to match the demands of the server equipment and the network infrastructure. The switch can aggregate the data from lower speed network interfaces and so act as a front-end to the core network fabric. But such switches tend to be far more complex than is actually needed - often derived from older generations of core switch fabric. They perform a level of switching that is unnecessary and, as a result, are not cost effective when they are primarily aggregating traffic on its way to the core network’s 12.8 Tbit/s switching engines. The heightened expense manifests itself not only in terms of hardware complexity and the issues of managing an extra network tier, but also in relation to power and air-conditioning. It is not unusual to find five or more fans inside each unit being used to cool the silicon switch. There is another way to support the requirements of data center operators - one that consumes far less power, costs less, and offers greater modularity and flexibility. 

    Providing a means by which to overcome the high power and cost associated with traditional ToR switch designs, the IEEE 802.1BR standard for port extenders makes it possible to implement a bridge between a core network interface and a number of port extenders that break out connections to individual edge devices. An attractive feature of this standard is the ability to allow port extenders to be cascaded, for even greater levels of modularity. As a result, many lower speed ports, of 1 Gbit/s and 10 Gbits/s, can be served by one core network port (supporting 40 Gbits/s or 100 Gbits/s operation) through a single controlling bridge device. 

    With a simpler, more modular approach, the passive intelligent port extender (PIPE) architecture that has been developed by Marvell leads to next generation rack units which no longer call for the inclusion of any fans for thermal management purposes. Reference designs have already been built that use a simple 65W open-frame power supply to feed all the devices required, even in a high-capacity configuration of 48 ports at 10 Gbit/s. Furthermore, the equipment dispenses with the need for external management. The management requirements can move to the core 12.8 Tbit/s switch fabric, providing further savings in terms of operational expenditure. It is a demonstration of exactly how a more modular approach can greatly improve the efficiency of today's and tomorrow's data center implementations.

  • August 31, 2017

    Securing Embedded Storage with Hardware Encryption

    By Jeroen Dorgelo, Director of Strategy, Storage Group, Marvell

    For industrial, military and a multitude of modern business applications, data security is of course incredibly important. While software based encryption often works well for consumer and some enterprise environments, in the context of the embedded systems used in industrial and military applications, something simpler and intrinsically more robust is usually needed. 

    Self encrypting drives utilize on-board cryptographic processors to secure data at the drive level. This not only increases drive security automatically, but does so transparently to the user and host operating system. By automatically encrypting data in the background, they thus provide the simple to use, resilient data security that is required by embedded systems. 

    Embedded vs Enterprise Data Security 

    Both embedded and enterprise storage often require strong data security. Depending on the industry sectors involved this is often related to the securing of customer (or possibly patient) privacy, military data or business data. However, that is where the similarities end. Embedded storage is often used in completely different ways from enterprise storage, thereby leading to distinctly different approaches to how data security is addressed. 

    Enterprise storage usually consists of racks of networked disk arrays in a data center, while embedded storage is often simply a solid state drive (SSD) installed into an embedded computer or device. The physical security of the data center can be controlled by the enterprise, and software access control to enterprise networks (or applications) is also usually implemented. Embedded devices, on the other hand - such as tablets, industrial computers, smartphones, or medical devices - are often used in the field, in comparatively unsecured environments. Data security in this context has no choice but to be implemented down at the device level. 

    Hardware Based Full Disk Encryption 

    For embedded applications where access control is far from guaranteed, it is all about securing the data as automatically and transparently as possible. 

    Full disk, hardware based encryption has shown itself to be the best way of achieving this goal. Full disk encryption (FDE) achieves high degrees of both security and transparency by encrypting everything on a drive automatically. Whereas file based encryption requires users to choose files or folders to encrypt, and also calls for them to provide passwords or keys to decrypt them, FDE works completely transparently. All data written to the drive is encrypted, yet, once authenticated, a user can access the drive as easily as an unencrypted one. This not only makes FDE much easier to use, but also means that it is a more reliable method of encryption, as all data is automatically secured. Files that the user forgets to encrypt or doesn’t have access to (such as hidden files, temporary files and swap space) are all nonetheless automatically secured. 
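
The sector-level transparency described above can be sketched in a few lines of Python. This is a toy model only: the hash-based keystream stands in for the AES engine that a real self encrypting drive implements in its controller silicon, and the class name, key and sector size are invented for the example:

```python
import hashlib

SECTOR_SIZE = 16  # bytes; real drives use 512 or 4096

def _keystream(key: bytes, sector: int, n: int) -> bytes:
    # Toy counter-mode keystream; a real SED uses AES (e.g. XTS mode) in silicon.
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + sector.to_bytes(8, "big") +
                              counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

class ToySelfEncryptingDrive:
    """Every write is ciphered and every read deciphered - transparently."""
    def __init__(self, key: bytes):
        self._key = key    # held inside the 'controller', never exported
        self._media = {}   # sector number -> ciphertext

    def write(self, sector: int, data: bytes):
        ks = _keystream(self._key, sector, len(data))
        self._media[sector] = bytes(a ^ b for a, b in zip(data, ks))

    def read(self, sector: int) -> bytes:
        ct = self._media[sector]
        ks = _keystream(self._key, sector, len(ct))
        return bytes(a ^ b for a, b in zip(ct, ks))

drive = ToySelfEncryptingDrive(key=b"device-unique-secret")
drive.write(0, b"swap or temp data")   # secured even if the user 'forgets'
assert drive.read(0) == b"swap or temp data"
assert drive._media[0] != b"swap or temp data"   # at rest: only ciphertext
```

The host simply calls read and write; nothing in the I/O path asks the user which data to protect, which is exactly the transparency argument made above.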

    While FDE can be achieved through software techniques, hardware based FDE performs better, and is inherently more secure. Hardware based FDE is implemented at the drive level, in the form of a self encrypting SSD. The SSD controller contains a hardware cryptographic engine, and also stores private keys on the drive itself. 

    Because software based FDE relies on the host processor to perform encryption, it is usually slower - whereas hardware based FDE has much lower overhead as it can take advantage of the drive’s integrated crypto-processor. Hardware based FDE is also able to encrypt the master boot record of the drive, something software based encryption is unable to do. 

    Hardware centric FDEs are transparent to not only the user, but also the host operating system. They work transparently in the background and no special software is needed to run them. Besides helping to maximize ease of use, this also means sensitive encryption keys are kept separate from the host operating system and memory, as all private keys are stored on the drive itself. 

    Improving Data Security 

    Besides providing the transparent, easy to use encryption that is now being sought, hardware based FDE also has specific benefits for data security in modern SSDs. NAND cells have a finite service life and modern SSDs use advanced wear leveling algorithms to extend this as much as possible. Instead of overwriting the NAND cells as data is updated, write operations are constantly moved around a drive, often resulting in multiple copies of a piece of data being spread across an SSD as a file is updated. This wear leveling technique is extremely effective, but it makes file based encryption and data erasure much more difficult to accomplish, as there are now multiple copies of data to encrypt or erase. 

    FDE solves both these encryption and erasure issues for SSDs. Since all data is encrypted, there are not any concerns about the presence of unencrypted data remnants. In addition, since the encryption method used (which is generally 256-bit AES) is extremely secure, erasing the drive is as simple to do as erasing the private keys. 
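
The crypto-erase property follows directly: even when wear leveling has left several ciphertext copies of the same data scattered across the NAND, destroying the single key sanitizes all of them at once. A toy sketch (the XOR keystream is purely illustrative; a real drive uses AES with keys held inside the controller):

```python
import hashlib, secrets

def keystream(key: bytes, sector: int, n: int) -> bytes:
    # Illustrative hash-based keystream standing in for the drive's AES engine.
    out, i = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + bytes([sector]) + i.to_bytes(4, "big")).digest()
        i += 1
    return out[:n]

def xor(data: bytes, ks: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, ks))

key = secrets.token_bytes(32)
record = b"patient record #1"
# Wear leveling has left stale copies of the same data in three places:
media = {s: xor(record, keystream(key, s, len(record))) for s in (7, 19, 42)}

# Crypto-erase: destroy the key. No per-copy overwrite pass is needed.
key = None
# Every remaining copy is ciphertext; nothing on the media equals the plaintext.
assert all(ct != record for ct in media.values())
```

Tracking down and overwriting every stale copy is the hard problem wear leveling creates; discarding the key sidesteps it entirely.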

    Solving Embedded Data Security 

    Embedded devices often present considerable security challenges to IT departments, as these devices are often used in uncontrolled environments, possibly by unauthorized personnel. Whereas enterprise IT has the authority to implement enterprise wide data security policies and access control, it is usually much harder to implement these techniques for embedded devices situated in industrial environments or used out in the field. 

    The simple solution for data security in embedded applications of this kind is hardware based FDE. Self encrypting drives with hardware crypto-processors have minimal processing overhead and operate completely in the background, transparent to both users and host operating systems. Their ease of use also translates into improved security, as administrators do not need to rely on users to implement security policies, and private keys are never exposed to software or operating systems.

  • August 02, 2017

    Wireless Technology Set to Enable an Automotive Revolution

    By Avinash Ghirnikar, Director of Technical Marketing of Connectivity Business Group, Marvell

    The automotive industry has always been a keen user of wireless technology. In the early 1980s, Renault made it possible to lock and unlock the doors on its Fuego model utilizing a radio transmitter. Within a decade, other vehicle manufacturers embraced the idea of remote key-less entry and not long after that it became a standard feature. Now, wireless technology is about to reshape the world of driving. 

    The first key-less entry systems were based on infra-red (IR) signals, borrowing the technique from automatic garage door openers. But the industry swiftly moved to RF technology, in order to make it easier to use. Although each manufacturer favored its own protocol and coding system, they adopted standard low-power RF frequency bands, such as 315 MHz in the US and 433 MHz in Europe. As concerns about theft emerged, they incorporated encryption and other security features to fend off potential attacks. They have further refreshed this technology as new threats appeared, as well as adding features such as proximity detection to remove the need to even press the key-fob remote's button. 

    The next stage in favor of convenience was to employ Bluetooth instead of custom radios on the sub-1GHz frequency band so as to dispense with the keyfob altogether. With Bluetooth, an app on the user's smartphone can not only unlock the car doors, but also handle tasks such as starting the heater or air-conditioning to make the vehicle comfortable and ready for when the driver and passengers actually get in. 

    Bluetooth itself has become a key feature on many models over the past decade as automobile manufacturers have looked to open up their infotainment systems. Access to the functions on the dashboard through Bluetooth has made it possible for vehicle occupants to hook up their phone handsets easily. Initially, it was to support legal, hands-free phone calls without forcing the owner to buy and install a permanent phone in the vehicle itself. But the wireless connection is just as good at relaying high-quality audio so that the passengers can listen to their favorite music (stored on portable devices). We have clearly moved a long way from the CD auto-changer located in the trunk. Bluetooth is a prime example of the way in which RF technology, once in place, can support many different applications - with plenty of potential for use cases that have not yet been considered. Through use of a suitable relay device in the car, Bluetooth also provides the means by which to send vehicle diagnostics information to relevant smartphone apps. The use of the technology as a diagnostics gateway points to an emerging use for Bluetooth in improving the overall safety of car transportation. 

    But now Wi-Fi is also primed to become as ubiquitous in vehicles as Bluetooth. Wi-Fi is able to provide a more robust data pipe, thus enabling even richer applications and a tighter integration with smartphone handsets. One use case that seems destined to change the cockpit experience for users is the emergence of screen projection technologies. Through the introduction of such mechanisms it will be possible to create a seamless transition for drivers from their smartphones to their cars. This will not necessarily even need to be their own car, it could be any car that they may rent from anywhere in the world. 

    One of the key enabling technologies for self-driving vehicles is communication. This can encompass vehicle-to-vehicle (V2V) links, vehicle-to-infrastructure (V2I) messages and, through technologies such as Bluetooth and Wi-Fi, vehicle-to-anything (V2X). 

    V2V provides the ability for vehicles on the road to signal their intentions to others and warn of hazards ahead. If a pothole opens up or cars have to brake suddenly to avoid an obstacle, they can send out wireless messages to nearby vehicles to let them know about the situation. Those other vehicles can then slow down or change lanes accordingly. 

    The key enabling technology for V2V is a form of the IEEE 802.11 Wi-Fi protocol, re-engineered for much lower latency and better reliability. IEEE 802.11p Wireless Access in Vehicular Environments (WAVE) operates in the 5.9 GHz region of the RF spectrum, and is capable of supporting data rates of up to 27 Mbit/s. One of the key additions for transportation is a scheduling feature that lets vehicles share access to wireless channels based on time. Each vehicle uses the Coordinated Universal Time (UTC) reading, usually provided by the GPS receiver, to help ensure all nearby transceivers are synchronized to the same schedule. 
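
As a rough illustration of that time-based sharing: the companion IEEE 1609.4 multi-channel scheme divides UTC time into repeating sync intervals, with the first portion reserved for the control channel (CCH) and the remainder for a service channel (SCH). The sketch below uses the commonly cited 100 ms / 50 ms figures, shown here purely for illustration - any radio with a GPS-derived UTC reading lands on the same channel at the same moment:

```python
SYNC_INTERVAL_MS = 100   # one CCH slot + one SCH slot per interval
CCH_SLOT_MS = 50         # first half of every sync interval

def channel_at(utc_ms: int) -> str:
    """Which channel a 1609.4-style radio tunes to at a given UTC millisecond."""
    offset = utc_ms % SYNC_INTERVAL_MS
    return "CCH" if offset < CCH_SLOT_MS else "SCH"

# Two vehicles with the same UTC reading agree on the schedule:
assert channel_at(1_000_020) == "CCH"   # 20 ms into the interval
assert channel_at(1_000_070) == "SCH"   # 70 ms into the interval
```

Because the schedule is a pure function of UTC, no handshake is needed for nearby transceivers to rendezvous on the control channel.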

    A key challenge for any transceiver is the Doppler Effect. On a freeway, the relative velocity of an approaching transmitter can exceed 150 mph. Such a transmitter may be in range for only a few seconds at most, making ultra-low latency crucial. But, with the underlying RF technology for V2V in place, advanced navigation applications can be deployed relatively easily and extended to deal with many other objects and even people. 
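
To put a number on that Doppler challenge: the shift is approximately (v/c) × f_carrier, so a 150 mph closing speed at the 5.9 GHz carrier moves the signal by a little over a kilohertz, which the receiver's tracking loops must absorb. A quick back-of-the-envelope check:

```python
def doppler_shift_hz(relative_speed_mph: float, carrier_hz: float) -> float:
    """Approximate Doppler shift for a transmitter closing at the given speed."""
    c = 299_792_458.0                 # speed of light, m/s
    v = relative_speed_mph * 0.44704  # mph -> m/s
    return (v / c) * carrier_hz

shift = doppler_shift_hz(150, 5.9e9)
print(f"{shift:.0f} Hz")   # roughly 1.3 kHz
```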

    V2I transactions make it possible for roadside controllers to update vehicles on their status. Traffic signals, for example, can let vehicles know when they are likely to change state. Vehicles leaving the junction can relay that data to approaching cars, which may slow down in response. By slowing down, they avoid the need to stop at a red signal - and thereby cross just as it is turning to green. The overall effect is a significant saving in fuel, as well as less wear and tear on the brakes. In the future, such wireless-enabled signals will make it possible to improve the flow of autonomous vehicles considerably. The traffic signals will monitor the junction to check whether conditions are safe and usher the autonomous vehicle through to the other side, while other road users without the same level of computer control are held at a stop. 
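
The arrive-on-green maneuver reduces to simple arithmetic once the signal broadcasts its time until green. The advisory function below is a hypothetical sketch (the function name, default speed limit and practicality threshold are invented for the example): given the distance to the junction and the seconds remaining, it returns a speed that lets the car roll through without stopping, provided that speed is sensible:

```python
def glosa_speed_kmh(distance_m: float, seconds_to_green: float,
                    speed_limit_kmh: float = 50.0,
                    min_practical_kmh: float = 20.0):
    """Speed advisory to arrive just as the light turns green.

    Returns an advised speed in km/h, or None if no sensible speed exists
    (in which case the driver simply brakes to a stop as usual).
    """
    if seconds_to_green <= 0:
        return speed_limit_kmh   # already green: proceed normally
    v = (distance_m / seconds_to_green) * 3.6   # m/s -> km/h
    if min_practical_kmh <= v <= speed_limit_kmh:
        return round(v, 1)
    return None

# 300 m from the junction, green in 30 s: ease off to 36 km/h and never stop.
assert glosa_speed_kmh(300, 30) == 36.0
# Green in 5 s from 300 m out would need 216 km/h - no advisory is given:
assert glosa_speed_kmh(300, 5) is None
```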

    Although many V2X applications were conceived for use with a dedicated RF protocol, such as WAVE for example, there is a place for Bluetooth and, potentially, other wireless standards like conventional Wi-Fi. Pedestrians and cyclists may signal their presence on the road with the help of their own Bluetooth devices. The messages picked up by passing vehicles can be relayed using V2V communications over WAVE to extend the range of the warnings. Roadside beacons using Bluetooth technology can pass on information about local points of interest - and this can be provided to passengers who can subsequently look up more details on the Internet using the vehicle's built-in Wi-Fi hotspot. 

    One thing seems clear: the world of automotive design will be a heterogeneous RF environment that takes traditional Wi-Fi technology and brings it together with WAVE, Bluetooth and GPS. It clearly makes sense to incorporate the right set of radios onto a single chipset, thereby easing the integration process and ensuring optimal performance is achieved. This will not only be beneficial in terms of the design of new vehicles, but will also facilitate the introduction of aftermarket V2X modules. In this way, existing cars will be able to participate in the emerging information-rich superhighway.

  • August 02, 2017

    Connectivity Will Drive the Cars of the Future

    By Avinash Ghirnikar, Director of Technical Marketing of Connectivity Business Group, Marvell

    The growth of electronics content inside the automobile has already had a dramatic effect on the way in which vehicle models are designed and built. As a direct consequence of this, the biggest technical change is now beginning to happen – one that overturns the traditional relationship between the car manufacturer and the car owner. 

    With many subsystems now controlled by microprocessors running software, it is now possible to alter the behavior of the vehicle and introduce completely new features and functionality merely by updating that software. Tesla, with its high profile brand of high performance electric vehicles, has been one of the companies pioneering this approach, releasing software and firmware updates that give existing models the ability to drive themselves. Instead of buying a car with a specific, fixed set of features, owners see their vehicles upgraded via firmware over the air (FOTA) without the need to visit a dealership.

    With so many electronic subsystems now in the vehicle, high data rates are essential. Without the ability to download and program devices quickly, the car could potentially become unusable for hours at a time. On the wireless side, this requires 802.11ac Wi-Fi speeds, and very soon 802.11ax speeds that can potentially exceed gigabit-per-second data rates. 
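
The difference that link speed makes to an update window is easy to quantify. The sketch below assumes an illustrative 1 GB firmware image and a rough 60% efficiency factor for protocol overhead; both figures are assumptions for the example, not measurements:

```python
def transfer_minutes(image_bytes: float, link_mbps: float,
                     efficiency: float = 0.6) -> float:
    """Minutes to move an image over a link with the given nominal rate.

    `efficiency` discounts protocol overhead and contention (illustrative).
    """
    seconds = (image_bytes * 8) / (link_mbps * 1e6 * efficiency)
    return seconds / 60

GB = 1e9
print(f"100 Mbit/s link  : {transfer_minutes(1 * GB, 100):.1f} min")
print(f"1 Gbit/s 802.11ax: {transfer_minutes(1 * GB, 1000):.1f} min")
```

A tenfold rate increase shrinks the window proportionally - the difference between a car that is briefly busy and one that is out of action.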

    Automotive Ethernet that can support Gigabit speeds is also now being fitted so that updates can be delivered as fast as possible to the many electronic control units (ECUs) around the car. The same Ethernet backbone is proving just as essential for day-to-day use. The network provides high resolution, real-time data from cameras, LiDAR, radar, tire pressure monitors and various other sensors fitted around the body, each of which is likely to have their own dedicated microprocessor. The result is a high performance computer based on distributed intelligence. And this, in turn, can tap into the distributed intelligence now being deployed in the cloud. 

    The beauty of distributed intelligence is that it is an architecture that can support applications that in many cases have not even been thought of yet. The same wireless communication networks that provide the over-the-air updates can relay real-time information on traffic patterns in the vicinity, weather data, disruptions due to accidents and many other pieces of data that the onboard computers can then use to plan the journey and make it safer. This rapid shift towards high speed intra- and inter-vehicle connectivity, and the vehicle-to-anything (V2X) communication capabilities that have resulted, will enable applications that would have been considered pure fantasy just a few years ago. 

    The V2X connectivity can stop traffic lights from being an apparent obstacle and turn them into devices that provide the vehicle with hints to save fuel. If the lights send out signals on their stop-go cycle, approaching vehicles can use them to determine whether it is better to decelerate and arrive just in time for them to turn green instead of braking all the way to a stop. Sensors at the junction can also warn of hazards that the car then flags up to the driver. When the vehicle is able to run autonomously, it can take care of such actions itself. Similarly, cars can report to each other when they are planning to change lanes in order to leave the freeway, or when they see a slow-moving vehicle ahead and need to decelerate. The result is considerably smoother braking patterns that avoid the logjam effect we so often see on today's crowded roads. The enablement of such applications will require multiple radios in the vehicle, which will need to work cooperatively in a fail-safe manner. 

    Such connectivity will also give OEMs unprecedented access to real-time diagnostic data, which a car could be uploading opportunistically to the cloud for analysis purposes. This will provide information that could lead to customized maintenance services that could be planned in advance, thereby cutting down diagnostic time at the workshop and meaning that technical problems are preemptively dealt with, rather than waiting for them to become more serious over time. 

    There is no need for automobile manufacturers to build any of these features into their vehicle models today. As many computations can be offloaded to servers in the cloud, the key to unlocking advanced functionality is not wholly dependent on what is present in the car itself. The fundamental requirement is access to an effective means of communications, and that is available right now through high speed Ethernet within the vehicle plus Wi-Fi and V2X-compatible wireless for transfers going beyond the chassis. Both can be supplied so that they are compliant with the AEC-Q100 automotive standard - thus ensuring quality and reliability. With those tools in place, we don't need to see all the way ahead to the future. We just know we have the capability to get there.

  • July 17, 2017

    Rightsizing Ethernet

    By George Hervey, Principal Architect, Marvell

    Implementation of cloud infrastructure is occurring at a phenomenal rate, outpacing Moore's Law. Annual growth is believed to be 30x, and as much as 100x in some cases. In order to keep up, cloud data centers are having to scale out massively, with hundreds, or even thousands of servers becoming a common sight. 

    At this scale, networking becomes a serious challenge. More and more switches are required, thereby increasing capital costs, as well as management complexity. To tackle the rising expense issues, network disaggregation has become an increasingly popular approach. By separating the switch hardware from the software that runs on it, vendor lock-in is reduced or even eliminated. OEM hardware could be used with software developed in-house, or from third party vendors, so that cost savings can be realized. 

    Though network disaggregation has tackled the immediate problem of hefty capital expenditures, it must be recognized that operating expenditures are still high. The number of managed switches basically stays the same. To reduce operating costs, the issue of network complexity also has to be tackled. 

    Network Disaggregation 

    Almost every application we use today, whether at home or in the work environment, connects to the cloud in some way. Our email providers, mobile apps, company websites, virtualized desktops and servers, all run on servers in the cloud. 

    For these cloud service providers, this incredible growth has been both a blessing and a challenge. As demand increases, Moore's law has struggled to keep up. Scaling data centers today involves scaling out - buying more compute and storage capacity, and subsequently investing in the networking to connect it all. The cost and complexity of managing everything can quickly add up. 

    Until recently, networking hardware and software had often been tied together. Buying a switch, router or firewall from one vendor would require you to run their software on it as well. Larger cloud service providers saw an opportunity. These players often had no shortage of skilled software engineers. At the massive scales they ran at, they found that buying commodity networking hardware and then running their own software on it would save them a great deal in terms of Capex. 

    This disaggregation of the software from the hardware may have been financially attractive, however it did nothing to address the complexity of the network infrastructure. There was still a great deal of room to optimize further. 

    802.1BR 

    Today's cloud data centers rely on a layered architecture, often in a fat-tree or leaf-spine structural arrangement. Rows of racks, each with top-of-rack (ToR) switches, are then connected to upstream switches on the network spine. The ToR switches are, in fact, performing simple aggregation of network traffic. Using relatively complex, energy consuming switches for this task results in a significant capital expense, as well as management costs and no shortage of headaches. 

    Through the port extension approach, outlined within the IEEE 802.1BR standard, the aim has been to streamline this architecture. By replacing ToR switches with port extenders, port connectivity is extended directly from the rack to the upstream. Management is consolidated to the fewer number of switches which are located at the upper layer network spine, eliminating the dozens or possibly hundreds of switches at the rack level. 

    The reduction in switch management complexity of the port extender approach has been widely recognized, and various network switches on the market now comply with the 802.1BR standard. However, not all the benefits of this standard have actually been realized. 

    The Next Step in Network Disaggregation 

    Though many of the port extenders on the market today fulfill 802.1BR functionality, they do so using legacy components. Instead of being optimized for 802.1BR itself, they rely on traditional switches. This, as a consequence, limits the potential cost and power benefits that the new architecture offers. 

    Designed from the ground up for 802.1BR, Marvell's Passive Intelligent Port Extender (PIPE) offering is specifically optimized for this architecture. PIPE is interoperable with 802.1BR compliant upstream bridge switches from all the industry’s leading OEMs. It enables fan-less, cost efficient port extenders to be deployed, providing upfront savings as well as ongoing operational savings for cloud data centers. Power consumption is lowered and switch management complexity is reduced by an order of magnitude. 

    The first wave in network disaggregation was separating switch software from the hardware that it ran on. 802.1BR's port extender architecture is bringing about the second wave, where ports are decoupled from the switches which manage them. The modular approach to networking discussed here will result in lower costs, reduced energy consumption and greatly simplified network management.

  • July 07, 2017

    Extending the Lifecycle of 3.2T Switch-Based Architecture

    By Yaron Zimmerman, Senior Staff Product Line Manager, Marvell and Yaniv Kopelman, Networking and Connectivity CTO, Marvell

    The growth of data centers has been completely unprecedented, driven by the exponential increases in cloud computing and cloud storage demand now being witnessed. While Gigabit switches proved more than sufficient just a few years ago, today even 3.2 Terabit (3.2T) switches, which currently serve as the fundamental building blocks upon which data center infrastructure is constructed, are being pushed to their full capacity. 

    While network demands have increased, Moore's law (which effectively defines the semiconductor industry) has not been able to keep up. Instead of scaling at the silicon level, data centers have had to scale out. This has come at a cost though, with ever increasing capital expenditure, operational expenditure and greater latency all resulting. Facing this challenging environment, a different approach is going to have to be taken. In order to accommodate current expectations economically, while still also having the capacity for future growth, data centers (as we will see) need to move towards a modularized approach. 

    Scaling Out the Data Center 

    Data centers are destined to have to contend with demands for substantially heightened network capacity - as a greater number of services, plus more data storage, start migrating to the cloud. This increase in network capacity, in turn, results in demand for more silicon to support it. 

    To meet increasing networking capacity, data centers are buying ever more powerful Top-of-Rack (ToR) leaf switches. In turn these are consuming more power - which impacts on the overall power budget and means that less power is available for the data center servers. Not only does this lead to power being unnecessarily wasted, in addition it will push the associated thermal management costs and the overall Opex upwards. As these data centers scale out to meet demand, they're often having to add more complex hierarchical structures to their architecture as well - thereby increasing latencies for both north-south and east-west traffic in the process. 

    The price of silicon per gate is not going down either. We used to enjoy cost reductions as process sizes decreased from 90 nm, to 65 nm, to 40 nm. That is no longer strictly true, however. As process sizes shrink below 28 nm, yields are decreasing and prices are consequently going up. To address the problems of cloud-scale data centers, traditional methods will not be applicable. Instead, we need to take a modularized approach to networking. 

    PIPEs and Bridges 

    Today's data centers often run on a multi-tiered leaf and spine hierarchy. Racks with ToR switches connect to the network spine switches. These, in turn, connect to core switches, which subsequently connect to the Internet. Both the spine and the top of the rack layer elements contain full, managed switches. 

    By following a modularized approach, it is possible to remove the ToR switches and replace them with simple IO devices - port extenders specifically. This effectively extends the IO ports of the spine switch all the way down to the ToR. What results is a passive ToR that is unmanaged. It simply passes the packets to the spine switch. Furthermore, by taking a whole layer out of the management hierarchy, the network becomes flatter and is thus considerably easier to manage. 

    The spine switch now acts as the controlling bridge. It is able to manage the layer which was previously taken care of by the ToR switch. This means that, through such an arrangement, it is possible to disaggregate the IO ports of the network that were previously located at the ToR switch, from the logic at the spine switch which manages them. This innovative modularized approach is being facilitated by the increasing number of Port Extenders and Control Bridges now being made available from Marvell that are compatible with the IEEE 802.1BR bridge port extension standard. 

    Solving Data Center Scaling Challenges 

    The modularized port-extender and control bridge approach allows data centers to address the full length and breadth of scaling challenges. Port extenders solve the latency problem by flattening the hierarchy. Instead of having conventional ‘leaf’ and ‘spine’ tiers, the port extender acts to simply extend the IO ports of the spine switch to the ToR. Each server in the rack has a near-direct connection to the managing switch. This improves latency for north-south bound traffic. 

    The port extender also functions to aggregate traffic from 10 Gbit Ethernet ports into higher throughput outputs, allowing terabit switches that only have 25, 40 or 100 Gbit Ethernet ports to communicate directly with 10 Gbit Ethernet edge devices. The passive port extender is a greatly simplified device compared to a managed switch. This means lower up-front costs, lower power consumption and a simpler network management scheme. Rather than dealing with both leaf and spine switches, network administration simply needs to focus on the managed switches at the spine layer. 
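
The aggregation arithmetic is straightforward. A minimal sketch, using the 10 Gbit edge ports mentioned above and an assumed set of 100 Gbit uplinks (any real device's port counts would differ):

```python
def oversubscription(edge_ports: int, edge_gbps: float,
                     uplinks: int, uplink_gbps: float) -> float:
    """Ratio of downstream edge bandwidth to upstream uplink capacity."""
    return (edge_ports * edge_gbps) / (uplinks * uplink_gbps)

# 48 x 10G edge ports fed into 4 x 100G toward the controlling bridge:
ratio = oversubscription(48, 10, 4, 100)
print(f"{ratio:.1f}:1 oversubscribed")   # 1.2:1
```

A ratio close to 1:1 means the port extender can pass nearly all edge traffic upstream without the buffering and switching logic a full ToR switch would need.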

    With no end in sight to the ongoing progression of network capacity, cloud-scale data centers will always have ever-increasing scaling challenges to attend to. The modularized approach described here makes those challenges solvable.

  • June 21, 2017

    Making Better Use of Legacy Infrastructure

    By Ron Cates

    The flexibility offered by wireless networking is revolutionizing the enterprise space. High-speed Wi-Fi®, provided by standards such as IEEE 802.11ac and 802.11ax, makes it possible to deliver next-generation services and applications to users in the office, no matter where they are working. However, the higher wireless speeds involved are putting pressure on the cabling infrastructure that supports the Wi-Fi access points around an office environment. 1 Gbit/s Ethernet was more than adequate for older wireless standards and applications. Now, with greater reliance on the new generation of Wi-Fi access points and their higher uplink speeds, the older infrastructure is starting to show strain. At the same time, in the server room itself, demand for high-speed storage and faster virtualized servers is placing pressure on the performance levels offered by the core Ethernet cabling that connects these systems together and to the wider enterprise infrastructure. 

    One option is to upgrade to a 10 Gbit/s Ethernet infrastructure. But this is a migration that can be prohibitively expensive. The Cat 5e cabling that exists in many office and industrial environments is not designed to cope with such elevated speeds. To make use of 10 Gbit/s equipment, that old cabling needs to come out and be replaced by a new copper infrastructure based on Cat 6a standards. Cat 6a cabling can support 10 Gbit/s Ethernet at the full range of 100 meters; you would be lucky to run 10 Gbit/s at half that distance over a Cat 5e cable. In contrast to data-center environments, which are designed to cope easily with both server and networking infrastructure upgrades, enterprise cabling lying in ducts, in ceilings and below floors is hard to reach and swap out. This is especially true if you want to keep the business running while the switchover takes place. 
    Help is at hand with the emergence of the IEEE 802.3bz™ and NBASE-T® set of standards and the transceiver technology that goes with them. 802.3bz and NBASE-T make it possible to transmit at speeds of 2.5 Gbit/s or 5 Gbit/s across conventional Cat 5e or Cat 6 cabling at distances up to the full 100 meters. The transceiver technology leverages advances in digital signal processing (DSP) to make these higher speeds possible without demanding a change in the cabling infrastructure. The NBASE-T technology, a companion to the IEEE 802.3bz standard, incorporates novel features such as downshift, which responds dynamically to interference from other sources in the cable bundle. The result is a lower speed, but downshift has the advantage that it does not cut off communication unexpectedly, providing time to diagnose the problem interferer in the bundle and perhaps reroute it to sit alongside less sensitive cables that may carry lower-speed signals. 

    This is where the new generation of high-density transceivers comes in. There are now transceivers coming onto the market that support data rates all the way from legacy 10 Mbit/s Ethernet up to the full 5 Gbit/s of 802.3bz/NBASE-T - and will auto-negotiate the most appropriate data rate with the downstream device. This makes it easy for enterprise users to upgrade the routers and switches that support their core network without demanding upgrades to all the client devices. Further features, such as Virtual Cable Tester® functionality, make it easier to diagnose faults in the cabling infrastructure without resorting to specialized network instrumentation. Transceivers and PHYs designed for switches can now support eight 802.3bz/NBASE-T ports in one chip, thanks to the integration made possible by leading-edge processes. These transceivers are not only more cost-effective, they also consume far less power and PCB real estate than PHYs that were designed for 10 Gbit/s networks. 
    This means they present a much more optimized solution, with numerous benefits from a financial, thermal and logistical perspective. The result is a networking standard that meshes well with the needs of modern enterprise networks - and lets that network and its equipment evolve at their own pace.
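    The rate selection behaviour described above can be sketched in a few lines. This is a deliberate simplification for illustration; the real mechanisms are defined by IEEE 802.3 auto-negotiation and the NBASE-T downshift feature, not by this toy logic.

```python
# Simplified sketch of multi-rate link behaviour (illustrative only).
RATES_MBPS = [10, 100, 1000, 2500, 5000]  # legacy 10 Mbit/s up to 5 Gbit/s

def negotiate(local: set, peer: set) -> int:
    """Pick the highest data rate that both link partners advertise."""
    common = local & peer
    if not common:
        raise ValueError("no common rate")
    return max(common)

def downshift(current: int, interference: bool) -> int:
    """On persistent interference, step down one rate rather than
    dropping the link entirely - the key point of the feature."""
    if not interference:
        return current
    lower = [r for r in RATES_MBPS if r < current]
    return lower[-1] if lower else current
```

    For example, a switch port advertising every rate, talking to an access point that tops out at 2.5 Gbit/s, settles at 2500; under interference a 5 Gbit/s link steps down to 2.5 Gbit/s instead of going dark.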
  • June 20, 2017

    Autonomous Vehicles and Digital Features Make the Car of the Future a “Data Center on Wheels”

    By Donna

    Advanced digital features, autonomous vehicles and new auto safety legislation are all amongst the many “drivers” escalating the number of chips and technology found in next-generation automobiles. The wireless, connectivity, storage and security technologies needed for the internal and external vehicle communications in cars today and in the future leverage technologies used in a data center. In fact, you could say the automobile is becoming a Data Center on Wheels. Here are some interesting data points supporting the evolution of the Data Center on Wheels:
    • The National Highway Traffic Safety Administration (NHTSA) mandates that by May 2018, all new cars in the U.S. must have backup cameras. The agency reports that half of all new vehicles sold today already have backup cameras, showing widespread acceptance even without the NHTSA mandate.
    • Some luxury brands provide panoramic 360-degree surround views using multiple cameras. NVIDIA, which made its claim to fame in graphics processing chips for computers and video games, is a leading provider in the backup and surround view digital platforms, translating its digital expertise into the hottest of new vehicle trends. At the latest 2017 International CES, NVIDIA showcased its latest NVIDIA PX2, an Artificial Intelligence (AI) Car Computer for Self-Driving Vehicles, which enables automakers and their tier 1 suppliers to accelerate production of automated and autonomous vehicles.
    • According to an Intel presentation at CES reported in Network World, just one autonomous car will use 4,000GB (or 4 Terabytes) of data per day.
    • A January study by Strategy Analytics reported that by 2020, new cars are expected to have approximately 1,000 chips per vehicle.
    Advanced Driver Assist Systems (ADAS), In-Vehicle Infotainment (IVI) and autonomous vehicles will all rely on digital information streamed internally within the vehicle and externally from the vehicle to other vehicles or third-party services via chips, sensors, network and wireless connectivity. All of this data will need to be processed, stored or transmitted seamlessly and securely, because a LoJack® isn’t necessarily going to help with a car hack. This is why auto makers are turning to the high tech and semiconductor industries to support the move to more digitized, automated cars. Semiconductor leaders in wireless, connectivity, storage, and networking are all being tapped to design and manage the Data Center on Wheels. For example, Marvell recently announced the first automotive grade system-on-chip (SoC) that integrates the latest Wi-Fi, Bluetooth, vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) capabilities. Another technology product being offered for automotive use is the InnoDisk SATA 3ME4 Solid-State Drive (SSD) series. Originally designed for industrial systems integration, these storage drives can withstand the varied temperature ranges of a car, as well as shock and vibration under rugged conditions. Both of these products integrate state-of-the-art encryption, not only to store the information needed for data-driven vehicles, but to keep that information secure from unwanted intrusion. Marvell and others are working to form standards and adapt secure digital solutions in wireless, connectivity, networking and storage specifically for the automobile, which is even more paramount in self-driving vehicles. Data center networking standards such as Gigabit Ethernet are being adapted for automobiles, and the industry is stepping up to help make sure that these Data Centers on Wheels are not only safe, but secure.
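    The 4 TB/day figure quoted above can be sanity-checked with simple arithmetic; averaged over 24 hours, it implies a sustained data rate of roughly 370 Mbit/s.

```python
# Back-of-envelope check of the "4,000 GB per day" figure quoted above.
GB_PER_DAY = 4000
SECONDS_PER_DAY = 24 * 60 * 60  # 86,400 seconds

mb_per_s = GB_PER_DAY * 1000 / SECONDS_PER_DAY  # ~46.3 MB/s sustained
mbit_per_s = mb_per_s * 8                       # ~370 Mbit/s sustained
```

    That average alone already exceeds a legacy 100 Mbit/s in-vehicle link, and peak rates during active driving would be far higher.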
  • June 17, 2017

    Marvell Technology Instrumental in Ground-Breaking New Open Source NAS Solution

    By Maen Suleiman, Senior Software Product Line Manager, Marvell

    The quantity of data storage that each individual now expects to be able to have access to has ramped up dramatically over the course of the last few years. This has been predominantly fueled by society’s ravenous hunger for various forms of multimedia entertainment and more immersive gaming, plus our growing obsession with taking photos or videos of all manner of things that we experience during an average day. 

    The emergence of the ‘connected home’ phenomenon, along with greater use of wearable technology and the enhanced functionality being incorporated into each new generation of smartphone handset, have all contributed to our increasingly data oriented lives. As a result, each of us is generating, downloading and transferring larger amounts of data-heavy content than would have been conceivable even a short while ago. For example, market research firm InfoTrends has estimated that consumers worldwide will be responsible for taking over 1.2 trillion new photos during 2017 (more than double the figure from 5 years ago). Furthermore, there are certainly no indications that the dynamics driving this will weaken and everything will start to slow down. On the contrary, it is likely that the pace will only continue to accelerate. 

    If individuals are to keep on amassing personal data at current rates, then it is clear that they will need access to a new form of flexible storage solution that is up to the job. In a report compiled by industry analysts at Technavio, the global consumer network attached storage (NAS) market is predicted to grow accordingly - witnessing an impressive 11% compound annual growth between now and the end of this decade. 

    Though it must be acknowledged that we are shifting an increasing proportion of our overall data storage needs to the cloud, the synching of large media files for use in home environments can often prove impractical because of latency issues. There are also serious security issues associated with relying on cloud-based storage to keep certain personal data, and these need to be given due consideration. 

    Start-up company Kobol has recently initiated a crowdfunding campaign to garner financial backing for its Helios4 offering. The first of its kind, this is an open source, open hardware NAS solution that allows the storing and sharing of music, photos and movies through a connection to the user’s home network. It presents consumers with a secure, flexible and rapidly accessible data storage reserve with a capacity of up to 40 Terabytes (which equates to around 700,000 hours of music, 20,000 hours of movies or 12 million photos). 

    Helios4 has small dimensions. Built-in RAID redundancy is included to assure ongoing reliability. This means that even if one of the 4 hard drives (each delivering 10 Terabytes) were to crash, the user’s content would remain safely stored, as the data is mirrored onto another of its drives. The result is a compact, cost-effective and energy-saving storage solution, which acts like a ‘personal cloud’. 
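    The trade-off between raw and usable capacity can be sketched as follows. This is a simplification, and the actual RAID level on a Helios4 is chosen by the user when configuring the array.

```python
# Usable capacity of an n-drive array under common software RAID levels
# (a sketch; real arrays lose a little more to filesystem overhead).
def usable_tb(n_drives: int, drive_tb: float, level: str) -> float:
    if level == "raid0":              # striping, no redundancy
        return n_drives * drive_tb
    if level in ("raid1", "raid10"):  # mirroring: half the raw capacity
        return n_drives * drive_tb / 2
    if level == "raid5":              # one drive's worth of parity
        return (n_drives - 1) * drive_tb
    if level == "raid6":              # two drives' worth of parity
        return (n_drives - 2) * drive_tb
    raise ValueError(level)
```

    With 4 x 10 TB drives, the 40 TB quoted above is the raw total: mirroring as described leaves 20 TB usable, while single-parity RAID 5 would leave 30 TB.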

       Figure 1: Schematic showing the interface structure of Helios4 powered by ARMADA 388 SoC 

     

    Figure 2: The component parts that make up the Helios4 kit 

    Inspired by the open hardware, collaborative philosophy, Helios4 can be supplied as a kit that engineers can assemble themselves. Otherwise, for those with less engineering experience, it comes as a straightforward out-of-the-box solution. It offers a high degree of flexibility and a broad array of different connectivity options. 

    At the heart of the Helios4’s design is a sophisticated ARMADA 388 32-bit ARM-based system-on-chip (SoC) from Marvell, which combines high performance with power-frugal operation. Based on low-power 28nm semiconductor technology, its dual-core ARM Cortex-A9 processing resource is capable of running at speeds of up to 1.8 GHz. USB 3.0 SuperSpeed and SATA 3.0 ports are included so that elevated connectivity levels can be supported. Cryptographic mechanisms are also integrated to maintain superior system security. 

    By clicking on the following link you can learn more about the Helios4 Kickstarter campaign. For those interested in getting involved, the deadline to make a contribution is 19th June.  

  • June 07, 2017

    Community Platform Allows Easy Adoption of ARM 64-bit in Data Center, Networking and Storage Ecosystems

    By Maen Suleiman, Senior Software Product Line Manager, Marvell

    Marvell MACCHIATObin community board is first-of-its-kind, high-end ARM 64-bit networking and storage community board 

    The increasing availability of high-speed internet services is connecting people in novel and often surprising ways, and creating a raft of applications for data centers. Cloud computing, Big Data and the Internet of Things (IoT) are all starting to play a major role within the industry. 

    These opportunities call for innovative solutions to handle the challenges they present, many of which have not been encountered before in IT. The industry is answering that call through technologies and concepts such as software defined networking (SDN), network function virtualization (NFV) and distributed storage. Making the most of these technologies and unleashing the potential of the new applications requires a collaborative approach. The distributed nature and complexity of the solutions calls for input from many different market participants. 

    A key way to foster such collaboration is through open-source ecosystems. The rise of Linux has demonstrated the effectiveness of such ecosystems and has helped steer the industry towards adopting open-source solutions. (Examples: AT&T Runs Open Source White Box Switch in its Live Network; SnapRoute and Dell EMC to Help Advance Linux Foundation's OpenSwitch Project; Nokia launches AirFrame Data Center for the Open Platform NFV community.) 

    Communities have come together through Linux to provide additional value for the ecosystem. One example is the Linux Foundation, which currently sponsors more than 50 open source projects. Its activities cover various parts of the industry, from IoT (IoTivity, EdgeX Foundry) to full NFV solutions such as the Open Platform for NFV (OPNFV). This is something that would have been hard to conceive even a couple of years ago without the wide market acceptance of open-source communities and solutions. 

    Although there are numerous important open-source software projects for data-center applications, the hardware on which to run them and evaluate solutions has been in short supply. There are many ARM® development boards that have been developed and manufactured, but they primarily focus on simple applications. 

    All these open source software ecosystems require a development platform that can provide a high-performance central processing unit (CPU), high-speed network connectivity and large memory support. But they also need to be accessible and affordable to ARM developers. Marvell MACCHIATObin® is the first ARM 64-bit community platform for open-source software communities that provides solutions for, among others, SDN, NFV and Distributed Storage. 

    A high-performance ARM 64-bit community platform 

    The Marvell MACCHIATObin community board is a mini-ITX form-factor ARM 64-bit network- and storage-oriented community platform. It is based on the Marvell hyperscale SBSA-compliant ARMADA® 8040 system on chip (SoC) that features four high-performance Cortex®-A72 ARM 64-bit CPUs. The ARM Cortex-A72 is the latest and most powerful ARM 64-bit CPU available and supports virtualization, an increasingly important capability for data center applications. 

    Together with the quad-core platform, the ARMADA 8040 SoC provides two 10G Ethernet interfaces, three SATA 3.0 interfaces and support for up to 16GB of DDR4 memory to handle highly complex applications. This power does not come at the cost of affordability: the Marvell MACCHIATObin community board is priced at $349. As a result, the Marvell MACCHIATObin community board is the first affordable high-performance ARM 64-bit networking and storage community platform of its kind.

    SolidRun (https://www.solid-run.com/) started shipping the Marvell MACCHIATObin community board in March 2017, providing open-source communities with early access to the hardware. 

    The Marvell MACCHIATObin community board is easy to deploy. It uses the compact mini-ITX form factor, enabling developers to purchase one of the many cases based on the popular standard mini-ITX case to meet their requirements. The ARMADA 8040 SoC itself is SBSA-compliant (http://infocenter.arm.com/help/topic/com.arm.doc.den0029/) to offer unified extensible firmware interface (UEFI) support. 

    The ARMADA 8040 SoC includes an advanced network packet processor that supports features such as parsing, classification, QoS mapping, shaping and metering. In addition, the SoC provides two security engines that can perform full IPsec, DTLS and other protocol-offload functions at 10G rates. To handle high-performance RAID 5/6 support, the ARMADA 8040 SoC employs high-speed DMA and XOR engines. 
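    The XOR engines mentioned above accelerate exactly the parity arithmetic at the heart of RAID 5. A minimal pure-Python illustration of the principle (not the SoC's actual offload interface):

```python
# RAID 5 parity in miniature: the parity block is the byte-wise XOR of
# the data blocks, and any single lost block can be rebuilt by XOR-ing
# the survivors with the parity.
def xor_blocks(blocks: list) -> bytes:
    """Byte-wise XOR of equal-length blocks."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

data = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_blocks(data)

# Simulate losing data[1]: XOR of the surviving blocks and the parity
# reconstructs the missing one.
recovered = xor_blocks([data[0], data[2], parity])
```

    A hardware XOR engine performs this same computation across large stripes at line rate, which is why it matters for RAID 5/6 throughput.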

    For hardware expansion, the Marvell MACCHIATObin community board provides one PCIe 3.0 x4 slot and a USB 3.0 host connector. For non-volatile storage, options include a built-in eMMC device and a micro-SD card connector. Mass storage is available through three SATA 3.0 connectors. For debug, developers can access the board’s processors through a choice of a virtual UART running over the micro-USB connector, a 20-pin connector for JTAG access, or two UART headers. The Marvell MACCHIATObin community board technical specifications can be found here: MACCHIATObin Specification

    Open source software enables advanced applications 

    The Marvell MACCHIATObin community board comes with rich open source software that includes ARM Trusted Firmware (ATF), U-Boot, UEFI, Linux kernel, Yocto, OpenWrt, OpenDataPlane (ODP), Data Plane Development Kit (DPDK), netmap and others; many of the Marvell MACCHIATObin open source software core components are available at: https://github.com/orgs/MarvellEmbeddedProcessors/

    To provide the Marvell MACCHIATObin community board with ready-made support for the open-source platforms used at the edge and data centers for SDN, NFV and similar applications, standard operating systems like SUSE Linux Enterprise, CentOS, Ubuntu and others should boot and run seamlessly on the Marvell MACCHIATObin community board. 

    As the ARMADA 8040 SoC is SBSA compliant and supports UEFI with ACPI, along with Marvell’s upstreaming of Linux kernel support, standard operating systems can be enabled on the Marvell MACCHIATObin community board without the need of special porting. 

    On top of this core software, a wide variety of ecosystem applications needed for the data center and edge applications can be assembled. 

    For example, the ARMADA 8040 SoC’s high-speed networking and security engines will enable the kernel netdev community to develop and maintain features such as XDP and other kernel network features on ARM 64-bit platforms. The ARMADA 8040 SoC security engine will enable many other Linux kernel open-source communities to implement new offloads. 

    Thanks to the virtualization support available on the ARM Cortex-A72 processors, virtualization technology projects such as KVM and XEN can be enabled on the platform; container technologies like LXC and Docker can also be enabled to maximize data center flexibility and enable a virtual CPE ecosystem where the Marvell MACCHIATObin community board can be used to develop edge applications on a 64-bit ARM platform. 

    In addition to the mainline Linux kernel, Marvell is upstreaming U-Boot and UEFI, and is set to upstream and open the Marvell MACCHIATObin ODP and DPDK support. This makes the Marvell MACCHIATObin board an ideal community platform for both communities, and will open the door to related communities who have based their ecosystems on ODP or DPDK. These may be user-space network-stack communities such as OpenFastPath and FD.io, or virtual switching technologies that can make use of both the ARMADA 8040 SoC virtualization support and networking capabilities, such as Open vSwitch (OVS) or Vector Packet Processing (VPP). Similar to ODP and DPDK, Marvell MACCHIATObin netmap support can enable the VALE virtual switching technology or security ecosystems such as pfSense.

    Thanks to its hardware features and upstreamed software support, the Marvell MACCHIATObin community board is not limited to data center SDN and NFV applications. It is highly suited as a development platform for network and security products and applications such as network routers, security appliances, IoT gateways, industrial computing, home customer-provided equipment (CPE) platforms and wireless backhaul controllers; a new level of scalable and modular solutions can be further achieved when combining the Marvell MACCHIATObin community board with Marvell switches and PHY products. 

    Summary 

    The Marvell MACCHIATObin is the first of its kind: a high-performance, cost-effective networking community platform. The board supports a rich software ecosystem and makes a high-performance, high-speed-networking ARM 64-bit community platform available at a price that is affordable for the majority of ARM developers, software vendors and other interested companies. It makes ARM 64-bit far more accessible than ever before for developers of solutions for use in data centers, networking and storage. 

     

  • May 31, 2017

    Further Empowerment of the Wireless Office

    By Yaron Zimmerman, Senior Staff Product Line Manager, Marvell

    To benefit from greater convenience for employees and more straightforward implementation, office environments are steadily migrating towards wholesale wireless connectivity. Thanks to this, office staff will no longer be limited by where cables/ports are available, resulting in a much higher degree of mobility. They can remain constantly connected, and their work activities won’t be hindered - whether they are at their desk, in a meeting or even in the cafeteria. This will make enterprises much better aligned with our modern working culture, where hot desking and bring your own device (BYOD) are becoming increasingly commonplace. 

    The main dynamic responsible for accelerating this trend will be the emergence of 802.11ac Wave 2 Wi-Fi technology. With the prospect of exploiting Gigabit data rates (thereby enabling the streaming of video content, faster download speeds, higher quality video conferencing, etc.), it is clearly going to have considerable appeal. In addition, this protocol offers extended range and greater bandwidth through multi-user MIMO operation, so that a larger number of users can be supported simultaneously. This will be advantageous to the enterprise, as fewer access points per user will be required. 

    An example office floorplan for an enterprise/campus is shown in Figure 1 (with a large number of cubicles and some meeting rooms). Though scenarios vary, generally speaking an enterprise/campus is likely to occupy a total floor space of between 20,000 and 45,000 square feet. With one 802.11ac access point able to cover an area of 3,000 to 4,000 square feet, a wireless office would need a total of about 8 to 12 access points to be fully effective. This density should be more than acceptable for average voice and data needs. Supporting these access points will be a high-capacity wireline backbone.
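    A rough sizing check based on the figures above (real Wi-Fi planning also has to account for walls, interference and coverage overlap, which is why the quoted 8-12 sits inside the simple area-based range):

```python
import math

# Area-based access-point count: total floor space divided by the
# coverage area of one 802.11ac access point, rounded up.
def access_points(floor_sqft: float, coverage_sqft: float) -> int:
    return math.ceil(floor_sqft / coverage_sqft)

low = access_points(20_000, 4000)   # best case: 5 APs
high = access_points(45_000, 3000)  # worst case: 15 APs
```

    The pure-area result of 5 to 15 access points brackets the 8 to 12 cited in the post once practical overlap is factored in.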

    Increasingly, rather than employing traditional 10 Gigabit Ethernet infrastructure, the enterprise/campus backbone is going to be based on 25 Gigabit Ethernet technology. It is expected that this will see widespread uptake in newly constructed office buildings over the next 2-3 years as the related optics continue to become more affordable. Clearly enterprises want to tap into the enhanced performance offered by 802.11ac, but they have to do this while adhering to stringent budgetary constraints. As the data capacity at the backbone gets raised upwards, so will the complexity of the hierarchical structure that needs to be placed underneath it, consisting of extensive intermediary switching technology. Well, that’s what conventional thinking would tell us. 

    Before embarking on a 25 Gigabit Ethernet/802.11ac implementation, enterprises have to be fully aware of what all this entails. As well as the initial investment associated with the hardware-heavy arrangement just outlined, there are also ongoing operational costs to consider. By aggregating the access points into a port extender that connects directly to the central control bridge switch over the 25 Gigabit Ethernet backbone, it is possible to significantly simplify the hierarchical structure - effectively eliminating a layer of unneeded complexity from the system. 

    Through its Passive Intelligent Port Extender (PIPE) technology Marvell is doing just that. This product offering is unique to the market, as other port extenders currently available were not originally designed for that purpose and therefore exhibit compromises in their performance, price and power. PIPE is, in contrast, an optimized solution that is able to fully leverage the IEEE 802.1BR bridge port extension standard - dispensing with the need for expensive intermediary switches between the control bridge and the access point level and reducing the roll-out costs as a result. It delivers markedly higher throughput, as the aggregating of multiple 802.11ac access points to 10 Gigabit Ethernet switches has been avoided. With fewer network elements to manage, there is some reduction in terms of the ongoing running costs too. 

    PIPE means that enterprises can future-proof their office data communication infrastructure - starting with 10 Gigabit Ethernet, then upgrading to 25 Gigabit Ethernet when it is needed. The number of ports it incorporates is a good match for the number of access points that an enterprise/campus will need to address the wireless connectivity demands of its workforce. It enables dual homing functionality, so that elevated service reliability and resiliency are both assured through system redundancy. In addition, support for Power-over-Ethernet (PoE) allows access points to connect to both a power supply and the data network through a single cable - further facilitating the deployment process.

  • May 23, 2017

    Marvell MACCHIATObin Community Board Now Shipping

    By Maen Suleiman, Senior Software Product Line Manager, Marvell

    First-of-its-kind community platform makes ARM-64bit accessible for data center, networking and storage solutions developers

    As network infrastructure continues to transition to Software-Defined Networking (SDN) and Network Functions Virtualization (NFV), the industry is in great need of cost-optimized hardware platforms coupled with robust software support for the development of a variety of networking, security and storage solutions. The answer is finally here! 

    Now, with the shipping of the Marvell MACCHIATObin™ community board, developers and companies have access to a high-performance, affordable ARM®-based platform with the required technologies, such as an ARMv8 64-bit CPU, virtualization, high-speed networking and security accelerators, and the added benefit of open source software. SolidRun started shipping the Marvell MACCHIATObin community board in March 2017, providing open-source communities with early access to the hardware.


    The Marvell MACCHIATObin community board is a mini-ITX form-factor ARMv8 64-bit network- and storage-oriented community platform. It is based on the Marvell® hyperscale SBSA-compliant ARMADA® 8040 system on chip (SoC) (http://www.marvell.com/embedded-processors/armada-80xx/) that features quad-core high-performance Cortex®-A72 ARM 64-bit CPUs. 

    Together with the quad-core Cortex-A72 ARM64bit CPUs, the Marvell MACCHIATObin community board provides two 10G Ethernet interfaces, three SATA 3.0 interfaces and support for up to 16GB of DDR4 memory to handle higher performance data center applications. This power does not come at the cost of affordability: the Marvell MACCHIATObin community board is priced at $349. As a result, it is the first affordable high-performance ARM 64bit networking and storage community platform of its kind. 

    The Marvell MACCHIATObin community board is easy to deploy. It uses the compact mini-ITX form factor, enabling developers and companies to purchase one of the many cases based on the popular standard mini-ITX case to meet their requirements. The ARMADA 8040 SoC itself is SBSA-compliant to offer unified extensible firmware interface (UEFI) support. You can find the full specification at: http://wiki.macchiatobin.net/tiki-index.php?page=About+MACCHIATObin

    To provide the Marvell MACCHIATObin community board with ready-made support for the open-source platforms used in SDN, NFV and similar applications, Marvell is upstreaming MACCHIATObin software support to the Linux kernel, U-Boot and UEFI, and is set to upstream and open the Marvell MACCHIATObin community board for ODP and DPDK support. 

    In addition to upstreaming the MACCHIATObin software support, Marvell added MACCHIATObin support to the ARMADA 8040 SDK and plans to make the ARMADA 8040 SDK publicly available. Many of the ARMADA 8040 SDK components are available at: https://github.com/orgs/MarvellEmbeddedProcessors/

    For more information about the many innovative features of the Marvell MACCHIATObin community board, please visit: http://wiki.macchiatobin.net.  To place an order for the Marvell MACCHIATObin community board, please go to: http://macchiatobin.net/.

  • April 28, 2017

    Challenges of Autonomous Vehicles: How Ethernet in Automobiles Can Overcome Bandwidth Issues in Self-Driving Vehicles

    By Nick Ilyadis

    Drivers are already getting used to what used to be “cool new features” that have now become “can’t live without” technologies, such as the backup camera, blind spot alert or parking assist. Each of these technologies streams information, or data, within the car, and as automotive technology evolves, more and more features will be added. But when it comes to autonomous vehicles, the amount of technology and data streaming into the car to be processed increases exponentially. Autonomous vehicles gather multiple streams of information/data from sensors, radar, radios, IR sensors and cameras. This goes beyond the current Advanced Driver Assist Systems (ADAS) or In-Vehicle Infotainment (IVI). The autonomous car will be acutely aware of its surroundings, running sophisticated algorithms that make decisions in order to drive the vehicle. However, self-driving cars will also be processing vehicle-to-vehicle communications, as well as connecting to a number of external devices that will be installed in the highway of the future, as automotive communication infrastructures develop. All of these features and processes require bandwidth - and a lot of it: Start the car; drive; turn; red light, stop; - PEDESTRIAN - BRAKE! This would be a very bad time for the internal vehicle networks to run out of bandwidth.

    Add to the driving functions the simultaneous infotainment streams for each passenger, vehicle Internet capabilities, etc., and the current 100 megabit-per-second (Mbit/s) 100BASE-T1 Ethernet bandwidth used in automotive is quickly strained. This is paving the way (pun intended) for 1000BASE-T1 Gigabit Ethernet (GbE) automotive networks. Ethernet has long been the economical volume workhorse, with millions of miles of cabling in buildings the world over. The IEEE 802.3 Ethernet Working Group has therefore endorsed GbE as the next network bandwidth standard in automotive.
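    A toy bandwidth budget makes the strain concrete. The stream names and data rates below are illustrative assumptions for the sake of the arithmetic, not figures from the post:

```python
# Hypothetical in-vehicle traffic budget (rates in Mbit/s are assumed,
# not measured) showing why a 100 Mbit/s link runs out of headroom.
streams_mbps = {
    "rear_camera": 80,        # one compressed HD camera feed
    "surround_cameras": 240,  # four additional cameras
    "radar_lidar": 100,       # sensor fusion inputs
    "infotainment": 50,       # passenger media streams
}

total = sum(streams_mbps.values())   # 470 Mbit/s aggregate
fits_fast_ethernet = total <= 100    # False: 100BASE-T1 is saturated
fits_gige = total <= 1000            # True: 1000BASE-T1 copes
```

    Even with modest per-stream assumptions, the aggregate overwhelms a 100 Mbit/s backbone while fitting comfortably within Gigabit Ethernet.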

    From Car-jacking to Car-hacking—Security Critical 

    Another major factor for automotive networking is security. In addition to the many technology features and processes needed for driving and entertainment, security is a major concern for cars, especially autonomous cars. Science fiction movies in which hacked cars override the driver are scary enough, but in real life this would be beyond a nightmare. Automotive security to prevent spyware, whether planted by a rogue mechanic or a remote hacker, will require strong authentication to protect privacy and passenger safety. Cars of the future will be able to reject any added devices that aren’t authenticated, as well as any external intrusion through the open communication channels of the vehicle.

    This is why companies like Marvell have taken a leadership role with organizations like IEEE to help create open standards, such as GbE for automotive, to keep moving automotive technologies forward. (See the IEEE 2014 Automotive Day presentation by Alex Tan on the benefits of designing 1000BASE-T1 into automotive architectures: http://standards.ieee.org/events/automotive/2014/02_Designing_1000BASE-T1_Into_Automotive_Architectures.pdf.)

    Technology to Drive Next-Generation Automotive Networking

    Marvell’s Automotive Ethernet Networking technology is capable of taking what used to be the separate domains of the car — infotainment, driver assist, body electronics and control — and connecting them together to provide a high-bandwidth, standards-based data backbone for the vehicle. For example, the Marvell 88Q2112 is the industry’s first 1000BASE-T1 automotive Ethernet PHY transceiver compliant with the IEEE 802.3bp 1000BASE-T1 standard. The Marvell 88Q2112 supports the market’s highest in-vehicle connectivity bandwidth and is designed to meet the rigorous EMI requirements of an automotive system. The 1000BASE-T1 standard allows high-speed, bi-directional data traffic and supports multiple in-vehicle HD video streams, including uncompressed 720p30 camera video and up to 4K resolution, all over a lightweight, low-cost single pair cable. The Marvell 88Q1010 low-power PHY device supports 100BASE-T1 and compressed 1080p60 video for infotainment, data transport and camera systems. And finally, to round out its automotive networking solutions, Marvell also offers a series of 7-port Ethernet switches.

    Harnessing the low cost and high bandwidth of Ethernet brings many advantages to next-generation automotive architecture, including the flexibility to add new applications. In other words, allowing the possibility to build for features that haven’t even been thought up yet. Because while the car of the future may drive itself, it takes a consortium of technology leaders to pave the way.

    # # #

         

  • April 27, 2017

    The Challenges Of 11ac Wave 2 and 11ax in Wi-Fi Deployments: How to Cost-Effectively Upgrade to 2.5GBASE-T and 5GBASE-T

    By Nick Ilyadis

    The Insatiable Need for Bandwidth: Standards Trying to Keep Up

    With the push for more and more Wi-Fi bandwidth, the WLAN industry, its standards committees and the Ethernet switch manufacturers are having a hard time keeping up with the need for more speed. As the industry prepares for upgrading to 802.11ac Wave 2 and the promise of 11ax, the ability of Ethernet over existing copper wiring to meet the increased transfer speeds is being challenged. And what really can’t keep up are the budgets that would be needed to physically rewire the millions of miles of cabling in the world today.

    The Latest on the Latest Wireless Networking Standards: IEEE 802.11ac Wave 2 and 802.11ax

    The latest 802.11ac IEEE standard is now in Wave 2. According to Webopedia’s definition: the 802.11ac-2013 update, or 802.11ac Wave 2, is an addendum to the original 802.11ac wireless specification that utilizes Multi-User, Multiple-Input, Multiple-Output (MU-MIMO) technology and other advancements to help increase theoretical maximum wireless speeds from 3.47 gigabits-per-second (Gbps) in the original spec to 6.93 Gbps in 802.11ac Wave 2. The original 802.11ac spec itself served as a performance boost over the 802.11n specification that preceded it, increasing wireless speeds by up to 3x. As with the initial specification, 802.11ac Wave 2 also provides backward compatibility with previous 802.11 specs, including 802.11n.
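    Those 3.47 and 6.93 Gbps figures aren’t arbitrary: they fall out of the 802.11ac PHY parameters. A back-of-the-envelope sketch (using the standard’s published maximums — 160 MHz channel, 256-QAM, 5/6 coding, short guard interval — not anything vendor-specific):

```python
# Theoretical 802.11ac PHY data rate, derived from the spec's maximums.
def dot11ac_max_rate(streams=8, data_subcarriers=468, bits_per_symbol=8,
                     coding_rate=5/6, symbol_time_us=3.6):
    """160 MHz channel -> 468 data subcarriers; 256-QAM -> 8 bits/subcarrier;
    5/6 coding rate; 3.6 us OFDM symbol with short guard interval."""
    bits_per_ofdm_symbol = data_subcarriers * bits_per_symbol * coding_rate
    rate_bps = streams * bits_per_ofdm_symbol / (symbol_time_us * 1e-6)
    return rate_bps / 1e9  # Gbps

print(round(dot11ac_max_rate(streams=8), 2))  # Wave 2, 8 streams -> 6.93
print(round(dot11ac_max_rate(streams=4), 2))  # original spec, 4 streams -> 3.47
```

    Doubling the spatial streams (which MU-MIMO makes practical to use) is exactly what doubles the theoretical maximum from 3.47 to 6.93 Gbps.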

    IEEE has also noted that over the past two decades, IEEE 802.11 wireless local area networks (WLANs) have experienced tremendous growth with the proliferation of 802.11 devices, becoming a major means of Internet access for mobile computing. Therefore, the IEEE 802.11ax specification is under development as well. Giving equal time to Wikipedia, its definition of 802.11ax is: a type of WLAN designed to improve overall spectral efficiency in dense deployment scenarios, with a predicted top speed of around 10 Gbps. It works in the 2.4GHz and 5GHz bands and, in addition to MIMO and MU-MIMO, it introduces the Orthogonal Frequency-Division Multiple Access (OFDMA) technique to improve spectral efficiency, as well as higher-order 1024 Quadrature Amplitude Modulation (1024-QAM) support for better throughput. Though the nominal data rate is just 37 percent higher than 802.11ac, the new amendment will allow a 4x increase in user throughput. This new specification is due to be publicly released in 2019.

    Faster “Cats”: Cat 5, 5e, 6, 6e and On

    And yes, even cabling is moving up to keep pace. You’ve got Cat 5, 5e, 6, 6e and 7 (search: Differences between CAT5, CAT5e, CAT6 and CAT6e Cables for specifics), but suffice it to say, each iteration can move more data faster, from the ubiquitous Cat 5 at 100 Mbps over 100 MHz across 100 meters of cabling, to Cat 6e reaching 10,000 Mbps (10 Gbps) at 500 MHz over 100 meters. Cat 7 can operate at 600 MHz over 100 meters, with more “Cats” on the way. All of this, of course, is to keep up with streaming, communications, big data or anything else being thrown at the network.

    How to Keep Up Cost-Effectively with 2.5GBASE-T and 5GBASE-T

    What this all boils down to is this: no matter how fast the network standards or cables get, the migration to new technologies will always be balanced against the cost of attaining those speeds and technologies in the physical realm. In other words, balancing the physical labor costs of upgrading all those millions of miles of cabling in buildings throughout the world, as well as the switches and other access points. The labor costs alone are a reason why companies often seek to stay in the wiring closet as long as possible, where physical layer (PHY) devices, such as access points and switches, remain easier and more cost-effective to swap out than existing cabling is to replace.

    This is where Marvell steps in with a whole solution. Marvell’s products, including the Avastar wireless products, Alaska PHYs and Prestera switches, provide an optimized solution that supports speeds of up to 2.5 and 5.0 Gbps using existing cabling. For example, the Marvell Avastar 88W8997 wireless processor was the industry's first 28nm, 11ac (Wave 2), 2x2 MU-MIMO combo with full support for Bluetooth 4.2 and the future Bluetooth 5.0. To address switching, Marvell created the Marvell® Prestera® DX family of packet processors, which enables secure, high-density and intelligent 10GbE/2.5GbE/1GbE switching solutions at the access/edge and aggregation layers of Campus, Industrial, Small Medium Business (SMB) and Service Provider networks. And finally, the Marvell Alaska family of Ethernet transceivers are PHY devices that feature the industry's lowest power, highest performance and smallest form factor.

    These transceivers help optimize form factors, as well as multiple port and cable options, with efficient power consumption and simple plug-and-play functionality, offering the most advanced and complete PHY products on the broadband market to support 2.5G and 5G data rates over Cat5e and Cat6 cables.

    You mean, I don’t have to leave the wiring closet?

    The longer changes can be made in the wiring closet, rather than paying for the electricians and cabling needed to rewire, the better companies can balance faster throughput against cost. The Marvell Avastar, Prestera and Alaska product families help address the upgrade to 2.5G- and 5GBASE-T over existing copper wire to keep up with that insatiable demand for throughput, without taking you out of the wiring closet. See you inside!

    # # #

  • April 27, 2017

    Top Eight Data Center Trends For Keeping up with High Data Bandwidth Demand

    By Nick Ilyadis, VP of Portfolio Technology, Marvell

    IoT devices, online video streaming, increased throughput for servers and storage solutions – all have contributed to the massive explosion of data circulating through data centers and the increasing need for greater bandwidth. IT teams have been chartered with finding the solutions to support higher bandwidth to attain faster data speeds, yet must do it in the most cost-efficient way - a formidable task indeed. Marvell recently shared with eWeek about what it sees as the top trends in data centers as they try to keep up with the unprecedented demand for higher and higher bandwidth. Below are the top eight data center trends Marvell has identified as IT teams develop the blueprint for achieving high bandwidth, cost-effective solutions to keep up with explosive data growth.


    1.) Higher Adoption of 25GbE

    To support this increased need for high bandwidth, companies have been evaluating whether to adopt 40GbE to the server as an upgrade from 10GbE. But 25GbE provides more cost-effective throughput than 40GbE, since 40GbE requires more power and costlier cables. Therefore, 25GbE is becoming acknowledged as the optimal next-generation Ethernet speed for connecting servers as data centers seek to balance cost/performance tradeoffs.

    2.) The Ability to Bundle and Unbundle Channels

    Historically, data centers have upgraded to higher link speeds by aggregating multiple single-lane 10GbE network physical layers. Today, 100Gbps can be achieved by bundling four 25Gbps links together or alternatively, 100GbE can also be unbundled into four independent 25GbE channels. The ability to bundle and unbundle 100GbE gives IT teams wider flexibility in moving data across their network and in adapting to changing customer needs.
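    The bundling/unbundling arithmetic can be sketched in a few lines (purely illustrative — the real aggregation happens in the MAC/PCS silicon, not in software):

```python
# Sketch of 100GbE lane bundling and unbundling over 25G lanes.
LANE_GBPS = 25  # a single 25GbE lane

def bundle(lanes):
    """Aggregate N x 25G lanes into one logical link, e.g. 4 -> 100GbE."""
    return lanes * LANE_GBPS

def unbundle(total_gbps):
    """Break a bundled port back into independent 25GbE channels."""
    assert total_gbps % LANE_GBPS == 0, "must be a whole number of lanes"
    return [LANE_GBPS] * (total_gbps // LANE_GBPS)

print(bundle(4))      # 100 -> four lanes make a 100GbE link
print(unbundle(100))  # [25, 25, 25, 25] -> four independent 25GbE channels
```

    The same four physical lanes can thus serve either as one fat pipe to a spine switch or as four server-facing ports, which is the flexibility the text describes.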

    3.)  Big Data Analytics 

    Increased data means increased traffic. Real-time analytics allow organizations to monitor and make adjustments as needed to effectively allocate precious network bandwidth and resources. Leveraging analytics has become a key tool for data center operators to maximize their investment.

     4.) Growing Demand for Higher-Density Switches

    Advances in semiconductor processes to 28nm and 16nm have allowed network switches to become smaller and smaller. In the past, a 48-port switch required two chips with advanced port configurations. But today, the same result can be achieved on a single chip, which not only keeps costs down, but improves power efficiency.

    5.) Power Efficiency Needed to Keep Costs Down

    Energy costs are often among the highest costs incurred by data centers. Ethernet solutions designed with greater power efficiency help data centers transition to the higher GbE rates needed to keep up with the higher bandwidth demands, while keeping energy costs in check.

    6.) More Outsourcing of IT to the Cloud 

    IT organizations are not only adopting 25GbE to address increasing bandwidth demands, they are also turning to the cloud. By outsourcing IT to the cloud, organizations are able to free up capacity on their own networks while maintaining bandwidth speeds.

    7.) Using NVM Express-based Storage to Maximize Performance 

    NVM Express® (NVMe™) is a scalable host controller interface designed to address the needs of enterprise, data center and client systems that utilize PCIe-based solid-state drives (SSDs). By using the NVMe protocol, data centers can exploit the full performance of SSDs, creating new compute models that no longer have the limitations of legacy rotational media. SSD performance can be maximized, while server clusters can be enabled to pool storage and share data access throughout the network.

    8.) Transition from Servers to Network Storage

    With the growing amount of data transferred across networks, more data centers are deploying storage on networks vs. servers. Ethernet technologies are being leveraged to attach storage to the network instead of legacy storage interconnects as the data center transitions from a traditional server model to networked storage.

     As shown above, IT teams are using a variety of technologies and methods to keep up with the explosive increase in data and higher needs for data center bandwidth. What methods are you employing to keep pace with the ever-increasing demands on the data center, and how do you try to keep energy usage and costs down?

    # # #

  • April 03, 2017

    How the Introduction of the Cell Phone Sparked Today’s Data Demands

    By Sander Arts, Interim VP of Marketing, Marvell

    Almost 44 years ago on April 3, 1973, an engineer named Martin Cooper walked down a street in Manhattan with a brick-shaped device in his hand and made history’s very first cell phone call. Weighing an impressive 2.5 pounds and standing 11 inches tall, the world’s first mobile device featured a single-line, text-only LED display screen. 


    A lot has changed since then. Phones have gotten smaller, faster and smarter, innovating at a pace that would have been unimaginable four decades ago. Today, phone calls are just one of the many capabilities that we expect from our mobile devices, in addition to browsing the internet, watching videos, finding directions, engaging in social media and more. All of these activities require the rapid movement and storage of data, drawing closer parallels to the original PC than Cooper’s 2.5 pound prototype. And that’s only the beginning – the demand for data has expanded far past mobile. 

    Data Demands: to Infinity and Beyond!

    Today’s consumers can access content from around the world almost instantaneously using a variety of devices, including smartphones, tablets, cars and even household appliances. Whether it’s a large-scale event such as Super Bowl LI or just another day, data usage is skyrocketing as we communicate with friends, family and strangers across the globe sharing ideas, uploading pictures, watching videos, playing games and much more. 

    According to a study by Domo, every minute in the U.S. consumers use over 18 million megabytes of wireless data. At the recent 2017 OCP U.S. Summit, Facebook shared that over 95 million photos and videos are posted on Instagram every day – and that’s only one app.  As our world becomes smarter and more connected, data demands will only continue to grow.


    The Next Generation of Data Movement and Storage

    At Marvell, we’re focused on helping our customers move and store data securely, reliably and efficiently as we transform data movement and storage across a range of markets, from the consumer to the cloud. With the staggering amount of data the world creates and moves every day, it’s hard to believe the humble beginnings of the technology we now take for granted. 

    What data demands will our future devices be tasked to support? Tweet us at @marvellsemi and let us know what you think!

  • March 17, 2017

    Three Days, Two Speaking Sessions and One New Product Line: Marvell Sets the (IEEE 802.1BR) Standard for Data Center Solutions at the 2017 OCP U.S. Summit

    By Michael Zimmerman, Vice President and General Manager, CSIBU, Marvell

    At last week’s 2017 OCP U.S. Summit, it was impossible to miss the buzz and activity happening at Marvell’s booth. Taking our mantra #MarvellOnTheMove to heart, the team worked tirelessly throughout the week to present and demo Marvell’s vision for the future of the data center, which came to fruition with the launch of our newest Prestera® PX Passive Intelligent Port Extender (PIPE) family. 

    But we’re getting ahead of ourselves… Marvell kicked off OCP with two speaking sessions from its leading technologists. Yaniv Kopelman, Networking CTO of the Networking Group, presented "Extending the Lifecycle of 3.2T Switches,” a discussion on the concept of port extender technology and how to apply it to future data center architecture. Michael Zimmerman, vice president and general manager of the Networking Group, then spoke on "Modular Networking" and teased Marvell's first modular solution based on port extender technology. 

    Throughout the show, customers, media and attendees visited Marvell’s booth to see our breakthrough innovations that are leading the disaggregation of the cloud network infrastructure industry. These products included: 

    Marvell's Prestera PX PIPE family purpose-built to reduce power consumption, complexity and cost in the data center 

    Marvell’s 88SS1092 NVMe SSD controller designed to help boost next-generation storage and data center systems 

    Marvell’s Prestera 98CX84xx switch family designed to help data centers break the 1W per 25G port barrier for 25G Top-of-Rack (ToR) applications 

    Marvell’s ARMADA® 64-bit ARM®-based modular SoCs developed to improve the flexibility, performance and efficiency of servers and network appliances in the data center 

    Marvell’s Alaska® C 100G/50G/25G Ethernet transceivers which enable low-power, high-performance and small form factor solutions 

    We’re especially excited to introduce our PIPE solution on the heels of OCP because of the dramatic impact we anticipate it will have on the data center… 

    Until now, data centers with 10GbE and 25GbE port speeds have been challenged with achieving lower operating expense (OPEX) and capital expenditure (CAPEX) costs as their bandwidth needs increase. As the industry’s first purpose-built port extender supporting the IEEE 802.1BR standard, Marvell’s PIPE solution is a revolutionary approach that makes it possible to deploy ToR switches at half the power and cost of a traditional Ethernet switch. 

    Marvell’s PIPE solution enables data centers to be architected with a very simple, low-cost, low-power port extender in place of a traditional ToR switch, pushing the heavy switching functionality upstream. As the industry today transitions from 10GbE to 25GbE and from 40GbE to 100GbE port speeds, data centers are also in need of a modular building block to bridge the variety of current and next-generation port speeds. Marvell’s PIPE family provides a flexible and scalable solution to simplify and accelerate such transitions, offering multiple configuration options of Ethernet connectivity for a range of port speeds and port densities.

    PIPE-Data

    Amidst all of the announcements, speaking sessions and demos, our very own George Hervey, principal architect, also sat down with Semiconductor Engineering’s Ed Sperling for a Tech Talk. In the white board session, George discussed the power efficiency of networking in the enterprise and how costs can be saved by rightsizing Ethernet equipment. 

    The 2017 OCP U.S. Summit was filled with activity for Marvell, and we can’t wait to see how our customers benefit from our suite of data center solutions. In the meantime, we’re here to help with all of your data center needs, questions and concerns as we watch the industry evolve. 

    What were some of your OCP highlights? Did you get a chance to stop by the Marvell booth at the show? Tweet us at @marvellsemi to let us know, and check out all of the activity from last week. We want to hear from you!

  • March 13, 2017

    Port Extender Technology Changes Network Switch Landscape

    By George Hervey, Principal Architect, Marvell

    Our lives are increasingly dependent on cloud-based computing and storage infrastructure. Whether at home, at work, or on the move with our smartphones and other mobile computing devices, cloud compute and storage resources are omnipresent. It is no surprise, therefore, that the demands on such infrastructure are growing at an alarming rate, especially as the trends of big data and the Internet of Things start to make their impact. With an increasing number of applications and users, demand is believed to be growing 30x per annum, and even up to 100x in some cases. Such growth leaves Moore’s law and new chip developments unable to keep up with the needs of the computing and network infrastructure. These factors are making data and communication network providers invest in multiple parallel computing and storage resources as a way of scaling to meet demand. It is now common for cloud data centers to have hundreds if not thousands of servers that need to be connected together. 

    Interconnecting all of these compute and storage appliances is becoming a real challenge, as more and more switches are required. Within a data center a classic approach to networking is a hierarchical one, with an individual rack using a leaf switch – also termed a top-of-rack or ToR switch – to connect within the rack, a spine switch for a series of racks, and a core switch for the whole center. And, like the servers and storage appliances themselves, these switches all need to be managed. In the recent past there have usually been one or two vendors of data center network switches and the associated management control software, but things are changing fast. Most of the leading cloud service providers, with their significant buying power and technical skills, recognised that they could save substantial cost by designing and building their own network equipment. Many in the data center industry saw this as the first step in disaggregating the network hardware and the management software controlling it. With no shortage of software engineers, the cloud providers took the management software development in-house while outsourcing the hardware design. While that, in part, satisfied the commercial needs of the data center operators, from a technical and operational management perspective nothing has been simplified, leaving a huge number of switches to be managed. 

    The first breakthrough to simplify network complexity came in 2009 with the introduction of what we now know as a port extender. The concept rests on the observation that many nodes in the network don’t need the extensive management capabilities most switches have. Essentially this introduces a parent/child relationship: the controlling switch, the parent, is the managed switch, and the child, the port extender, is fed from it. This port extender approach was ratified into the IEEE 802.1BR networking standard in 2012, and every network switch built today complies with this standard. With less technical complexity within the port extenders, the perceived benefits were lower per-unit cost compared to a full bridge switch, in addition to power savings. 
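    The parent/child split can be pictured with a small sketch, assuming a simplified model in which all class names and fields are illustrative (802.1BR itself defines E-TAG forwarding and much more detail than shown here):

```python
# Toy model of the 802.1BR parent/child relationship: one managed
# controlling bridge, many unmanaged port extenders.
class PortExtender:
    """Child device: just port I/O, no local forwarding or management logic."""
    def __init__(self, ports):
        self.ports = ports

class ControllingBridge:
    """Parent device: all management and switching decisions live here."""
    def __init__(self):
        self.extenders = {}   # channel ID -> attached extender
        self.next_id = 1

    def attach(self, extender):
        """Register a child extender; the parent hands out the channel ID."""
        chan = self.next_id
        self.extenders[chan] = extender
        self.next_id += 1
        return chan

bridge = ControllingBridge()
chan = bridge.attach(PortExtender(ports=48))
print(chan, bridge.extenders[chan].ports)  # 1 48
```

    The point of the structure is that adding 48 more ports adds no new management endpoint: operators still configure only the one controlling bridge.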

    The controlling bridge and port extender approach has certainly helped to drive simplicity into network switch management, but that’s not the end of the story. Look under the lid of a port extender and you’ll find the same switch chip being used as in the parent bridge. We have moved forward, sort of. Without a chip specifically designed as a port extender, switch vendors have continued to use their standard chipsets, without realising the potential cost and power savings. However, the truly modular approach to network switching has taken a leap forward with the launch of Marvell’s 802.1BR-compliant port extender IC, termed PIPE (passive intelligent port extender), enabling interoperability with a controlling bridge switch from any of the industry’s leading OEMs. It also offers attractive cost and power consumption benefits, addressing the shortcomings that took the shine off the initial interest in port extender technology. Seen as the second stage of network disaggregation, this approach effectively decouples the port connectivity from the processing power in the parent switch, creating a far more modular approach to networking. The parent switch no longer needs to know what type of equipment it is connecting to, so all the logic and processing can be focused on the parent, and the port I/O taken care of in the port extender. 

    Marvell’s Prestera® PIPE family targets data centers operating at 10GbE and 25GbE speeds that are challenged to achieve lower CAPEX and OPEX costs as the need for bandwidth increases. The Prestera PIPE family will facilitate the deployment of top-of-rack switches at half the cost and power consumption of a traditional Ethernet switch. The PIPE approach also includes a fast fail over and resiliency function, essential for providing continuity and high availability to critical infrastructure.

  • March 08, 2017

    NVMe-based Network Fabrics Blow Through Legacy Rotational Media Limitations in the Data Center: Speed and Cost Benefits of NVMe SSD Shared Storage Now in Its Second Generation

    By Nick Ilyadis, VP of Portfolio Technology, Marvell

    Marvell Debuts 88SS1092 Second-Gen NVM Express SSD Controller at OCP Summit  

    SSDs in the Data Center: NVMe and Where We’ve Been 

    When solid-state drives (SSDs) were first introduced into the data center, the infrastructure mandated that they work within the confines of the then-current bus technologies, such as Serial ATA (SATA) and Serial Attached SCSI (SAS), developed for rotational media. Even the fastest hard disk drives (HDDs), of course, couldn’t keep up with an SSD, but neither could those legacy interfaces, which created a bottleneck that hampered the full exploitation of SSD technology. PCI Express (PCIe) offered a suitable high-bandwidth bus technology already in place as a transport layer for networking, graphics and other add-in cards. It became the next viable option, but the PCIe interface still relied on old HDD-based SCSI or SATA protocols. Thus the NVM Express (NVMe) industry working group was formed to create a standardized set of protocols and commands developed for the PCIe bus, allowing multiple paths that could take advantage of the full benefits of SSDs in the data center. The NVMe specification was designed from the ground up to deliver high-bandwidth and low-latency storage access for current and future NVM technologies. 

    The NVMe interface provides an optimized command issue and completion path. It includes support for parallel operation by supporting up to 64K commands within a single I/O queue to the device. Additionally, support was added for many Enterprise capabilities like end-to-end data protection (compatible with T10 DIF and DIX standards), enhanced error reporting and virtualization. All-in-all, NVMe is a scalable host controller interface designed to address the needs of Enterprise, Data Center and Client systems that utilize PCIe-based solid-state drives to help maximize SSD performance. 
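    The paired submission/completion path described above can be sketched as a minimal model (the queueing concept only, assuming simplified command fields; the real spec defines exact 64-byte entry layouts, doorbells and interrupts):

```python
# Minimal model of an NVMe I/O queue pair: host submits commands into a
# submission queue (SQ); the controller consumes them and posts entries
# to the paired completion queue (CQ).
from collections import deque

MAX_QUEUE_DEPTH = 65536  # NVMe allows up to 64K commands per I/O queue

class NvmeQueuePair:
    def __init__(self, depth=MAX_QUEUE_DEPTH):
        self.depth = depth
        self.sq = deque()  # submission queue: host -> controller
        self.cq = deque()  # completion queue: controller -> host

    def submit(self, cid, opcode, lba, blocks):
        """Host side: enqueue a command (fields simplified)."""
        if len(self.sq) >= self.depth:
            raise RuntimeError("submission queue full")
        self.sq.append({"cid": cid, "op": opcode, "lba": lba, "nlb": blocks})

    def process_one(self):
        """Controller side: consume one command, post its completion."""
        cmd = self.sq.popleft()
        self.cq.append({"cid": cmd["cid"], "status": 0})  # 0 = success

qp = NvmeQueuePair(depth=4)
qp.submit(1, "read", lba=0, blocks=8)
qp.submit(2, "write", lba=8, blocks=8)
qp.process_one()
print(len(qp.sq), len(qp.cq))  # 1 1
```

    Because each core can own its own queue pair, commands complete independently and in parallel, which is where NVMe's performance advantage over single-queue SATA/SAS protocols comes from.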

    SSD Network Fabrics 

    New NVMe controllers from companies like Marvell allowed the data center to share storage data to further maximize cost and performance efficiencies. By creating SSD network fabrics, a cluster of SSDs can be formed to pool storage from individual servers and maximize overall data center storage. In addition, by creating a common enclosure for additional servers, data can be transported for shared data access. These new compute models therefore allow data centers to not only fully optimize the fast performance of SSDs, but more economically deploy those SSDs throughout the data center, lowering overall cost and streamlining maintenance. Instead of adding additional SSDs to individual servers, under-deployed SSDs can be tapped into and redeployed for use by over-allocated servers. 

    Here’s a simple example of how these network fabrics work: if a system has ten servers, each with an SSD sitting on the PCIe bus, an SSD cluster can be formed from those SSDs to provide not only a means for additional storage, but also a method to pool and share data access. If, let’s say, one server is only 10 percent utilized while another is over-allocated, that SSD cluster will allow more storage for the over-allocated server without having to add SSDs to the individual servers. Multiply the example by hundreds of servers, and the cost, maintenance and performance efficiencies skyrocket. 
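    A toy calculation makes the pooling benefit concrete (server names and capacities here are made up for illustration; a real NVMe fabric allocates at the block level, not with simple sums):

```python
# Toy model of pooling under-utilized SSD capacity across servers.
def pool_free_capacity(servers):
    """Sum the unused SSD capacity each server contributes to the shared pool."""
    return sum(s["ssd_gb"] - s["used_gb"] for s in servers)

servers = [
    {"name": "server-a", "ssd_gb": 1000, "used_gb": 100},  # 10% utilized
    {"name": "server-b", "ssd_gb": 1000, "used_gb": 950},  # over-allocated
]
print(pool_free_capacity(servers))  # 950 -> GB available without adding drives
```

    The over-allocated server can draw on server-a's 900 GB of idle capacity over the fabric instead of requiring a new drive, which is the cost and maintenance saving the text describes.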

    Marvell helped pave the way for these new types of compute models for the data center when it introduced its first NVMe SSD controller. That product supported up to four lanes of PCIe 3.0, and was suitable for full 4GB/s or 2GB/s end points depending on host system customization. It enabled unparalleled IOPS performance using the NVMe advanced Command Handling. In order to fully utilize the high-speed PCIe connection, Marvell’s innovative NVMe design facilitated PCIe link data flows by deploying massive hardware automation. This helped to alleviate the legacy host control bottlenecks and unleash the true Flash performance. 

    Second-Generation NVMe Controllers are Here! 

    This first product has now been followed up with the introduction of the Marvell 88SS1092 second-generation NVMe SSD controller, which has passed through in-house SSD validation and third-party OS/platform compatibility testing. Therefore, the Marvell® 88SS1092 is ready to boost next-generation storage and data center systems, and is being debuted at the Open Compute Project (OCP) Summit, March 8 and 9 in San Jose, Calif. 

    The Marvell 88SS1092 is Marvell's second-generation NVMe SSD controller capable of PCIe 3.0 X 4 end points to provide full 4GB/s interface to the host and help remove performance bottlenecks. While the new controller advances a solid-state storage system to a more fully flash-optimized architectur