By Sandy Rodriguez, Corporate Staff Professional, Philanthropy, Marvell
Marvell is excited to be named “Fittest Firm” in the annual Silicon Valley Turkey Trot for the eighth consecutive year. Since 2016, Marvell has sponsored the Fittest Firm Competition and earned this title with the largest number of employee registrants in the largest firm category.
The biggest Thanksgiving Day race in America, the Silicon Valley Turkey Trot draws thousands of runners, joggers, and walkers to join in healthy activities on the holiday while contributing to the less fortunate in our community. Marvell had a total of 894 employees participate in the Turkey Trot this year. Since its inception in 2005, the event has donated $11.2 million and collected over 250,000 pounds of food for several local charities.
By Mary Gorges, Content Manager, Talent Branding, Marvell
At Marvell, we strive to create a culture of inclusion and belonging, where everyone feels welcome, can share different perspectives, and can work together toward a common goal. Veterans Day, on Saturday, November 11, is an opportunity in the U.S. to honor the brave individuals who have selflessly served our country and to express our deep gratitude for their dedication and sacrifice.
In recognition of Veterans Day, let’s hear from military veteran Joseph Goodearly, who’s been at Marvell for 18 years in our Storage business. He shares his perspective on what this day means for him and the military lessons he learned that have helped him in his post-military career.
Join us in celebrating and honoring the contributions of our veterans and those currently serving on active duty around the world.
By Todd Owens, Director, Field Marketing, Marvell
Here at Marvell, we talk frequently with our customers and end users about I/O technology and connectivity. This includes presenting on I/O connectivity at various industry events and delivering training to our OEMs and their channel partners. Often, when discussing the latest innovations in Fibre Channel, audience questions center on how relevant Fibre Channel (FC) technology is in today’s enterprise data center. This is understandable, as many in the industry have been proclaiming the demise of Fibre Channel for years. However, these claims are misguided and usually stem from a lack of understanding of the key attributes of FC technology that continue to make it the gold standard for mission-critical application environments.
From its inception several decades ago through today, FC technology has been designed to do one thing, and one thing only: provide secure, high-performance, and high-reliability server-to-storage connectivity. While the Fibre Channel industry is made up of a select few vendors, it has continued to invest and innovate in how FC products are designed and deployed. This isn’t limited to doubling bandwidth every couple of years; it also includes innovations that improve reliability, manageability, and security.
By Mary Gorges, Talent Recruitment Content Lead, Marvell
Women increasingly look beyond their own teams for advocacy and help in stepping into leadership roles and taking on activities beyond their day-to-day jobs.
Inclusion networks are a way for many companies to allow any employee to raise their hand and try on new roles. At Marvell, Women@Marvell is our first-ever inclusion network, or what many refer to as an Employee Resource Group (ERG). But why women first?
“At Marvell, we are focused on elevating the lives and careers of women, not only in our company, but in the broader industry and the communities where we live and work. Women are largely under-represented in the semiconductor industry and are a hidden wealth of innovation and leadership. Launching our women’s inclusion network is just one small step toward helping women thrive,” noted Janice Hall, EVP and Chief Human Resources Officer and Women@Marvell executive sponsor.
Women@Marvell saw 500 women and men join in its first week. It offers mentoring, meaningful conversations, support and ideas to help members actively seek development opportunities. Leaders often share their influence and open doors, and members help each other find their way in more challenging times of their careers, such as becoming a first-time mother.
By Kristin Hehir, Senior Manager, PR and Marketing, Marvell
The sheer volume of data traffic moving across networks daily is mind-boggling almost any way you look at it. During the past decade, global internet traffic grew by approximately 20x, according to the International Energy Agency. One contributing factor to this growth is the popularity of mobile devices and applications: smartphone users spend an average of five hours a day, or nearly one-third of their waking hours, on their devices, up from three hours just a few years ago. The result is an incredible amount of data in the cloud that needs to be processed and moved. Around 70% of data traffic is east-west traffic, the traffic that flows inside data centers. Generative AI, and the exponential growth in the size of the data sets needed to feed AI, will invariably continue to push the curve upward.
Yet, for more than a decade, total power consumption has stayed relatively flat thanks to innovations in storage, processing, networking and optical technology for data infrastructure. The debut of PAM4 digital signal processors (DSPs) for accelerating traffic inside data centers and of coherent DSPs for pluggable modules has played a large, but often quiet, role in paving the way for growth while reducing cost and power per bit.
Marvell at ECOC 2023
At Marvell, we’ve been gratified to see these technologies get more attention. At the recent European Conference on Optical Communication (ECOC), Dr. Loi Nguyen, EVP and GM of Optical at Marvell, talked with Lightwave editor-in-chief Sean Buckley about how Marvell 800 Gbps and 1.6 Tbps technologies will enable AI to scale.
By Dr. Radha Nagarajan, Senior Vice President and Chief Technology Officer, Optical and Cloud Connectivity Group, Marvell
This article was originally published in Data Center Knowledge
People or servers?
Communities around the world are debating this question as they try to balance the plans of service providers and the concerns of residents.
Last year, the Greater London Authority told real estate developers that new housing projects in West London may not be able to go forward until 2035 because data centers have taken all of the excess grid capacity1. EirGrid2 said it won’t accept new data center applications until 2028. Beijing3 and Amsterdam have placed strict limits on new facilities. Cities in the southwest and elsewhere4, meanwhile, are increasingly worried about water consumption as mega-sized data centers can use over 1 million gallons a day5.
When you add in the additional computing cycles needed for AI and applications like ChatGPT, the conflict only becomes more heated.
On the other hand, we know we can’t live without data centers. Modern society, with its remote work, digital streaming and modern communications, depends on them. Data centers are also one of sustainability’s biggest success stories. Although workloads grew by approximately 10x in the last decade with the rise of SaaS and streaming, total power consumption stayed almost flat at around 1% to 1.5%6 of worldwide electricity thanks to technology advances, workload consolidation, and new facility designs. Try to name another industry that increased output by 10x on a relatively fixed energy diet.
By Loi Nguyen, Executive Vice President, Cloud Optics Business Group, Marvell
Some twenty years ago, the concept of IP over Wavelength Division Multiplexing (WDM) was proposed as a way to simplify optical infrastructure. In this vision, all optical networks are connected via point-to-point mesh networks with a router at the center. The concept was elegant but never took off, because the optical technology of the time could not keep up with the faster innovation cycle of CMOS driven by Moore’s Law. The larger form factor of WDM optics did not allow them to be plugged directly into a router port, and adopting a larger form factor on the router in order to implement IP over WDM at massive scale would have been prohibitively expensive.
For routers to interface with these networks, a “transponder” was needed, connected to a router via short-reach optics on one side and to the network via WDM optics on the other. The market for transponders quickly grew into a multi-billion-dollar business.
A Star is Born
About 10 years ago, I was building a team at Inphi, where I was a co-founder, to further develop a nascent technology called silicon photonics. SiPho, as it’s called, leverages commercial CMOS foundries to develop photonic integrated circuits (PICs) that integrate hundreds of components, ranging from high-speed modulators and detectors to passive devices such as couplers, waveguides, monitoring diodes and attenuators. We were looking for ideas and customers to bring silicon photonics to the marketplace.
Fortunately, good technology and market need found one another. A group of Microsoft executives had been considering IP over WDM to launch a new concept of “distributed data centers,” in which multiple data centers in a region are connected by high speed WDM optics using the same form factor as shorter reach “client optics” used in switches and routers. By chance, we met at ECOC 2013 in London for the initial discussion, and then some months later, a product that enabled IP over WDM at cloud scale was born.
By Liz Du, Director, Marvell
Employees have demonstrated remarkable resiliency over the past few years, adjusting to new ways of working during an unprecedented global crisis. It forever changed how employees think about their workplaces, careers, and personal lives, giving them new perspectives on what really matters.
As a company, Marvell has also been evolving, tapping into employee experiences, and listening intently to their opinions on what makes Marvell a great place to work and how we can continuously improve. We were pleased to consistently hear appreciation for being able to experiment with cutting-edge technology, develop skills, and build products that help connect people all over the world.
But another pattern took shape in their commentary as they emphasized the importance of teammates and leaders, using words like transparency, integrity, and respect. We believe these words are no coincidence. They describe interrelated characteristics Marvell has purposefully cultivated to help employees thrive.
By Raghib Hussain, President, Products and Technologies, Marvell
To our Valued Customers:
Recently, reports have surfaced alleging that certain Cavium products included a “backdoor” for the National Security Agency (NSA). We assure you that neither Cavium nor Marvell has ever knowingly incorporated or retained any vulnerability or backdoor in our products.
Our products implement a suite of standards-based security algorithms such as AES, 3DES, and SHA. Prior to 2014, some of our software libraries included an algorithm for random number generation called Dual_EC_DRBG. This algorithm was one of four recommended at the time by the U.S. National Institute of Standards and Technology (NIST) that our products implemented. In 2013, The New York Times, The Guardian, and ProPublica reported that this algorithm included a backdoor for the NSA. After we learned of the potential issue, Cavium removed the algorithm from its software libraries and has not included it in any product shipped since then.
Importantly, the Dual_EC_DRBG algorithm was included in some of Cavium’s software libraries for our chip-level products, but not in the chips themselves. As a result, while Cavium provided this algorithm (among many), the ultimate choice of and control over the algorithms used rested with the equipment vendors integrating our products into their system-level products. Many companies, not just Cavium, implemented the NIST standard algorithms, including this one. In fact, according to NIST’s historical validation data, approximately 80 different products with semiconductors from different vendors implemented this algorithm in some combination of hardware, software, and firmware before it was removed.
By Amir Bar-Niv, VP of Marketing, Automotive Business Unit, Marvell
When you hear people refer to cars as “data centers on wheels,” they’re usually thinking about how an individual experiences enhanced digital capabilities in a car, such as streaming media on-demand or new software-defined services for enhancing the driving experience.
But there’s an important implication lurking behind the statement. For cars to take on tasks that require data center-like versatility, they need to be built like data centers. Automakers, in conjunction with hardware makers and software developers, will have to develop a portfolio of highly specialized technologies that work together, based around similar architectural concepts, to deliver the capabilities needed for the software-defined vehicle while keeping power and cost to a minimum. It’s not an easy balancing act.
Which brings us to the emergence of a new category of products for the zonal architecture: zonal switches and the associated automotive central Ethernet switches. Today’s car networks are built around domain-localized networks: speakers, video screens and other infotainment devices link to the infotainment ECU, powertrain and brakes are part of the body domain, and the ADAS domain is built around sensors and high-performance processors. Bandwidth and security can be form-fitted to each application.
By Samuel Liu, Senior Director, Product Line Management, Marvell
Digital technology has what you could call a real estate problem. Hyperscale data centers now regularly exceed 100,000 square feet in size. Cloud service providers plan to build 50 to 100 edge data centers a year, and distributed applications like ChatGPT are further fueling the growth of data traffic between facilities. This explosive surge in traffic also means telecommunications carriers need to upgrade their wired and wireless networks, a complex and costly undertaking that will involve new equipment deployment in cities all over the world.
Weaving all of these geographically dispersed facilities into a fast, efficient, scalable and economical infrastructure is now one of the dominant issues for our industry.
Pluggable modules based on coherent digital signal processors (CDSPs) debuted in the last decade to replace transponders and other equipment used to generate DWDM-compatible optical signals. These initial modules did not match the performance of incumbent solutions and, given their large form factors, could not support the required high-density data transmission, so they could be deployed only in limited use cases. Over time, advances in technology improved the performance of pluggable modules, and CDSP speeds grew from 100 to 200 and then 400 Gbps. Continued innovation, and the development of an open ecosystem, helped expand the potential applications.
By Liz Du, Director, Marvell
We take pride in creating cutting-edge technology, but what truly stands out about the employee experience at Marvell is that our people have the freedom to take on compelling challenges and find new ways forward while continuously advancing their expertise. The idea of ownership is central to who we are, and it is the cornerstone of how Marvell operates. Every day, our employees take ownership by improving execution, fueling collaboration, and shaping outcomes. They solve problems and ignite breakthroughs, proving that the impossible isn’t. And they own what’s next–for themselves and their teams, for Marvell, and for the customers and industries built on our innovation.
Watch the video to hear from our team members around the world as they speak in their own words about what ownership means to them.
By Rebecca O'Neill, Global Head of ESG, Marvell
I’m thrilled to announce that we published our second annual Environmental, Social and Governance (ESG) report today. I’m proud to share the progress we have made across the company. ESG intersects with all parts of the business, so it really is a team effort. This past year, everyone stepped up to improve performance in their key areas and launch new initiatives that strengthen Marvell’s business and impact.
The report provides the latest updates toward meeting our goals during fiscal year 2023. For instance, we submitted a Science Based Target for external validation, carried out a climate scenario analysis, conducted a human rights impact assessment (ahead of schedule), and maintained an employee satisfaction score above the industry benchmark.
By Janice Hall, Executive Vice President, Chief Human Resources Officer, Marvell
At Marvell, we are focused on elevating the lives and careers of women, not only in our company, but in the broader industry and the communities where we live and work. One way we do this is through the women’s inclusion network, Women@Marvell, where I am proud to be its executive sponsor. We recently held a special company-wide event that celebrated our women and highlighted the need for support from our advocates. At this event, we gathered the regional leaders of Women@Marvell for a panel discussion focused on some of the challenges that women still face as professionals in the semiconductor industry, and what we can do to help propel women forward. I was inspired to hear the personal stories and shared experiences from our women leaders around the world.
By Kristin Hehir, Senior Manager, PR and Marketing, Marvell
Powered by the Marvell® Bravera™ SC5 controller, Memblaze developed the PBlaze 7 7940 GEN5 SSD family, delivering an impressive 2.5 times the performance and 1.5 times the power efficiency compared to conventional PCIe 4.0 SSDs and ~55/9 µs read/write latency1. This makes the SSD ideal for business-critical applications and high-performance workloads like machine learning and cloud computing. In addition, Memblaze utilized the innovative sustainability features of Marvell’s Bravera SC5 controllers for greater resource efficiency, reduced environmental impact and streamlined development efforts and inventory management.
By Liz Du, Director, Marvell
There's a certain energy new employees bring to the workplace. Everything about starting a job is fresh and exciting: colleagues, offices, policies. The pressure to perform and succeed in an unfamiliar environment can also be intense.
At Marvell, we're going all in on the potential of our early career talent. We believe their perspectives can help drive success from day one. This is why hands-on learning and mentorship are embraced at all levels of the organization.
By Willard Tu, Associate VP, Product Marketing – Automotive Compute, Marvell
Marvell is excited to announce that we’ve joined the automotive chiplet initiative coordinated by imec, a world-leading research and innovation hub in nanoelectronics and digital technologies. Imec has formed an informal ecosystem of leading companies from multiple automotive industry segments to address the challenge of bringing multi-chiplet compute modules to the automotive market.
The goal of imec’s automotive chiplet initiative is to address the design challenges that arise from ever-increasing data movement, processing, storage and security requirements. These demands complicate the automotive manufacturers’ desire for scalable performance to address different vehicle classes, while reducing costs and development time and ensuring consistent quality, reliability and safety.
And these demands will be made even more intense by the coming era of super-human sensing. The fusion of data from multi-spectral cameras (visible and infrared), radar and LiDAR will enable “vision” beyond human capability. Such sensor fusion will be a critical requirement for safe autonomous driving.
By Liz Du, Director, Marvell
Thousands of team members from around the world all have a hand in designing innovative semiconductor technology that moves, stores, processes and secures the world’s data. What keeps this ever-evolving global organization grounded and in sync? It’s our people and how they uphold our core behaviors: act with integrity and treat everyone with respect; execute with thoroughness and rigor; innovate to solve customer needs; and help others achieve their objectives. These behaviors shape our belief that how we do things is just as important as what we do, and they guide all of our interactions – both with each other and with our partners and customers. To illustrate how these behaviors come to life, let’s hear from Marvell employees directly.
By Suhas Nayak, Senior Director of Solutions Marketing, Marvell
In the world of artificial intelligence (AI), where compute performance often steals the spotlight, there's an unsung hero working tirelessly behind the scenes. It's something that connects the dots and propels AI platforms to new frontiers. Welcome to the realm of optical connectivity, where data transfer becomes lightning-fast and AI's true potential is unleashed. But wait, before you dismiss the idea of optical connectivity as just another technical detail, let's pause and reflect. Think about it: every breakthrough in AI, every mind-bending innovation, is built on the shoulders of data—massive amounts of it. And to keep up with the insatiable appetite of AI workloads, we need more than just raw compute power. We need a seamless, high-speed highway that allows data to flow freely, powering AI platforms to conquer new challenges.
In this post, I’ll explain the importance of optical connectivity, particularly the role of DSP-based optical connectivity, in driving scalable AI platforms in the cloud. So, buckle up, get ready to embark on a journey where we unlock the true power of AI together.
By Todd Owens, Field Marketing Director, Marvell
While Fibre Channel (FC) has been around for a couple of decades now, the Fibre Channel industry continues to develop the technology in ways that keep it at the forefront of the data center for shared storage connectivity. Always a reliable technology, Fibre Channel has seen continued innovations in performance, security and manageability that have made FC I/O the go-to connectivity option for business-critical applications that leverage the most advanced shared storage arrays.
A recent development that highlights the progress and significance of Fibre Channel is Hewlett Packard Enterprise’s (HPE) announcement of the latest offering in its storage-as-a-service lineup with 32Gb Fibre Channel connectivity. HPE GreenLake for Block Storage MP, powered by HPE Alletra Storage MP hardware, features a next-generation platform connected to the storage area network (SAN) using either traditional SCSI-based FC or NVMe over FC connectivity. This innovative solution not only provides customers with highly scalable capabilities but also delivers cloud-like management, allowing HPE customers to consume block storage any way they desire: own and manage, outsource management, or consume on demand.
HPE GreenLake for Block Storage powered by HPE Alletra Storage MP
At launch, HPE is providing FC connectivity from this storage system to the host servers, supporting both FC-SCSI and native FC-NVMe. HPE plans to provide additional connectivity options in the future, but the fact that it prioritized FC connectivity speaks volumes about the customer demand for mature, reliable, and low-latency FC technology.
By Michael Kanellos, Head of Influencer Relations, Marvell
AI’s growth is unprecedented from any angle you look at it. The size of large training models is growing 10x per year. ChatGPT’s 173 million-plus users are turning to the website an estimated 60 million times a day (compared to zero the year before). And daily, people are coming up with new applications and use cases.
As a result, cloud service providers and others will have to transform their infrastructures in similarly dramatic ways to keep up, says Chris Koopmans, Chief Operations Officer at Marvell, in conversation with Futurum’s Daniel Newman during the Six Five Summit on June 8, 2023.
“We are at the beginning of at least a decade-long trend and a tectonic shift in how data centers are architected and how data centers are built,” he said.
The transformation is already underway. AI training, and a growing percentage of cloud-based inference, has already shifted from two-socket servers built around general-purpose processors to systems containing eight or more GPUs or TPUs optimized to solve a smaller set of problems more quickly and efficiently.
By Noam Mizrahi, Executive Vice President, Chief Technology Officer, Marvell
Originally published in Embedded
ChatGPT has fired the world’s imagination about AI. The chatbot can write essays, compose music, and even converse in different languages. If you’ve read any ChatGPT poetry, you can see it doesn’t pass the Turing Test yet, but it’s a huge leap forward from what even experts expected from AI just three months ago. Over one million people became users in the first five days, shattering records for technology adoption.
The groundswell also strengthens arguments that AI will have an outsized impact on how we live—with some predicting AI will contribute significantly to global GDP by 2030 by fine-tuning manufacturing, retail, healthcare, financial systems, security, and other daily processes.
But the sudden success also shines light on AI’s most urgent problem: our computing infrastructure isn’t built to handle the workloads AI will throw at it. The size of AI networks grew by 10x per year over the last 5 years. By 2027 one in five Ethernet switch ports in data centers will be dedicated to AI, ML and accelerated computing.
By Liz Du, Director, Marvell
Great news! Marvell is excited to share that the company has been recognized in the largest company category of the Best Places to Work 2023 awards, an annual program produced by the San Francisco Business Times and Silicon Valley Business Journal. Marvell’s inclusion on the annual list was determined by survey results provided voluntarily by company employees.
“This recognition reflects our focus on cultivating and continuously nurturing a culture where wellness and inclusivity are prioritized,” said Janice Hall, executive vice president, chief human resources officer at Marvell. “We have an incredible, diverse team at Marvell and I’m very proud of our commitment to creating an environment where everyone is supported to do work that matters, while given the opportunity to take ownership of their careers.”
By Rebecca O'Neill, Global Head of ESG, Marvell
Marvell is committed to being a good steward of the environment, and we are excited to mark Earth Day 2023 on April 22 with a week-long celebration. Our employees care about protecting our planet and want to do their part both at work and beyond.
We look forward to engaging our employees in virtual and in-person events throughout the week to enable them to better protect the environment, such as:
The Big 6 Sustainability Webinar:
We are kicking off the week with a global webinar with special guest speakers from Carbonauts, a company that teaches people how to reduce their carbon footprints and integrate sustainability into their day-to-day lives. Employees will learn the six biggest levers for reducing their carbon footprints. Carbonauts is a frequent guest at sustainability conferences and podcasts and the company’s clients include many Fortune 500 companies like Amazon, Chanel, Toyota, AT&T, Warner Brothers, and Netflix, to name a few. Carbonauts is on a mission to get society to the tipping point of behavior change to make sustainable living a way of life for all.
By Liz Du, Director, Marvell
It's not uncommon for employees in the technology industry to switch companies often, in a bid to expand their skills, find new teams, and take on more interesting challenges. Sameer Vaidya, Director of Validation Engineering, was no different. He only planned to stay six months at Marvell before moving on to continue growing his career. But the company's collaborative culture and passion for innovation hooked him. This, along with the authority and support he received from Marvell’s leadership along the way, gave Sameer the confidence he needed to be successful. Two decades later, Sameer displays that same excitement when discussing his Marvell colleagues. "The people that I get to work with on a day-to-day basis, it really makes a difference," Sameer said.
For Sameer, collaboration is more than a buzzword in a company values statement. It's an essential ingredient to bringing novel ideas to life. "Without this collaboration, if we can't execute, we can't be successful," he said.
By Bill Hagerstrand, Security Solutions BU, Marvell
Time to grab a cup of coffee, as I describe how the transition towards open, disaggregated, and virtualized networks – also known as cloud-native 5G – has created new challenges in an already-heightened 4G-5G security environment.
5G networks move, process and store an ever-increasing amount of sensitive data as a result of faster connection speeds, the mission-critical nature of new enterprise, industrial and edge computing/AI applications, and the proliferation of 5G-connected IoT devices and data centers. At the same time, evolving architectures are creating new security threat vectors. The opening of the 5G network edge is driven by O-RAN standards, which disaggregate the radio units (RUs), fronthaul, midhaul, and distributed units (DUs). Virtualization of the 5G network further disaggregates hardware and software and introduces commodity servers with open-source software running in virtual machines (VMs) or containers from the DU to the core network.
These factors have necessitated improvements in 5G security standards, including additional protocols and new security features. But these measures alone are not enough to secure the 5G network in the cloud-native and quantum computing era. This blog details the growing need for cloud-optimized HSMs (hardware security modules) and their many critical 5G use cases, from the device to the core network.
By Rebecca O'Neill, Global Head of ESG, Marvell
Marvell recently released its inaugural Environmental, Social and Governance (ESG) Report, detailing the company's goals, strategic approach, and commitment to building a sustainable future. Marvell's approach is based on the areas of greatest impact and opportunity for our company: integrating environmental and social considerations into our product design and responsibly managing the impacts of our supply chain, while focusing on strategic ESG initiatives that are material to our financial performance and long-term value creation.
Part of our overarching commitment to address ESG topics involves continuous improvement. That’s why Marvell has set a range of goals that showcase key areas of focus for our business, now and in the future.
By Liz Du, Director, Marvell
The women of Marvell have always been a source of inspiration and innovation, but Women's History Month in March prompts us to shine the spotlight on what they mean to the company. This is especially important in engineering and technology where women are traditionally underrepresented.
Marvell is trying to change that through Women@Marvell and the Tech Women mentoring program. These programs offer women unique opportunities to learn new skills, get guidance and support, find professional mentors, and advance their careers.
Lyndsi Parker, a Senior Director in Marvell's Central Engineering group, serves as mentor in the Tech Women program and connects with colleagues who share her passion for helping women succeed in the industry. According to Lyndsi, the organization allows women to bring their thoughts, feelings, concerns, and ideas out into the open. "We've been able to look specifically at what is the experience of women in our central engineering group," she said. "What do women experience, what do they see? Do they feel like they are heard? Are there improvements that we need to be making?"
By Kevin Koski, Product Marketing Director, Marvell
Last week, Marvell introduced Nova™, its latest, fourth-generation PAM4 DSP for optical modules. It features breakthrough 200G per lambda optical bandwidth, which enables the module ecosystem to bring 1.6 Tbps pluggable modules to market. You can read more about it in the press release and the product brief.
In this post, I’ll explain why the optical modules enabled by Nova are the optimal solution to high-bandwidth connectivity in artificial intelligence and machine learning systems.
Let’s begin with a look into the architecture of supercomputers, also known as high-performance computing (HPC).
Historically, HPC has been realized using large-scale computer clusters interconnected by high-speed, low-latency communications networks to act as a single computer. Such systems are found in national or university laboratories and are used to simulate complex physics and chemistry to aid groundbreaking research in areas such as nuclear fusion, climate modeling and drug discovery. They consume megawatts of power.
The introduction of graphics processing units (GPUs) has provided a more efficient way to complete specific types of computationally intensive workloads. GPUs allow for the use of massive, multi-core parallel processing, while central processing units (CPUs) execute serial processes within each core. GPUs have both improved HPC performance for scientific research purposes and enabled a machine learning (ML) renaissance of sorts. With these advances, artificial intelligence (AI) is being pursued in earnest.
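To make the serial-versus-parallel contrast above concrete, here is a minimal Python sketch. It uses NumPy vectorization as a stand-in for data-parallel execution; on a GPU the same call pattern fans out across thousands of cores (for example via CuPy or JAX arrays), and the array sizes here are purely illustrative.

```python
import time
import numpy as np

# Serial, CPU-style: one multiply-accumulate per loop iteration.
def dot_serial(a, b):
    total = 0.0
    for x, y in zip(a, b):
        total += x * y
    return total

# Data-parallel style: the whole vector is handled as a single operation.
# On a GPU the identical pattern is spread across thousands of cores
# (e.g., by swapping NumPy arrays for CuPy or JAX arrays).
def dot_parallel(a, b):
    return float(np.dot(a, b))

a = np.random.rand(1_000_000)
b = np.random.rand(1_000_000)

t0 = time.perf_counter(); s = dot_serial(a, b); t1 = time.perf_counter()
p = dot_parallel(a, b); t2 = time.perf_counter()
print(f"serial:   {s:.2f} in {t1 - t0:.4f}s")
print(f"parallel: {p:.2f} in {t2 - t1:.4f}s")
```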
By Amit Sanyal, Senior Director, Product Marketing, Marvell
If you’re one of the 100+ million monthly users of ChatGPT—or have dabbled with Google’s Bard or Microsoft’s Bing AI—you’re proof that AI has entered the mainstream consumer market.
And what’s entered the consumer mass-market will inevitably make its way to the enterprise, an even larger market for AI. There are hundreds of generative AI startups racing to make it so. And those responsible for making these AI tools accessible—cloud data center operators—are investing heavily to keep up with current and anticipated demand.
Of course, it’s not just the latest AI language models driving the coming infrastructure upgrade cycle. Operators will pay equal attention to improving general-purpose cloud infrastructure and will take steps to further automate and simplify operations.
To help operators meet their scaling and efficiency objectives, today Marvell introduces Teralynx® 10, a 51.2 Tbps programmable 5nm monolithic switch chip designed to address the operator bandwidth explosion while meeting stringent power- and cost-per-bit requirements. It’s intended for leaf and spine applications in next-generation data center networks, as well as AI/ML and high-performance computing (HPC) fabrics.
A single Teralynx 10 replaces twelve switches of the 12.8 Tbps generation, the last to see widespread deployment. The resulting savings are impressive: an 80% power reduction for equivalent capacity.
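One plausible way to arrive at the twelve-to-one figure is to assume the older chips would be arranged as a non-blocking two-tier leaf-spine fabric. The short Python sketch below is a back-of-the-envelope illustration of that assumption, not a calculation from the announcement.

```python
# Hypothetical back-of-the-envelope: matching one 51.2 Tbps switch with a
# non-blocking two-tier Clos built from 12.8 Tbps switches (assumed topology,
# for illustration only).
TARGET_TBPS = 51.2
LEGACY_TBPS = 12.8

leaf_down = LEGACY_TBPS / 2                # each leaf splits bandwidth 50/50
leaves = TARGET_TBPS / leaf_down           # 8 leaves provide 51.2 Tbps of ports
uplink_total = leaves * (LEGACY_TBPS / 2)  # 51.2 Tbps of uplinks to carry
spines = uplink_total / LEGACY_TBPS        # 4 spines absorb those uplinks

print(f"leaves: {leaves:.0f}, spines: {spines:.0f}, "
      f"total legacy switches: {leaves + spines:.0f}")  # -> 8, 4, 12
```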
By Kim Markle, Director Influencer Relations, Marvell
Wind River and Marvell have collaborated to create an open, virtualized Radio Access Network (vRAN) solution for communication service providers (CSPs) that offers cloud scalability with the features, performance, and energy efficiency of established 5G networks. The collaboration integrates two complementary, industry-leading technologies—the Marvell® OCTEON® 10 Fusion 5G baseband processor and the Wind River Studio cloud software—to provide the carrier ecosystem a deployment-ready vRAN platform built on technologies that are widely proven in 5G networks and data centers.
CSPs aim to leverage established IT infrastructure for enhanced service agility and streamlined DevOps in the cloud-native RAN. Marvell's OCTEON 10 Fusion processor supports these goals with programmability based on open-source, industry-standard interfaces and integration with leading cloud software platforms such as Wind River Studio.
To ensure open-source distribution of Wind River Studio software, OCTEON 10 Fusion software drivers are being used by StarlingX, an open development and integration project. Marvell’s drivers enable Wind River Studio software to communicate with and control the OCTEON 10 Fusion processor. This facilitates developer access to an optimized vRAN system that offers new options for CSPs and helps to expand the carrier ecosystem of RAN and data center hardware and software suppliers, as well as system integrators.
By Johnny Truong, Senior Manager, Public Relations, Marvell
To address the growing demands of 5G applications (and beyond), networks are not only expected, but required, to offer features, performance, and capacity competitive with traditional RAN while improving energy efficiency and cost-savings.
Watch this video of Dennis Hoffman, SVP and GM of Dell’s Telecom Systems Business, discussing how Dell and Marvell will continue building on their strategic partnership in pursuit of truly open mobile networks and how they’re bringing the power of Layer 1 acceleration technology to the vRAN architecture with Marvell’s OCTEON® 10 Fusion processor, designed for 5G RAN.
By Peter Carson, Senior Director Solutions Marketing, Marvell and Tosin Olopade, Technical Product Line Manager, VMware and Padma Sudarsan, Director of Engineering, RAN Architecture, VMware
VMware, a pioneer in assisting communication service providers (CSPs) in transforming their networks, is partnering with Marvell, a leading provider of data infrastructure semiconductor solutions, to improve RAN performance and ROI. This collaboration provides solutions that enable CSPs to meet the demands of 5G’s increased capacity and use cases, optimizing the revenue and efficiency of each RAN site.
RAN sites worldwide are targeted for new technology deployment in which traditional, custom-built equipment is replaced with servers adapted from data centers. This transformation to virtualized RAN and Open RAN, which replaces fixed-function hardware with software, is driving the modernization of RAN sites and allows CSPs to select servers and software based on their strategic goals, enabling them to differentiate their services from those of their competitors.
However, 5G RAN workloads, particularly Layer 1 (L1), are far more complex and latency-sensitive than the applications general-purpose CPUs were designed to address. Virtualizing the 5G RAN can strain even the most robust CPUs. The rapid increase in 5G network speeds, now reaching multiple gigabits per second, and the management of software-centric RAN distributed units (DUs) have resulted in rising energy consumption and cooling demands. This leads to increased costs, such as higher electricity bills, and may compromise CSPs’ plans to monetize their RAN investments.
By Kant Deshpande, Director, Product Management, Marvell
Disaggregation is the future
Disaggregation—the decoupling of hardware and software—is arguably the future of networking. Disaggregation lets customers select best-of-breed hardware and software, enabling rapid innovation by separating the hardware and software development paths.
Disaggregation started with server virtualization and is now being adapted to storage and networking technology. In networking, disaggregation promises that any network operating system (NOS) can be integrated with any switch silicon. Open-source standards like ONIE allow a networking switch to load and install any NOS during the boot process.
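To make the boot-time hand-off concrete, here is a conceptual Python sketch of an ONIE-style installer discovery loop. The URLs, filenames and flow are illustrative assumptions only, not ONIE’s actual implementation (which is shell-based and also covers DHCP options, TFTP, local media and more).

```python
# Conceptual sketch only: how an ONIE-style boot environment might locate a
# NOS installer and hand control to it. All URLs and paths are placeholders.
import subprocess
import urllib.request

CANDIDATE_SOURCES = [
    "http://192.0.2.10/onie-installer-x86_64",  # e.g., a URL learned from DHCP
    "http://192.0.2.10/onie-installer",         # a generic default filename
]

def discover_and_install() -> bool:
    for url in CANDIDATE_SOURCES:
        try:
            path, _ = urllib.request.urlretrieve(url, "/tmp/nos-installer")
        except OSError:
            continue                    # source unreachable; try the next one
        # Hand off to the downloaded installer, which lays down the chosen NOS.
        subprocess.run(["sh", path], check=True)
        return True
    return False                        # fall back to other discovery methods

if __name__ == "__main__":
    discover_and_install()
```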
SONiC: the Linux of networking OS
Software for Open Networking in Cloud (SONiC) has been gaining momentum as the preferred open-source cloud-scale network operating system (NOS).
In fact, Gartner predicts that by 2025, 40% of organizations that operate large data center networks (greater than 200 switches) will run SONiC in a production environment.[i] According to Gartner, given readily expanding customer interest and a commercial ecosystem, there is a strong possibility SONiC will become analogous to Linux for network operating systems in the next three to six years.
By Amit Sanyal, Senior Director, Product Marketing, Marvell
Data centers are arguably the most important buildings in the world. Virtually everything we do—from ordinary business transactions to keeping in touch with relatives and friends—is accomplished, or at least assisted, by racks of equipment in large, low-slung facilities.
And whether they know it or not, your family and friends are causing data center operators to spend more money. But it’s for a good cause: it allows your family and friends (and you) to continue their voracious consumption, purchasing and sharing of every kind of content—via the cloud.
Of course, it’s not only the personal habits of your family and friends that are causing operators to spend. The enterprise is equally responsible. They’re collecting data like never before, storing it in data lakes and applying analytics and machine learning tools—both to improve user experience, via recommendations, for example, and to process and analyze that data for economic gain. This is on top of the relentless, expanding adoption of cloud services.
By Kristin Hehir, Senior Manager, PR and Marketing, Marvell
Marvell has been honored with two 2023 Lightwave Innovation Reviews high scores, validating its leadership in PAM4 DSP solutions for data infrastructure. The two awards reflect the industry’s recognition of Marvell’s recent best-in-class innovations to address the growing bandwidth and interconnect needs of cloud data center networks. An esteemed and experienced panel of third-party judges from the optical communications community recognized Marvell as a high-scoring honoree.
“On behalf of the Lightwave Innovation Reviews, I would like to congratulate Marvell on their high-scoring honoree status,” said Lightwave Editorial Director, Stephen Hardy. “This competitive program allows Lightwave to celebrate and recognize the most innovative products impacting the optical communications community this year.”
Marvell was recognized for the Marvell® Alaska® A PAM4 DSP Family for Active Electrical Cables (AECs) and the Marvell® Spica™ Gen 2 800G PAM4 Electro-Optics Platform, both in the Data Center Interconnect Platforms category. Key features of these 2023 Lightwave Innovation Reviews honorees include:
By Rebecca O'Neill, Global Head of ESG, Marvell
Marvell is committed to fostering an inclusive, diverse, and engaging workplace to fully leverage the perspectives and contributions of every individual at the company. We strive to create an environment where people feel fulfilled, inspired, and motivated to learn and grow, personally and professionally.
What Inclusion and Diversity Means to Marvell
Inclusion means focusing on respect, acceptance, and the ability to appreciate a culture-add approach where we can all bring our full authentic selves to work, every day.
To us, diversity means valuing differences. We value the unique perspectives and experiences of every employee. It is this uniqueness that every employee brings to the company, which is powerful and provides us with a competitive advantage.
Our Strategy
We have developed a strategy focused on four Inclusion & Diversity business outcomes:
By Johnny Truong, Senior Manager, Public Relations, Marvell
The Marvell® OCTEON® 10 DPU was awarded the 2022 Analysts’ Choice Award for “Best Embedded Processor” by TechInsights’ Microprocessor Report.
One of the longest-running award programs of its kind, it salutes the top semiconductor offerings in the categories of data center, PC, smartphone, and embedded processors, as well as processor IP cores and related emerging technologies. Winning products were selected for superior features, performance, power, and cost in the context of the company’s target applications and competition.
The OCTEON 10 DPU, the world’s first Arm Neoverse N2-based processor in 5nm, is the latest version of the OCTEON processor family. By accelerating wireless, networking, storage, security and other specialized workloads, OCTEON 10 enables best-in-class features, performance, energy efficiency, and total cost of ownership for carriers, cloud providers, and enterprises.
The OCTEON processor family is used by four of the top six wireless infrastructure OEMs, in nine of the top 10 firewall appliances, and by other major networking OEMs.
TechInsights Chief Analyst Joseph Byrne said: “Processors for communications infrastructure have long pushed the leading edge for embedded products. Marvell’s feat shows that succeeding in the high-performance-embedded market doesn’t require leveraging smartphone or PC/server technology.”
Microprocessor Report subscribers can access commentary on the winners, details on what sets them apart, and other nominees in each category here.
To learn more about Marvell’s latest addition to the OCTEON DPU family, visit us at MWC 2023 in Barcelona at booth 2F34 in Hall 2.
By Peter Carson, Senior Director Solutions Marketing, Marvell
The rise of fully open and optimized vRAN platforms based on globally proven 5G Layer 1 hardware accelerators, led by Marvell, has given Open RAN operators the industry’s first no-compromise vRAN solution. Unlike the so-called “look-aside” general-purpose alternative, the Marvell architecture is host server CPU-agnostic and uniquely enables (1) RAN software programmability, based on open-source, industry-standard interfaces, and (2) inline hardware acceleration that delivers feature, performance and power parity with existing 5G networks, absolutely critical requirements for mobile operators. Listen to what leading operators are saying about inline vRAN accelerators.
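The look-aside versus inline distinction can be pictured with a small, purely conceptual Python sketch. The class and method names below are my own illustrations, not any Marvell or vendor API.

```python
# Conceptual sketch only: contrasting look-aside and inline acceleration for
# Layer 1 (L1) processing. Names are illustrative, not a real API.

class HostCPU:
    def preprocess(self, pkt):  return f"pre({pkt})"
    def postprocess(self, pkt): return f"post({pkt})"

class L1Accelerator:
    def compute(self, pkt):         return f"l1({pkt})"              # look-aside offload step
    def process_in_path(self, pkt): return f"post(l1(pre({pkt})))"   # full inline pipeline

def lookaside_path(pkt, cpu, acc):
    # The host CPU owns the data path: every packet detours to the accelerator
    # and back through host memory, consuming CPU cycles and adding latency.
    return cpu.postprocess(acc.compute(cpu.preprocess(pkt)))

def inline_path(pkt, acc):
    # The accelerator sits directly in the data path: L1 runs as traffic flows
    # through, leaving the host CPU out of the per-packet work.
    return acc.process_in_path(pkt)

cpu, acc = HostCPU(), L1Accelerator()
print(lookaside_path("pkt0", cpu, acc))
print(inline_path("pkt0", acc))
```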
By Rebecca O'Neill, Global Head of ESG, Marvell and Sandy Rodriguez, Sr. Compliance Analyst, Marvell
At Marvell, we are committed to giving back to the communities where we live and work. Our community engagement focuses on three key pillars:
The company will also match employee donations up to $500 per calendar year when an employee makes a donation to a nonprofit aligned with our philanthropic pillars. In addition, we launched a volunteer time off program, offering employees up to three days or 24 hours of paid time off per year to volunteer for causes they care about and support organizations working in our pillar areas. We aim to have at least 20% of our employees participate in our volunteer time off and employee match programs. Both of these endeavors are offered globally.
By Zvi Shmilovici Leib, Distinguished Engineer, Marvell
Industry 4.0 is redefining how industrial networks behave and how they are operated. Industrial networks are mission-critical by nature and have always required timely delivery and deterministic behavior. With Industry 4.0, these networks are becoming artificial intelligence-based, automated and self-healing, as well. As part of this evolution, industrial networks are experiencing the convergence of two previously independent networks: information technology (IT) and operational technology (OT). Time Sensitive Networking (TSN) is facilitating this convergence by enabling the use of Ethernet standards-based deterministic latency to address the needs of both the IT and OT realms.
However, the transition to TSN brings new challenges and requires fresh solutions for industrial network visibility. In this blog, we will focus on the ways in which visibility tools are evolving to address the needs of both IT managers and those who operate the new time-sensitive networks.
Networks are at the heart of the Industry 4.0 revolution, ensuring nonstop industrial automation operation. These industrial networks operate 24/7, frequently in remote locations with minimal human presence. The primary users of the industrial network are not humans but, rather, machines that cannot “open tickets.” And, of course, these machines are even more diverse than their human analogs. Each application and each type of machine can be considered a unique user, with different needs and different network “expectations.”
By Amir Bar-Niv, VP of Marketing, Automotive Business Unit, Marvell and John Heinlein, Chief Marketing Officer, Sonatus and Simon Edelhaus, VP SW, Automotive Business Unit, Marvell
The software-defined vehicle (SDV) is one of the newest and most interesting megatrends in the automotive industry. As we discussed in a previous blog, the reason that this new architectural—and business—model will be successful is the advantages it offers to all stakeholders:
What is a software-defined vehicle? While there is no official definition, the term reflects the change in the way software is being used in vehicle design to enable flexibility and extensibility. To better understand the software-defined vehicle, it helps to first examine the current approach.
Today’s electronic control units (ECUs) that manage car functions do include software; however, the software in each ECU is often incompatible with and isolated from that in other modules. When updates are required, the vehicle owner must visit the dealer service center, which inconveniences the owner and is costly for the manufacturer.
By Willard Tu, Associate VP, Product Marketing – Automotive Compute, Marvell
I’m excited to share that Marvell is now a member of two leading automotive technology organizations: the Scalable Open Architecture for Embedded Edge (SOAFEE) and the Autoware Foundation. Marvell’s participation in these organizations’ initiatives demonstrates its continued focus and investment in the automotive market. The new memberships follow the company’s 2021 announcement of its Brightlane™ automotive portfolio, and reflect Marvell’s expanding automotive silicon initiative.
SOAFEE, founded by Arm, is an industry-led collaboration defined by automakers, semiconductor suppliers, open source and independent software vendors, and cloud technology leaders. The collaboration intends to deliver a cloud-native architecture enhanced for mixed-criticality automotive applications with corresponding open-source reference implementations to enable commercial and non-commercial offerings.
As a member of SOAFEE, Marvell will access the SOAFEE architecture standards to help streamline development from cloud to deployment at the vehicle. This will enable faster time to market for the Marvell Brightlane automotive portfolio.
By Johnny Truong, Senior Manager, Public Relations, Marvell
Driving the industry's largest standards-based ecosystem, the Marvell Deneb CDSP enables disaggregation which is critical for carriers to lower their CAPEX and OPEX as they increase network capacity. This recognition underscores Marvell’s success in bringing leading-edge density and performance optimization advantages to carrier networks.
In its 18th year, the Leading Lights is Light Reading’s flagship awards program which recognizes top companies and executives for their outstanding achievements in next-generation communications technology, applications, services, strategies, and innovations.
Visit the Light Reading blog for a full list of categories, finalists and winners.
By Kishore Atreya, Director of Product Management, Marvell
Recently the Linux Foundation hosted its annual ONE Summit for open networking, edge projects and solutions. For the first time, this year’s event included a “mini-summit” for SONiC, an open source networking operating system targeted for data center applications that’s been widely adopted by cloud customers. A variety of industry members gave presentations, including Marvell’s very own Vijay Vyas Mohan, who presented on the topic of Extensible Platform Serdes Libraries. In addition, the SONiC mini-summit included a hackathon to motivate users and developers to innovate new ways to solve customer problems.
So, what could we hack?
At Marvell, we believe that SONiC has utility not only for the data center, but for solutions that span from edge to cloud. Because it’s a data center NOS, however, SONiC is not optimized for edge use cases. It requires an expensive bill of materials to run, including a powerful CPU, 8 to 16 GB of DDR memory, and an SSD. In the data center environment, these hardware resources contribute less to the BOM cost than the optics and switch ASIC do. But for edge use cases with 1G to 10G interfaces, the cost of the processor complex, driven primarily by the NOS requirements, can be a much more significant contributor to overall system cost. For edge disaggregation with SONiC to be viable, the hardware cost needs to be comparable to that of a typical OEM-based solution. Today, that’s not possible.
By Amir Bar-Niv, VP of Marketing, Automotive Business Unit, Marvell and Mark Davis, Senior Director, Solutions Marketing, Marvell
In the blog, Back to the Future – Automotive network run at speed of 10Gbps, we discussed the benefits and advantages of zonal architecture and why OEMs are adopting it for their next-generation vehicles. One of the biggest advantages of zonal architecture is its ability to reduce the complexity, cost and weight of the cable harness. In another blog, Ethernet Camera Bridge for Software-Defined Vehicles, we discussed the software-defined vehicle, and how using Ethernet from end-to-end helps to make that vehicle a reality.
While in the near future most devices in the car will be connected through zonal switches, cameras are the exception. They will continue to connect to processors over point-to-point protocol (P2PP) links using proprietary networking protocols such as low-voltage differential signaling (LVDS), Maxim’s GMSL or TI’s FPD-Link.
By Reza Eltejaein, Director, Product Marketing, Marvell
Manufacturers, power utilities and other industrial companies stand to gain the most from digital transformation. Manufacturing and construction industries account for 37 percent of total energy used globally*, for instance, more than any other sector. By fine-tuning operations with AI, some manufacturers can reduce carbon emissions by up to 20 percent and save millions of dollars in the process.
Industry, however, remains relatively undigitized, and gaps often exist between operational technology (the robots, furnaces and other equipment on factory floors) and the servers and storage systems that make up a company’s IT footprint. Without that linkage, organizations can’t take advantage of Industrial Internet of Things (IIoT) technologies, also referred to as Industry 4.0. Of the 232.6 million pieces of fixed industrial equipment installed in 2020, only 10 percent were IIoT-enabled.
Why the gap? IT often hasn’t been good enough. Plants operate on exacting specifications. Engineers and plant managers need a “live” picture of operations with continual updates on temperature, pressure, power consumption and other variables from hundreds, if not thousands, of devices. Dropped, corrupted or mis-transmitted data can lead to unanticipated downtime, a $50 billion-a-year problem, as well as injuries, blackouts, and even explosions.
To date, getting around these problems has required industrial applications to build around proprietary standards and/or complex component sets. These systems work—and work well—but they are largely cut off from the digital transformation unfolding outside the factory walls.
The new Prestera® DX1500 switch family is aimed squarely at bridging this divide, with Marvell extending its modern borderless enterprise offering into industrial applications. Based on the IEEE 802.1AS-2020 standard for Time-Sensitive Networking (TSN), Prestera DX1500 combines the performance requirements of industry with the economies of scale and pace of innovation of standards-based Ethernet technology. Additionally, we integrated the CPU and the switch—and in some models the PHY—into a single chip to dramatically reduce power, board space and design complexity.
Done right, TSN will lower the CapEx and OpEx for industrial technology, open the door to integrating Industry 4.0 practices and simplify the process of bringing new equipment to market.
By Hari Parmar, Senior Principal Automotive System Architect, Marvell
“In your garage or driveway sits a machine with more lines of code than a modern passenger jet. Today’s cars and trucks, with an internet link, can report the weather, pay for gas, find a parking spot, route around traffic jams and tune in to radio stations from around the world. Soon they’ll speak to one another, alert you to sales as you pass your favorite stores, and one day they’ll even drive themselves.
While consumers may love the features, hackers may love them even more.”
The New York Times, March 18, 2021
Hacking used to be an arcane worry, the concern of a few technical specialists. But with recent cyberattacks on pipelines, hospitals and retail systems, digital attacks have suddenly been thrust into public consciousness, leading many to wonder: are cars at risk, too?
Not if Marvell can help it. As a leading supplier of automotive silicon, the company has been intensely focused on identifying and securing potential vulnerabilities before they can remotely compromise a vehicle, its driver or passengers.
Unfortunately, hacking cars isn’t just theoretical. In 2015, researchers with a laptop commandeered a Jeep Cherokee from 10 miles away, shutting off power, blasting the radio, turning on the AC and making the windshield wipers go berserk. And today, seven years later, millions more cars, including most new vehicles, are connected to the cloud.
By Rebecca O'Neill, Global Head of ESG, Marvell
I am delighted to announce that Marvell is a Member of the new Semiconductor Climate Consortium. We have been active participants of the group over the past several months and are happy to share that the Climate Consortium is publicly launching today.
Why a Consortium?
Acknowledging that climate action is collective action, Marvell has joined the Semiconductor Climate Consortium to work collaboratively with other semiconductor companies that have also embarked on a carbon reduction journey, to accelerate climate solutions and drive progressive climate action within our industry value chain.
The Consortium is an initiative of SEMI, the industry association serving the global electronics design and manufacturing supply chain, and it brings together all parts of the semiconductor ecosystem, including manufacturers, equipment providers, and fabless solutions providers such as Marvell. Everyone has a role to play in advancing the industry’s progress on addressing climate change. The Consortium believes that by working together, member companies will bring collective knowledge and innovative technologies to do so much more than one company can do alone.
The Consortium recognizes the challenge of climate change and works to speed semiconductor industry value chain efforts to reduce greenhouse gas emissions, including through support of the Paris Agreement and related accords driving the 1.5°C pathway.
By Nishant Lodha, Director of Product Marketing – Emerging Technologies, Marvell
While age is just a number, and so is a new speed for Fibre Channel (FC), the number itself matters less than the maturity behind it – kind of like a bottle of wine! So today, as we toast the data center and pop open (announce) the Marvell® QLogic® 2870 Series 64G Fibre Channel HBAs, take a glass and sip into that maturity to find notes of trust and reliability alongside operational simplicity, in-depth visibility, and consistent performance.
Big words on the label? I will let you be the sommelier as you work through your glass and my writings.
By Gary Kotzur, CTO, Storage Products Group, Marvell and Jon Haswell, SVP, Firmware, Marvell
The nature of storage is changing much more rapidly than it ever has historically. This evolution is being driven by expanding amounts of enterprise data and the inexorable need for greater flexibility and scale to meet ever-higher performance demands.
If you look back 10 or 20 years, there used to be a one-size-fits-all approach to storage. Today, however, there is the public cloud, the private cloud, and the hybrid cloud, which combines the two. All these clouds have different storage and infrastructure requirements. What’s more, the data center infrastructure of every hyperscaler and cloud provider is architecturally different and is moving toward a more composable architecture. All of this is driving the need for highly customized cloud storage solutions, as well as comparable solutions in the memory domain.
By Rebecca O'Neill, Global Head of ESG, Marvell
Today is Energy Efficiency Day. Energy, specifically the electricity required to power our chips, is top of mind here at Marvell. Our goal is to reduce the power consumption of our products with each generation for a given set of capabilities.
Our products play an essential role in powering data infrastructure spanning cloud and enterprise data centers, 5G carrier infrastructure, automotive vehicles, and industrial and enterprise networking. When we design our products, we focus on innovative features that deliver new capabilities while also improving performance, capacity and security to ultimately improve energy efficiency during product use.
These innovations help make the world’s data infrastructure more efficient and, by extension, reduce our collective impact on climate change. The use of our products by our customers contributes to Marvell’s Scope 3 greenhouse gas emissions, which is our biggest category of emissions.
By Katie Maller, Senior Manager, Public Relations, Marvell
Building on our leadership in Ethernet camera bridge technology, Marvell is excited to have worked with OMNIVISION and to have been part of its automotive demonstrations at the recent AutoSens Brussels event. OMNIVISION, a leading global developer of semiconductor solutions, partnered with Marvell to demonstrate its OX03F10 image sensor and OAX4000 image signal processor with our industry-first multi-gigabit Ethernet camera bridge, the Marvell® Brightlane™ 88QB5224.
The combined solutions allow camera video that would otherwise be transported over point-to-point protocols to be encapsulated in Ethernet, thereby integrating cameras into the Ethernet-based in-vehicle network. The solutions work with both interior and exterior cameras and are ideal for surround view systems (SVS) and other applications in which numerous cameras are utilized and their output is used by multiple subsystems or zones.
“Ethernet is the foundation of the software-defined vehicle. By using the Ethernet camera bridge from our Brightlane automotive portfolio to connect cameras to the zonal Ethernet switch, the cameras are integrated into the end-to-end, in-vehicle network,” said Amir Bar-Niv, vice president of marketing for Marvell’s automotive business unit. “Standard Ethernet features such as security, switching, and synchronization are now available to the camera system, and a simple software update is all that’s required when porting the system from one automobile model to another. Shorter runs to the zonal switches reduce the cable cost and weight, as well.”
The demonstrations in the OMNIVISION booth were well received at AutoSens Brussels, an annual event that brings together leading engineers and technical experts from across the ADAS and autonomous vehicle supply chain.
To learn more about Marvell’s Ethernet Camera Bridge technology, also check out this blog.
By Amit Sanyal, Senior Director, Product Marketing, Marvell
SONiC (Software for Open Networking in the Cloud) has steadily gained momentum as a cloud-scale network operating system (NOS) by offering a community-driven approach to NOS innovation. In fact, 650 Group predicts that revenue for SONiC hardware, controllers and OSs will grow from around US$2 billion today to around US$4.5 billion by 2025.
Those using it know that the SONiC open-source framework shortens software development cycles, and that SONiC’s Switch Abstraction Interface (SAI) provides ease of porting and a homogeneous edge-to-cloud experience for data center operators. It also speeds time-to-market for OEMs bringing new systems to market.
The bottom line: more choice is good when it comes to building disaggregated networking hardware optimized for the cloud. Over recent years, SONiC-using cloud customers have benefited from consistent user experience, unified automation, and software portability across switch platforms, at scale.
As the utility of SONiC has become evident, other applications are lining up to benefit from this open-source ecosystem.
A SONiC Buffet: Extending SONiC to Storage
SONiC capabilities in Marvell’s cloud-optimized switch silicon include high availability (HA) features, RDMA over Converged Ethernet (RoCE), low latency, and advanced telemetry, all of which are required to run robust storage networks.
Here’s one use case: EBOF. The capabilities above form the foundation of Marvell’s Ethernet-Bunch-of-Flash (EBOF) storage architecture, which addresses the non-storage bottlenecks that constrain the performance of the traditional Just-a-Bunch-of-Flash (JBOF) architecture it replaces by disaggregating storage from compute.
EBOF architecture replaces the bottleneck components found in JBOF (CPUs, DRAM and SmartNICs) with an Ethernet switch, and it’s here that SONiC is added to the plate. Marvell has, for the first time, applied SONiC to storage, specifically for services enablement, including the NVMe-oF™ (NVM Express over Fabrics) discovery controller and out-of-band management for EBOF using Redfish® management. This implementation is in production today on the Ingrasys ES2000 EBOF storage solution. (For more on this topic, check out this, this, and this.)
Marvell has now extended SONiC NOS to enable storage services, thus bringing the benefits of disaggregated open networking to the storage domain.
OK, tasty enough, but what about compute?
How Would You Like Your Arm Prepared?
I prefer Arm for my control plane processing, you say. Why can’t I manage those switch-based processors using SONiC, too, you ask? You’re in luck. For the first time, SONiC is the OS for Arm-based, embedded control plane processors, specifically the control plane processors found on Marvell® Prestera® switches. SONiC-enabled Arm processing allows SONiC to run on lower-cost 1G systems, reducing the bill-of-materials, power, and total cost of ownership for both management and access switches.
In addition to embedded processors, with the OCTEON® family, Marvell offers a smorgasbord of Arm-based processors. These can be paired with Marvell switches to bring the benefits of the Arm ecosystem to networking, including Data Processing Units (DPUs) and SmartNICs.
By combining SONiC with Arm processors, we’re setting the table for the broad Arm software ecosystem - which will develop applications for SONiC that can benefit both cloud and enterprise customers.
The Third Course
So, you’ve made it through the SONiC-enabled switching and on-chip control processing courses, but there’s something more you need to round out the meal and reap the full benefit of your SONiC experience. PHY, of course. Whether your taste runs to copper or optical media, PAM or coherent modulation, Marvell provides a complete SONiC-enabled portfolio by offering SONiC with our (not baked) Alaska® Ethernet PHYs and optical modules built using Marvell DSPs.
Room for Dessert?
Finally, by enabling SONiC across the data center and enterprise switch portfolio, we’re able to bring operators the enhanced telemetry and visibility capabilities that are so critical to effective service-level validation and troubleshooting. For more information on Marvell telemetry capabilities, check out this short video.
The Drive Home
Disaggregation has lowered the barrier-to-entry for market participants - unleashing new innovations from myriad hardware and software suppliers. By making use of SONiC, network designers can readily design and build disaggregated data center and enterprise networks.
For its part, Marvell’s goal is simple: help realize the vision of an open-source standardized network operating system and accelerate its adoption.
By Rebecca O'Neill, Global Head of ESG, Marvell
Today is Zero Emissions Day, which was started to raise awareness of the need to address climate change by reducing greenhouse gas emissions.
Here at Marvell, we recognize that climate change represents an unprecedented challenge to our planet, society and economy. That’s why we are enhancing our climate strategy by setting a Science-Based Target (SBT) and putting ourselves on a path to net zero carbon emissions. Our SBT will be aligned with a 1.5°C climate scenario, supporting the goals of the Paris Agreement which is aimed at reducing the worst of climate change.
Our new ESG Report provides a snapshot of our company’s greenhouse gas emissions:
By Kristin Hehir, Senior Manager, PR and Marketing, Marvell
Flash Memory Summit (FMS), the industry’s largest conference featuring data storage and memory technology solutions, presented its 2022 Best of Show Awards at a ceremony held in conjunction with this week’s event. Marvell was named a winner alongside Exascend for the collaboration of Marvell’s edge and client SSD controller with Exascend’s high-performance memory card.
Honored as the “Most Innovative Flash Memory Consumer Application,” the Exascend Nitro CFexpress card, powered by Marvell’s PCIe® Gen 4, 4-NAND-channel 88SS1321 SSD controller, enables digital storage of ultra-HD video and photos in extreme temperature environments where ruggedness, endurance and reliability are critical. The Nitro CFexpress card is unique in its controller, hardware and firmware architecture, combining Marvell’s 12nm process node, low-power, compact form factor SSD controller with Exascend’s innovative hardware design and Adaptive Thermal Control™ technology.
The Nitro card is the highest-capacity VPG400 CFexpress card on the market, with up to 1 TB of storage, and is certified by the CompactFlash® Association under its stringent Video Performance Guarantee Profile 4 (VPG400) qualification. Marvell’s 88SS1321 controller helps drive the Nitro card’s 1,850 MB/s sustained read and 1,700 MB/s sustained write for ultimate performance.
“Consumer applications, such as high-definition photography and video capture using professional photography and cinema cameras, require the highest performance from their storage solution. They also require the reliability to address the dynamics of extreme environmental conditions, both indoors and outdoors,” said Jay Kramer, Chairman of the Awards Program and President of Network Storage Advisors Inc. “We are proud to recognize the collaboration of Marvell’s SSD controllers with Exascend’s memory cards, delivering 1,850 MB/s of sustained read and 1,700 MB/s sustained write for ultimate performance addressing the most extreme consumer workloads. Additionally, Exascend’s Adaptive Thermal Control™ technology provides an IP67 certified environmental hardening that is dustproof, water resistant and tackles the issue of overheating and thermal throttling.”
More information on the 2022 Flash Memory Summit Best of Show Award Winners can be found here.
By Kristin Hehir, Senior Manager, PR and Marketing, Marvell
Marvell is deeply committed to elevating women in STEM and supporting female engineers and entrepreneurs in their efforts to succeed in the tech industry. That’s why we are so proud to announce that Marvell team member Cora Lam has been named to the Silicon Valley Business Journal Women of Influence Class of 2022 for her successes and commitments in both workplace and community services.
Throughout the past 21 years, Marvell has provided fertile ground for Cora’s professional development. With dedication, creativity, and excellence in execution, she has steadily progressed from junior engineer to her current position as Senior Principal Engineer in the Central Engineering group.
Over the last five years, Cora also discovered her two greatest passions within Marvell: women in STEM and wellness. She believes you don’t need to be a CEO to promote women in STEM; you just need to be your authentic self, using motivation, passion, and compassion to empower others. To Cora, having genuine compassion for others without any agenda is the source of true wellness.
Despite her busy work schedule, Cora has been one of the leaders and key volunteers for the Women at Marvell (WAM) initiative since its establishment in 2017. By organizing events such as International Women’s Day celebrations, speaker series, and panel discussions, WAM aims to inspire and foster a culture of diversity, gender equality and inclusion within Marvell.
Through a WAM meeting in 2018, Cora first heard about TechWomen (TW), an initiative from the U.S. Department of State that annually brings over 100 women Emerging Leaders (ELs) in STEM from 20+ countries in Africa, Central and South Asia, and the Middle East to the San Francisco Bay Area for a five-week intensive professional mentorship and networking program. For Cora, TW was like “love at first sight”, and she became Marvell’s first and only TW mentor in 2018.
By Pichai Balaji, Director, Product Marketing, Flash BU, Marvell
Industrial SSDs are specifically designed for high-performance systems where data integrity and reliability are of the utmost importance. Industrial SSDs cover a wide range of applications including industrial data storage, heavy robotics, retail kiosks, medical systems, security surveillance, video monitoring, and gaming, to name a few.
When most people hear the term “industrial SSD,” they immediately think of a ruggedized, high-temperature SSD in a metal casing. While such drives are part of the industrial class of SSDs, most industrial and edge applications have a wider range of requirements in terms of SSD controller hardware, firmware, SSD form factor, drive capacity, endurance, reliability, and use case/workload.
For these applications, it is critical that the SSD meets industrial quality standards, and long-term reliability and performance requirements. These SSD devices must be able to withstand industrial grade temperatures, as well as a higher level of shock and vibration. Some applications need these SSDs to operate in ambient temperatures ranging from -40°C to 85°C. In such extreme conditions, data loss is a serious concern.
Marvell’s 88SS1321/22 SSD controllers are designed to meet industrial requirements for temperature endurance, longevity, and performance. The 88SS1321 device also gives industrial SSD makers the flexibility to choose the SSD form factor (2.5” / U.2; M.2 2230 to 22110) and to build the SSD with or without DRAM.
Exascend recently launched an industrial grade PCIe Gen 4 SSD – the PI4 Series. Powered by Marvell’s 88SS1321 PCIe Gen 4 SSD controller, the SSD offers 3500MB/s performance and can operate in an extreme temperature range of -40°C to 85°C. It offers full disk encryption / TCG OPAL 2.0 in M.2 (2280 & 2242), U.2, E1.S and CFexpress form factors for industrial and ADAS storage applications.
Marvell’s 88SS1321/22 SSD controller hardware is designed to give SSD firmware maximum control to optimize SSD-level solutions for different workloads across a wide range of industrial and edge applications. The product’s reference design has been validated for standards and spec compliance as well as electrical compatibility, and the board design BOM is cost-optimized for a low cost of ownership. More information on these benefits can be found here.
Additionally, various SKUs within the product offer added flexibility to SSD makers, enabling them to address applications that may require DRAM and a wider range of operating temperatures.
With the integration of AI/ML, industrial systems have become autonomous and more distributed in recent years. The proliferation of AI-based IoT (AIoT) devices has increased end-to-end system complexity, pushing compute and storage resources to the edge in order to leverage low-latency 5G connectivity and/or Ethernet Time Sensitive Networking (TSN) for real-time, mission-critical data access and processing.
Innodisk is another industrial SSD maker that has recently launched multiple PCIe Gen 4 industrial-grade SSDs based on Marvell’s 88SS1321/22 SSD controllers, operating with or without DRAM. The Innodisk PCIe 4TE and 4TG-P are the first industrial-oriented PCIe 4.0 SSDs turbocharging 5G and AIoT. They can operate in environments from -40°C to 85°C, which is critical for applications such as smart streetlights, 5G mmWave equipment, and security inspection cameras. The PCIe 4TE and 4TG-P support AES-256 encryption and are TCG OPAL 2.0 compliant.
Marvell’s industrial SSD controllers offer several other key features as well. The 88SS1321/22 controllers are designed to allow firmware to be optimized for many different applications, and a host of SKUs built on the same architecture lets SSD developers leverage Marvell’s reference design to build their own SSDs for various form factor, capacity, endurance, and reliability requirements, including ruggedized, high-temperature SSDs with metal casings.
Learn more about Marvell’s 88SS1321/22 product series of SSD controllers here.
By Amir Bar-Niv, VP of Marketing, Automotive Business Unit, Marvell
Automotive Transformation
Smart Car and Data Center-on-Wheels are just some of the terms being used to describe the exciting new waves of technology transforming the automotive industry and promising safer, greener self-driving cars and enhanced user experiences. Underpinning it all is a megatrend toward Software-Defined Vehicles (SDV). SDV is not just a new automotive technology platform; it also enables a new business model for automotive OEMs. With a software-centric architecture, car makers will have an innovation platform to generate unprecedented streams of revenue from aftermarket services and new applications. For owners, the capability to receive over-the-air software updates for vehicles already on the road, as easily as smartphones are updated, means an automobile whose utility no longer declines with age and a driving experience that can be continuously improved.
This blog is the first in a series of blogs that will discuss the basic components of a system that will enable the future of SDV.
Road to SDV is Paved with Ethernet
A key technology to enable SDVs is a computing platform that is supported by an Ethernet-based In-Vehicle network (IVN). An Ethernet-based IVN provides the ability to reshape the traffic between every system in the car to help meet the requirements of new downloaded applications. To gain the full potential of Ethernet-based IVNs, the nodes within the car will need to “talk” Ethernet. This includes devices such as car sensors and cameras. In this blog, we discuss the characteristics and main components that will drive the creation of this advanced Ethernet-based IVN, which will enable this new era of SDV.
But first, let’s talk about the promise of this new business model. People might ask, for example, “How many new applications can possibly be created for cars, and who will use them?” This is probably the same question that was asked when Apple created the original App Store, which started with only dozens of apps; the rest, of course, is history. We can definitely learn from this model. Plus, this is not going to be just an OEM play. Once SDV cars are on the road, we should expect the emergence of new companies that will develop a whole new world of car applications for OEMs, aligned with other megatrends like Smart City, Mobility as a Service (MaaS), ride-hailing and many others.
A New Era of Automotive Innovation
Let us now fast forward to the years 2025 to 2030 (which in the automotive industry is considered ‘just around the corner’). New cars designed to support higher levels of advanced driver assistance systems (ADAS) will include anywhere from 20 to 30 sensors (camera, radar, lidar and others). Let’s imagine two new potential applications that could utilize these sensors:
Application 1: “Catch the Car Scratcher” - How many times have we heard of, or even been in, this situation? Someone scratches your car in the parking lot or maliciously scratches your car with a car key. What if the car was able to capture the face of the person or license plate number of the car that caused the damage? Wouldn’t that be a cool feature an OEM could provide to the car owner on demand? If priced right, it most likely could become a popular application. The application could use the accelerometers, and potentially a microphone, to detect the noise of scratching, bumping or hitting the car. Once the car identifies the scratching or bumping, it would activate all of the cameras around the car. The car would then record the video streams into a central storage. This video could later be used by the owner as necessary to recover repair costs through insurance or the courts.
Application 2: “Break-in Attempt Recording” - In this next application, when the system detects a break-in attempt, all internal and external cameras record the video into central storage and immediately upload it to the cloud. This is done in case the car thief tries to tamper with the storage later. In parallel, the user gets a warning signal or alert by phone so they can watch the video streams or even connect to the sound system in the car and scare the thief with their own voice.
We will examine these scenarios more comprehensively in a follow up blog, but these are just two simple examples of the many possible high-value automotive apps that an Ethernet-based IVN can enable in the software-defined car of the future.
Software-Defined Network
Ethernet network standards comprise a long list of features and solutions that have been developed over the years to address real network needs, including the mitigation of security threats. Ethernet was initially adopted by the automotive industry in 2014 and it has since become the dominant network in the car. Once the car’s processors, sensors, cameras and other devices are connected to each other via Ethernet (Ethernet End-to-End), we can realize the biggest promise of SDV: the capability to reprogram the in-vehicle network and adapt its main characteristics to new advanced applications. This capability is called In-Vehicle Software-Defined Networking, or in short, In-vehicle SDN.
Figure 1 shows the building blocks for In-Vehicle SDN that enable SDV.
Figure 1 – Ethernet and SDN as building blocks for SDV
Ethernet features enable four attributes that are key for SDV: Flexibility, Scalability, Redundancy and Controllability.
In-vehicle SDN is the mechanism that provides the ability to modify and adapt these attributes in an SDV. SDN is a technology that uses application programming interfaces (APIs) to communicate with the underlying hardware infrastructure, like switches and bridges, and to provision traffic flows in the network. In-vehicle SDN separates the control and data planes and brings network programmability to the advanced data forwarding mechanisms of automotive networks.
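To make the control-plane/data-plane split concrete, here is a minimal, hypothetical sketch of the programming model: a central controller installs forwarding rules into zonal switches through an API, while the switches simply forward according to those rules. All class and field names below are invented for illustration and do not represent any actual automotive or Marvell API.

```python
# Hypothetical sketch of in-vehicle SDN: the controller (control plane)
# decides paths and pushes rules; the zonal switches (data plane) forward.

from dataclasses import dataclass, field

@dataclass
class FlowRule:
    match_vlan: int          # traffic class / stream to match
    out_port: int            # where the data plane should forward it
    priority: int = 0        # higher priority wins when rules overlap

@dataclass
class ZonalSwitch:
    name: str
    rules: list = field(default_factory=list)

    def install_rule(self, rule: FlowRule):
        # A real switch would program forwarding hardware here;
        # we simply record the rule to show the programming model.
        self.rules.append(rule)
        self.rules.sort(key=lambda r: r.priority, reverse=True)

class InVehicleSdnController:
    """Central control plane for the vehicle's Ethernet backbone."""
    def __init__(self, switches):
        self.switches = {s.name: s for s in switches}

    def steer_camera_stream(self, switch_name, vlan, port, priority=7):
        self.switches[switch_name].install_rule(
            FlowRule(match_vlan=vlan, out_port=port, priority=priority))

# Usage: reroute a camera stream (VLAN 100) to a redundant port in one zone.
controller = InVehicleSdnController([ZonalSwitch("zone-front-left")])
controller.steer_camera_stream("zone-front-left", vlan=100, port=3)
```

The point of the sketch is the separation itself: a downloaded application only needs to talk to the controller, never to the individual switches.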
Cameras and the Ethernet Edge
To realize the full capability of in-vehicle SDN, most devices in the car will need to be connected via Ethernet. In today’s advanced car architectures, the backbone of the high-speed links is all Ethernet. However, camera interfaces are still based on old proprietary point-to-point Low-Voltage Differential Signaling (LVDS) technology. Newer technologies (like MIPI’s A-PHY and ASA) are under development to replace LVDS, but these are still point-to-point solutions. In this blog we refer to all of these solutions as P2PP (Point-to-Point Protocol). In Figure 2, we show an example of a typical zonal car network with the focus on two domains that use the camera sensors: ADAS and Infotainment.
Figure 2 – Zonal network architecture with point-to-point camera links
While most of the ECUs, sensors and devices are connected through (and leverage the benefits of) the zonal backbone, cameras are still connected directly, point-to-point, to the processors. Cameras cannot easily be shared between the two domains (ADAS and IVI), which in many cases sit in separate boxes. There is no scalability in this rigid connectivity. Redundancy is also very limited: because the cameras are connected directly to a processor, any malfunction in that processor might result in a lost connection to the cameras.
One potential “solution” for this is to connect the cameras to the zonal switches via P2PP, as shown in Figure 3.
Figure 3 – Zonal network architecture with point-to-point camera links to Zonal switch
This proposal solves only a few of the problems mentioned above and comes at a high cost. To support this configuration, the system always needs a dedicated demux chip, as shown in Figure 4, to convert the P2PP link back to a camera interface. In addition, the zonal switches need a dedicated video interface, like MIPI D-PHY, which requires 12 pins per camera (4 pairs for data, 1 pair for clock and 1 pair for control, either I2C or SPI). This adds complexity and many dedicated pins, which increases system cost. Another option is to use an external demux-switch (on top of the zonal switch) to aggregate multiple P2PP lanes, which is expensive.
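The pin-count arithmetic above is easy to check. The throwaway calculation below assumes a hypothetical four cameras per zone, just to show how quickly dedicated video interfaces consume switch pins.

```python
# Pin-count arithmetic from the paragraph above: each D-PHY camera port on a
# zonal switch needs 4 data pairs + 1 clock pair + 1 control pair.
pairs_per_camera = 4 + 1 + 1
pins_per_camera = pairs_per_camera * 2       # two pins per differential pair
cameras_per_zone = 4                         # hypothetical count for one zone

print(pins_per_camera, "pins per camera;",
      pins_per_camera * cameras_per_zone,
      "dedicated pins for", cameras_per_zone, "cameras")
```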
Integrating any of these protocols into the zonal switch is also highly unlikely, since it would require dedicated, non-Ethernet ports on the switch. In addition, no one will consider integrating proprietary or immature new technologies into switches or SoCs.
Figure 4 – Camera P2PP Bridge in Zonal Architecture
Next are controllability, diagnostics and real-time debugging, which do not work over P2PP links in the same simple, standard way they work over Ethernet. This limits the use of the existing Ethernet-based software utilities that access, monitor and debug all the Ethernet-based ECUs, devices and sensors in the vehicle.
Ethernet Camera Bridge
The right solution for all of these issues is to convert the camera-video to Ethernet – at the edge. A simple bridge device that connects to the camera module and encapsulates the video over Ethernet packets is all it takes, as shown in Figure 5.
Figure 5 – Ethernet Camera Bridge in Zonal Architecture
Since the in-vehicle Ethernet network is Layer 2 (L2)-based, the encapsulation of camera video over Ethernet requires a simple, hard-coded (meaning no SW) MAC block in the bridge device. Figure 6 shows a network that utilizes such bridge devices.
Figure 6 – Zonal architecture with Ethernet End-to-End
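As a rough illustration of what that hard-coded MAC block does, the snippet below assembles a raw Ethernet frame around a chunk of camera payload. The MAC addresses are made up, the EtherType shown is one reserved for local experiments, and real bridges do this in silicon, typically with a streaming format layered on top, so treat this strictly as a sketch.

```python
# Minimal sketch of Layer 2 encapsulation: wrapping a slice of camera video
# in an Ethernet frame header using only the standard library.

import struct

def build_ethernet_frame(dst_mac: bytes, src_mac: bytes,
                         ethertype: int, payload: bytes) -> bytes:
    if not (6 == len(dst_mac) == len(src_mac)):
        raise ValueError("MAC addresses must be 6 bytes")
    header = dst_mac + src_mac + struct.pack("!H", ethertype)
    return header + payload              # the FCS/CRC is appended by the MAC/PHY

# Hypothetical example: one slice of a camera line as the payload.
camera_payload = bytes(1024)                       # stand-in for pixel data
frame = build_ethernet_frame(
    dst_mac=bytes.fromhex("0204c4000001"),         # zonal switch port (made up)
    src_mac=bytes.fromhex("0204c4000002"),         # camera bridge (made up)
    ethertype=0x88B5,                              # EtherType reserved for local experiments
    payload=camera_payload)
print(len(frame), "bytes on the wire before FCS")
```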
The biggest advantage of the Ethernet camera bridge is that it leverages the robustness and maturity of the Ethernet standard. For the Ethernet bridge PHY it means a proven technology (2.5G/5G/10GBASE-T1 and soon 25GBASE-T1) with a very strong ecosystem of cables, connectors, and test facilities (compliance, interoperability, EMC, etc.) that have been accepted by the automotive industry for many years.
But this is only the tip of the iceberg. Once the underlying technology for the camera interface is Ethernet, these links automatically gain access to all the other IEEE Ethernet standards and features.
These important features for automotive networks are covered in a previous Marvell blog called, “Ethernet Advanced Features for Automotive Applications.”
Ethernet end-to-end with Ethernet camera bridges supports all four key attributes (described in Figure 1) required for reliable software-defined car operation. Cameras can easily be shared among domains. Software and hardware can be modified independently and scaled all the way out to the cameras and sensors. No special video interfaces are needed in the zonal switch: the camera’s Ethernet link connects to a standard Ethernet port on the switch and can be routed over multiple paths for redundancy. And the camera links gain full controllability, diagnostics and real-time debugging through the same standard Ethernet utilities used in the rest of the in-vehicle network.
So, what’s next? As camera resolutions and refresh rates increase, camera links will need to support data rates beyond 10Gbps. To support this trend, the IEEE P802.3cy Greater than 10 Gb/s Electrical Automotive Ethernet PHY Task Force is already defining a standard for 25Gbps automotive PHYs. We can therefore expect vehicle backbones, as well as camera Ethernet bridges, running at up to 25Gbps in the future, and with them, a plethora of even more compelling smart car apps.
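Some back-of-the-envelope arithmetic shows why the 25Gbps work matters. The resolutions, frame rates, bit depths and overhead factor below are assumptions chosen for illustration, not the specs of any particular sensor or link.

```python
# Rough check on why camera links are pushing past 10 Gbps.
# All figures are illustrative assumptions.

def camera_link_gbps(width, height, bits_per_pixel, fps, overhead=0.20):
    raw = width * height * bits_per_pixel * fps    # bits per second
    return raw * (1 + overhead) / 1e9              # Gbps including transport overhead

print(camera_link_gbps(3840, 2160, 16, 30))   # ~4.8 Gbps: fits in 10G today
print(camera_link_gbps(3840, 2160, 16, 60))   # ~9.6 Gbps: at the 10G ceiling
print(camera_link_gbps(3840, 2160, 24, 60))   # ~14.3 Gbps: needs a 25G link
```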
Marvell Product Roadmap for Automotive
To help support these new initiatives in automotive technology application and design, Marvell announced the industry’s first multi-gig Ethernet camera bridge solution.
As shown by these announcements, Marvell continues to drive innovation in networking and compute solutions for automotive applications. The Marvell automotive roadmap includes managed Ethernet switches that support the Trusted Boot® feature and over-the-air upload of new system configurations to enable new applications. Marvell custom compute products for automotive are designed in advanced process nodes and leverage Marvell’s IP portfolio of high-performance multi-core processors, end-to-end security and high-speed PHY and SerDes technologies.
To learn more about how Marvell is committed to enabling smarter, safer and greener vehicles with its innovative, end-to-end portfolio of Brightlane™ automotive solutions, check out: https://www.marvell.com/products/automotive.html.
The next blogs in this series will discuss some of the characteristics of SDN-on-wheels, central compute in future vehicles, the security architecture for vehicle-to-cloud connectivity, in-vehicle networks for infotainment and other exciting developments that enable the future of the software-defined vehicle.
By Kristin Hehir, Senior Manager, PR and Marketing, Marvell
Data Breakthrough, an independent market intelligence organization that recognizes the top companies, technologies and solutions in the global data technology market, today announced the 2022 winners of its Data Breakthrough Awards. Marvell is thrilled to share that its Bravera™ SC5 SSD controller family was named “Semiconductor Product of the Year” in the Hardware/Components & Infrastructure category.
Marvell’s Bravera SC5 controllers are the industry’s first PCIe 5.0 SSD controllers, enabling the highest performing data center flash storage solutions. By bringing unprecedented performance, best-in-class efficiency, and leading security features, Bravera SC5 addresses the critical requirements for scalable, containerized storage for optimal cloud infrastructure. Marvell’s Bravera SC5 doubles the performance compared to PCIe 4.0 SSDs, contributing to accelerated workloads and reduced latency, dramatically improving the user experience.
“Our Bravera SC5 controllers were developed alongside cloud providers, NAND vendors and the larger ecosystem to meet the critical requirements for faster and higher bandwidth cloud storage,” said Thad Omura, vice president of marketing, Flash Business Unit at Marvell. “This award further validates the innovative feature set our solution brings to address the ever-expanding workloads in the cloud. We thank Data Breakthrough for recognizing the vital role that semiconductors play across the digital data industry.”
The Data Breakthrough award nominations were evaluated by an independent panel of experts within the larger fields of data science and technology, with the winning products and companies selected based on a variety of criteria, including most innovative and technologically advanced solutions and services.
More information about the awards can be found here.
By Peter Carson, Senior Director Solutions Marketing, Marvell
Marvell’s 5G Open RAN architecture leverages its OCTEON Fusion processor and underscores collaborations with Arm and Meta to drive adoption of no-compromise 5G Open RAN solutions
The wireless industry’s no-compromise 5G Open RAN platform will be on display at Mobile World Congress 2022. The Marvell-designed solution builds on its extensive compute collaboration with Arm and raises expectations about Open RAN capabilities for ecosystem initiatives like the Meta Connectivity Evenstar program, which is aimed at expanding the global adoption of Open RAN. Last year at MWC, Marvell announced it had joined the Evenstar program [read more]. This year, Marvell’s new 5G Open RAN Accelerator will be on display at the Arm booth at MWC 2022. The OCTEON Fusion processor, which integrates 5G in-line acceleration and Arm Neoverse CPUs, is the foundation for Marvell’s Open RAN DU reference design.
5G is going mainstream with the rapid rollout of next generation networks by every major operator worldwide. The ability of 5G to reliably provide high bandwidth and extremely low latency connectivity is powering applications like metaverse, autonomous driving, industrial IoT, private networks, and many more. 5G is a massive undertaking that is set to transform entire industries and serve the world’s diverse connectivity needs for years to come. But the wireless networks at the center of this revolution are, themselves, undergoing a major transformation – not just in feeds and speeds, but in architecture. More specifically, significant portions of the 5G radio access network (RAN) are moving into the cloud.
By Peter Carson, Senior Director Solutions Marketing, Marvell
5G networks are evolving to a cloud-native architecture with Open RAN at the center. This explainer series is aimed at de-mystifying the challenges and complexity in scaling these emerging open and virtualized radio access networks. Let’s start with the compute architecture.
Open RAN systems based on legacy compute architectures use an excessive number of CPU cores and an excessive amount of energy to support 5G Layer 1 (L1) and other data-centric processing, like security, networking and storage virtualization. As illustrated in the diagram below, this leaves very few host compute resources available for the tasks the server was originally designed to support. These systems typically offload a small subset of 5G L1 functions, such as forward error correction (FEC), from the host to an external FPGA-based accelerator, but execute that processing offline. This kind of look-aside (offline) processing of time-critical L1 functions outside the data path adds latency that degrades system performance.
Image: Limitations of Open RAN systems based on general purpose processors
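A toy latency budget helps illustrate the point. Every number below is a placeholder invented for this example, not a measurement of any product; the takeaway is simply that each extra trip across the host and PCIe bus adds stages to the time-critical path.

```python
# Toy latency budget contrasting in-line vs look-aside L1 acceleration.
# All stage durations (microseconds) are hypothetical placeholders.

def pipeline_latency_us(stages):
    return sum(stages.values())

inline = {                      # data flows straight through the accelerator
    "fronthaul_rx": 10,
    "l1_processing": 40,
    "handoff_to_l2": 5,
}
look_aside = {                  # host ships blocks to an external FEC device and back
    "fronthaul_rx": 10,
    "host_preprocessing": 25,
    "pcie_to_accelerator": 8,
    "fec_processing": 20,
    "pcie_back_to_host": 8,
    "host_postprocessing": 25,
    "handoff_to_l2": 5,
}
print("in-line   :", pipeline_latency_us(inline), "us")
print("look-aside:", pipeline_latency_us(look_aside), "us")
```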
By Todd Owens, Field Marketing Director, Marvell and Jacqueline Nguyen, Marvell Field Marketing Manager
Storage area network (SAN) administrators know they play a pivotal role in ensuring mission-critical workloads stay up and running. The workloads and applications that run on the infrastructure they manage are key to overall business success for the company.
As with any infrastructure, issues do arise from time to time, and the ability to identify transient links or address SAN congestion quickly and efficiently is paramount. Today, SAN administrators typically rely on proprietary tools and software from the Fibre Channel (FC) switch vendors to monitor SAN traffic, and when performance issues arise, they rely on their years of experience to troubleshoot them.
What creates congestion in a SAN anyway?
Refresh cycles for servers and storage are typically shorter and more frequent than that of SAN infrastructure. This results in servers and storage arrays that run at different speeds being connected to the SAN. Legacy servers and storage arrays may connect to the SAN at 16GFC bandwidth while newer servers and storage are connected at 32GFC.
Fibre Channel SANs use buffer credits to manage the flow of traffic in the SAN. When a slower device intermixes with faster devices on the SAN, there can be situations where buffer credits are returned slowly, causing what is called “slow drain” congestion. This is a well-known issue in FC SANs that can be time-consuming to troubleshoot, and with newer FC-NVMe arrays the problem can be magnified. But these days are coming to an end with the introduction of what we can refer to as the self-driving SAN.
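For the curious, here is a deliberately simplified model of buffer-to-buffer credit flow control that shows how a slow device drags a link down to its drain rate. The credit counts and rates are invented for illustration and do not reflect real HBA or switch parameters.

```python
# Simplified model of Fibre Channel buffer-to-buffer credits and "slow drain".
# All counts and rates are illustrative assumptions.

def frames_delivered(bb_credits, link_rate, drain_rate, duration_ms):
    """Sender transmits only while it holds credits; the target returns one
    credit (R_RDY) for every frame it drains from its receive buffers."""
    credits, in_flight, sent = bb_credits, 0, 0
    for _ in range(duration_ms):
        tx = min(credits, link_rate)          # frames sent this millisecond
        sent += tx
        credits -= tx
        in_flight += tx
        drained = min(drain_rate, in_flight)  # target empties buffers at its own pace
        in_flight -= drained
        credits += drained                    # credits return only as buffers free up
    return sent

fast = frames_delivered(bb_credits=100, link_rate=100, drain_rate=100, duration_ms=10)
slow = frames_delivered(bb_credits=100, link_rate=100, drain_rate=40, duration_ms=10)
print("fast target:", fast, "frames in 10 ms")   # runs at roughly the link rate
print("slow target:", slow, "frames in 10 ms")   # collapses toward the drain rate
```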
By Matt Bolig, Director, Product Marketing, Networking Interconnect, Marvell
There’s been a lot written about 5G wireless networks in recent years. It’s easy to see why: 5G technology supports game-changing applications like autonomous driving and smart city infrastructure. The infrastructure investment needed to bring this new reality to fruition will take many years and hundreds of billions of dollars globally, as Figure 1 below illustrates.
Figure 1: Cumulative Global 5G RAN Capex in $B (source: Dell’Oro, July 2021)
When considering where capital is invested in 5G, one underappreciated aspect is just how much wired infrastructure is required to move massive amounts of data through these wireless networks.
By Khurram Malik, Senior Manager, Technical Marketing, Marvell
A massive amount of data is being generated at the edge, in the data center and in the cloud, driving scale-out Software-Defined Storage (SDS), which, in turn, is enabling the industry to modernize data centers for large-scale deployments. Ceph is an open-source, distributed object store and massively scalable SDS platform, contributed to by a wide range of major high-performance computing (HPC) and storage vendors. Ceph’s BlueStore back end removes a key cluster performance bottleneck by allowing users to store objects directly on raw block devices and bypass the file system layer, which is especially important in boosting the adoption of NVMe SSDs in Ceph clusters. A Ceph cluster with EBOF provides a scalable, high-performance and cost-optimized solution and is a perfect fit for many HPC applications. Traditional data storage technology leverages special-purpose compute, networking and storage hardware to optimize performance and requires proprietary software for management and administration. As a result, IT organizations struggle to scale out and find it cost-prohibitive, from both a CAPEX and OPEX perspective, to deploy petabyte- or exabyte-scale data storage.
Ingrasys (a subsidiary of Foxconn) is collaborating with Marvell to introduce an Ethernet Bunch of Flash (EBOF) storage solution that truly enables scale-out architecture for data center deployments. EBOF architecture disaggregates storage from compute, provides virtually limitless scalability and better utilization of NVMe SSDs, and deploys single-ported NVMe SSDs in a high-availability configuration at the enclosure level with no single point of failure.
Ceph is deployed on commodity hardware and built on multi-petabyte storage clusters. It is highly flexible due to its distributed nature. Using EBOF in a Ceph cluster enables storage capacity to scale up and scale out at an optimized cost and facilitates high-bandwidth utilization of the SSDs. A typical rack-level Ceph solution includes a networking switch for client and cluster connectivity; a minimum of 3 monitor nodes per cluster for high availability and resiliency; and Object Storage Daemon (OSD) hosts for data storage, replication, and data recovery operations. Traditionally, Ceph recommends a minimum of 3 replicas, stored on different storage nodes, to protect the data, but this reduces usable capacity and consumes more bandwidth. Another challenge is that data redundancy and replication are compute-intensive and add significant latency. To overcome these challenges, Ingrasys has introduced a more efficient Ceph cluster rack developed with management software, the Ingrasys Composable Disaggregate Infrastructure (CDI) Director.
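The capacity and bandwidth cost of replication is easy to quantify. The drive count and per-drive capacity below are hypothetical, picked only to make the 3x overhead concrete.

```python
# Quick arithmetic behind the replication trade-off described above.
# Cluster sizing figures are illustrative assumptions.

raw_tb = 24 * 15.36             # e.g. 24 NVMe SSDs of 15.36 TB in one EBOF shelf
replicas = 3                    # the common default for Ceph replicated pools

usable_tb = raw_tb / replicas
write_amplification = replicas  # every client write lands on 3 OSDs

print(f"raw capacity   : {raw_tb:.1f} TB")
print(f"usable capacity: {usable_tb:.1f} TB with {replicas}x replication")
print(f"cluster/network traffic per client write: {write_amplification}x")
```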
By Todd Owens, Field Marketing Director, Marvell
For the past two decades, Fibre Channel has been the gold standard protocol in Storage Area Networking (SAN) and a mainstay in the data center for mission-critical workloads, providing high-availability connectivity between servers, storage arrays and backup devices. If you’re new to this market, you may have wondered if the technology’s origin has some kind of British backstory. Actually, the spelling of “Fibre” simply reflects the fact that the protocol supports not only optical fiber but also copper cabling, though the latter only over much shorter distances.
During this same period, servers matured into multicore, high-performance machines with significant amounts of virtualization. Storage arrays have moved away from rotating disks to flash and NVMe storage devices that deliver higher performance at much lower latencies. New storage solutions based on hyperconverged infrastructure have come to market to allow applications to move out of the data center and closer to the edge of the network. Ethernet networks have gone from 10Mbps to 100Gbps and beyond. Given these changes, one would assume that Fibre Channel’s best days are in the past.
The reality is that Fibre Channel technology remains the gold standard for server to storage connectivity because it has not stood still and continues to evolve to meet the demands of today’s most advanced compute and storage environments. There are several reasons Fibre Channel is still favored over other protocols like Ethernet or InfiniBand for server to storage connectivity.
By Gidi Navon, Senior Principal Architect, Marvell
In part one of this blog, we discussed the ways the Radio Access Network (RAN) is dramatically changing with the introduction of 5G networks and the growing importance of network visibility for mobile network operators. In part two of this blog, we’ll delve into resource monitoring and Open RAN monitoring, and further explain how Marvell’s Prestera® switches equipped with TrackIQ visibility tools can ensure the smooth operation of the network for operators.
Resource monitoring
Monitoring latency is a critical way to identify network problems that cause latency to increase. However, by the time measured latency is high, it is already too late: the radio network has already started to degrade. The fronthaul network, in particular, is sensitive to even a small increase in latency. Therefore, mobile operators need to keep the fronthaul segment below the point of congestion, thus achieving extremely low latencies.
Visibility tools for Radio Access Networks need to measure the utilization of ports, making sure links never get congested. More precisely, they need to make sure the rate of the high priority queues carrying the latency sensitive traffic (such as eCPRI user plane data) is well below the allocated resources for such a traffic class.
A common mistake is measuring rates over long intervals. Imagine a traffic scenario over a 100GbE link, as shown in Figure 1, with quiet intervals and busy intervals. Checking the rate over intervals of seconds will only reveal an average port utilization of 25%, giving the false impression that the network has high margins while missing the peak rate. The peak rate, which is close to 100%, can easily lead to egress queue congestion, resulting in buffer buildup and higher latencies.
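The effect is easy to reproduce with a few lines of arithmetic. The per-millisecond samples below are synthetic, constructed to mirror the 25% average, near-100% burst scenario described above.

```python
# Why long averaging windows hide congestion: the same traffic pattern,
# measured two ways. Samples are synthetic per-millisecond utilization (%).

samples_ms = [98] * 250 + [1] * 750     # one second of traffic: bursty, then quiet

one_second_average = sum(samples_ms) / len(samples_ms)
peak_10ms = max(sum(samples_ms[i:i + 10]) / 10
                for i in range(0, len(samples_ms), 10))

print(f"1 s average utilization : {one_second_average:.0f}%")   # ~25%, looks healthy
print(f"worst 10 ms utilization : {peak_10ms:.0f}%")            # ~98%, queues are filling
```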
By Radha Nagarajan, SVP and CTO, Optical and Copper Connectivity Business Group
As the volume of global data continues to grow exponentially, data center operators often confront a frustrating challenge: how to process a rising tsunami of terabytes within the limits of their facility’s electrical power supply – a constraint imposed by the physical capacity of the cables that bring electric power from the grid into their data center.
Fortunately, recent innovations in optical transmission technology – specifically, in the design of optical transceivers – have yielded tremendous gains in energy efficiency, which frees up electric power for more valuable computational work.
Recently, at the invitation of the Institute of Electrical and Electronics Engineers, my Marvell colleagues Ilya Lyubomirsky, Oscar Agazzi and I published a paper detailing these technological breakthroughs, titled Low Power DSP-based Transceivers for Data Center Optical Fiber Communications.
By Gidi Navon, Senior Principal Architect, Marvell
The Radio Access Network (RAN) is dramatically changing with the introduction of 5G networks and this, in turn, is driving home the importance of network visibility. Visibility tools are essential for mobile network operators to guarantee the smooth operation of the network and for providing mission-critical applications to their customers.
In this blog, we will demonstrate how Marvell’s Prestera® switches equipped with TrackIQ visibility tools are evolving to address the unique needs of such networks.
The changing RAN
The RAN is the portion of a mobile system that spans from the cell tower to the mobile core network. Until recently, it was built from vendor-developed interfaces like CPRI (Common Public Radio Interface) and typically delivered as an end-to-end system by one RAN vendor in each contiguous geographic area.
Lately, with the introduction of 5G services, the RAN is undergoing several changes as shown in Figure 1 below:
By Amit Thakkar, Senior Director, Product Management, Marvell
The retail segment of the global economy has been one of the hardest hit by the Covid-19 pandemic. Lockdowns shuttered stores for extended periods, while social distancing measures significantly impacted foot traffic in these spaces. Now, as consumer demand has shifted rapidly from physical to virtual stores, the sector is looking to reinvent itself and apply lessons learned from the pandemic. One important piece of knowledge that has surfaced across the retail industry: Investing in critical data infrastructure is a must in order to rapidly accommodate changes in consumption patterns.
Consumers have become much more conscious of the digital experience and, as such, prefer a seamless transition in shopping experiences across both virtual and brick-and-mortar stores. Retailers are revisiting investment in network infrastructure to ensure that the network is “future-proofed” to withstand consumer demand swings. It will be critical to offer new customer-focused, personalized experiences such as cashier-less stores and smart shopping in a manner that is secure, resilient, and high performance. Infrastructure companies will need to be able to bring a complete set of technology options to meet the digital transformation needs of the modern distributed enterprise.
Highlighted below are five emerging technology trends in enterprise networking that are driving innovations in the retail industry to build the modern store experience.
By Khurram Malik, Senior Manager, Technical Marketing, Marvell
As data growth continues at a rapid pace, data centers have a strong demand for scalable, flexible storage solutions with high bandwidth utilization. Data centers need an efficient infrastructure to meet the growing requirements of next-generation high performance computing (HPC), machine learning (ML)/artificial intelligence (AI), composable disaggregated infrastructure (CDI), and storage expansion shelf applications, all of which call for scalable, high-performance, and cost-efficient technologies. Hyperscalers and storage OEMs want system-level performance to scale linearly with the number of NVMe SSDs that plug into the system. However, the current NVMe-oF storage target Just-A-Bunch-Of-Flash (JBOF) architecture connects fast NVMe SSDs behind the JBOF components, causing system-level performance bottlenecks at the CPU, DRAM, PCIe switch and smartNIC. In addition, JBOF architecture requires a fixed ratio of CPUs to SSDs, which results in underutilized resources. Another challenge is that scaling the CPU, DRAM, and smartNIC devices to match the total bandwidth of the NVMe SSDs in the system adds prohibitive cost, so system-level performance suffers.
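A rough bandwidth calculation makes the bottleneck easy to see. The SSD count, per-drive throughput and NIC configuration below are assumptions for illustration only.

```python
# Rough bandwidth math behind the JBOF bottleneck described above.
# All figures are illustrative assumptions, not product specifications.

ssd_count = 24
per_ssd_gbs = 7.0                  # sequential read, GB/s, PCIe Gen 4 x4 class drive
nic_gbs = 2 * 100 / 8              # two 100GbE smartNIC ports -> 25 GB/s front end

aggregate_ssd_gbs = ssd_count * per_ssd_gbs
utilization = min(1.0, nic_gbs / aggregate_ssd_gbs)

print(f"SSDs can supply  : {aggregate_ssd_gbs:.0f} GB/s")
print(f"JBOF front end   : {nic_gbs:.0f} GB/s")
print(f"flash utilization: {utilization:.0%}")   # most of the flash bandwidth is stranded
```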
Marvell introduced its industry-first NVMe-oF to NVMe SSD converter controller, the 88SN2400, for data center storage applications. It enables the industry to build EBOF storage architectures, which provide an innovative approach to addressing JBOF architecture challenges and truly disaggregate storage from compute. EBOF architecture replaces JBOF bottleneck components like CPUs, DRAM and smartNICs with an Ethernet switch and terminates NVMe-oF either on the bridge or on an Ethernet SSD. Marvell is enabling NAND vendors to offer Ethernet SSD products. EBOF architecture allows scalability, flexibility, and full utilization of PCIe NVMe drives.
By Alik Fishman, Director of Product Management, Marvell
Blink your eyes. That’s how fast data will travel from your future 5G-enabled device, over the network to a server and back. Like Formula 1 racing cars needing special tracks for optimal performance, 5G requires agile networking transport infrastructure to unleash its full potential. The 5G radio access network (RAN) requires not only base stations with higher throughputs and soaring speeds but also an advanced transport network, capable of securely delivering fast response times to mobile end points, whatever those might be: phones, cars or IoT devices. Radio site densification and Massive Machine-type Communication (mMTC) technology are rapidly scaling the mobile network to support billions of end devices1, amplifying the key role of network transport to enable instant and reliable connectivity.
With Ethernet being adopted as the most efficient transport technology, carrier routers and switches are tasked to support a variety of use cases over shared infrastructure, driving the growth in Ethernet gear installations. In traditional cellular networks, baseband and radio resources were co-located and dedicated at each cell site. This created significant challenges to support growth and shifts in traffic patterns with available capacity. With the emergence of more flexible centralized architectures such as C-RAN, baseband processing resources are pooled in base station hubs called central units (CUs) and distributed units (DUs) and dynamically shared with remote radio units (RUs). This creates even larger concentrations of traffic to be moved to and from these hubs over the network transport.
By Rohan Gandhi, Product Marketing Manager, Optical and Copper Connectivity
When the London Underground opened its first line in 1863, a group of doubtful dignitaries boarded a lurching, smoke-belching train for history’s inaugural subway ride. The next day, thirty thousand curious Londoners flooded the nascent system, and within a year, more than nine million had embraced its use. Nearly 160 years later, that original tunnel is still in daily use, joining 250 miles of track that carry more than 1.3 billion passengers annually.
What were the keys to such extraordinary growth? Not just popular demand for more tunnels, but also better use of accumulated infrastructure – optimized through newer trains, enhanced signaling, greater energy efficiency, and smarter scheduling. In a sense, the Tube’s transformation mirrors the fundamental challenge now confronting modern data centers: how to make better use of existing infrastructure to handle today’s exponential growth of data.
PAM4 DSP Technology is Fast and Flexible
To keep up with the surging data demands of new video and AI workloads, modern data centers can’t simply add more and bigger pipes – at least not cost-effectively. They need PAM4 based optical module solutions to effectively and efficiently move more bandwidth at higher speeds. In addition, they need to be able to update the optical modules via software, optimizing existing infrastructure at an affordable price.
By Nishant Lodha, Director of Product Marketing – Emerging Technologies, Marvell
Recently, Microsoft® announced the general availability of Windows® Server 2022, a release that we geeks refer to by its codename, “Iron.” At Marvell, we have long worked to integrate our server connectivity solutions into Windows, and we like to think of Marvell® QLogic® Fibre Channel (FC) technology as that tiny bit of “carbon” that turns “iron” into “steel”: strong yet flexible, and designed to make business applications shine. Let’s dive into the bits and bytes of how the combination of Windows Server 2022 and Marvell QLogic FC makes for great chemistry.
If you ask hybrid cloud IT managers and architects to identify the three things they need more of from their IT infrastructure, the responses would resoundingly focus on the following: improved security, scalability that does not break the bank, and an easy way to manage the hardest things. Based on the input from our customers on the challenges that they face in today’s demanding and evolving IT environments, Marvell has continued to enhance its QLogic FC technology to address these critical requirements. Marvell QLogic FC technology builds on the new features of Microsoft Windows Server 2022 and further extends the security, scalability and management capabilities to offer server connectivity solutions that are designed specifically with our customers’ needs in mind.
By Radha Nagarajan, SVP and CTO, Optical and Copper Connectivity Business Group
The exponential increase in bandwidth demand will drive continuous innovation in, and deployment of, data movement interconnects for Cloud and Telecom providers. As a result, highly integrated silicon photonics platform solutions are expected to become a key enabling technology for the cloud and telecom market over the next decade.
What Does Highly Integrated Silicon Photonics Platform Mean for the Infrastructure Business?
As speed continues to go up, optical will replace copper as the primary conduit of the digital bits inside Cloud data centers. Marvell is investing heavily in silicon photonics to complement our high-speed CMOS technologies in data center interconnects to accelerate this transition.
By Marvell, PR Team
Last week, Moor Insights and Futurum Research kicked off The Six Five Summit, a virtual, on demand event focused on the latest developments and trends in digital transformation. Marvell was thrilled to join alongside the world’s leading technology companies to share insights on strategy, innovation and where the industry is heading.
Marvell’s Raghib Hussain, President, Products and Technologies participated in the event’s Cloud and Infrastructure Day to discuss the evolution of the cloud data center including the shift from application-specific to data-centric compute. In his presentation, “Accelerating the Cloud Data Center Evolution,” Raghib focuses on how scalability, performance and efficiency are driving technology infrastructure requirements and why optimized and customized silicon solutions are the future of the cloud.
By Ian Sagan, Marvell Field Applications Engineer and Jacqueline Nguyen, Marvell Field Marketing Manager and Nick De Maria, Marvell Field Applications Engineer
Have you ever been stuck in bumper-to-bumper traffic? Frustrated by long checkout lines at the grocery store? Trapped at the back of a crowded plane while late for a connecting flight?
Such bottlenecks waste time, energy and money. And while today’s digital logjams might seem invisible or abstract by comparison, they are just as costly, multiplied by zettabytes of data struggling through billions of devices – a staggering volume of data that is only continuing to grow.
Fortunately, emerging Non-Volatile Memory Express technology (NVMe) can clear many of these digital logjams almost instantaneously, empowering system administrators to deliver quantum leaps in efficiency, resulting in lower latency and better performance. To the end user this means avoiding the dreaded spinning icon and getting an immediate response.
By Amir Bar-Niv, VP of Marketing, Automotive Business Unit, Marvell
In the classic 1980s “Back to the Future” movie trilogy, Doc Brown – inventor of the DeLorean time machine – declares that "your future is whatever you make it, so make it a good one.” At Marvell, engineers are doing just that by accelerating automotive Ethernet capabilities: Earlier this week, Marvell announced the latest addition to its automotive products portfolio – the 88Q4346 802.3ch-based multi-gig automotive Ethernet PHY.
This technology addresses three emerging automotive trends requiring multi-gig Ethernet speeds, including:
By Marvell, PR Team
At the most recent FierceWireless 5G Blitz Week, some of the world’s leading 5G innovators met via webinar to discuss the potential of O-RAN and challenges of the ongoing 5G rollout. In a keynote, EVP and General Manager of Marvell’s Processors Business Group Raj Singh explored the accelerating shift to O-RAN, which is an emerging open-source architecture for Radio Access Networks that enables customers to create better 5G applications by mixing and matching RAN technology from different vendors.
O-RAN architectures are compelling because they increase competition among vendors, reduce costs, and offer customers greater flexibility to combine RAN elements according to their application’s specific use cases. However, in addition to their obvious benefits, O-RAN solutions also raise operator concerns including potential challenges with integration, legacy support, interoperability and security – issues that Marvell and other companies in the Open RAN Policy Coalition are addressing through shared standards, proven solutions and innovative approaches.
By Amir Bar-Niv, VP of Marketing, Automotive Business Unit, Marvell and John Bergen, Sr. Product Marketing Manager, Automotive Business Unit, Marvell
In the early decades of American railroad construction, competing companies laid their tracks at different widths. Such inconsistent standards drove inefficiencies, preventing the easy exchange of rolling stock from one railroad to the next, and impeding the infrastructure from coalescing into a unified national network. Only in the 1860s, when a national standard emerged – 4 feet, 8-1/2 inches – did railroads begin delivering their true, networked potential.
Some one hundred-and-sixty years later, as Marvell and its competitors race to reinvent the world’s transportation networks, universal design standards are more important than ever. Recently, Marvell’s 88Q5050 Ethernet Device Bridge became the first of its type in the automotive industry to receive Avnu certification, meeting exacting new technical standards that facilitate the exchange of information between diverse in-car networks, which enable today’s data-dependent vehicles to operate smoothly, safely and reliably.
By Wolfgang Sauter, Customer Solutions Architect - Packaging, Marvell
The continued evolution of 5G wireless infrastructure and high-performance networking is driving the semiconductor industry to unprecedented technological innovations, signaling the end of traditional scaling on Single-Chip Module (SCM) packaging. With the move to 5nm process technology and beyond, 50T switches, 112G SerDes and other silicon design thresholds, it seems that we may have finally reached the end of the road for Moore’s Law.1 The remarkable and stringent requirements coming down the pipe for next-generation wireless, compute and networking products have all created the need for more innovative approaches. So what comes next to keep up with these challenges? Novel partitioning concepts and integration at the package level are becoming game-changing strategies to address the many challenges facing these application spaces.
During the past two years, leaders in the industry have started to embrace these new approaches to modular design, partitioning and package integration. In this paper, we will look at what is driving the main application spaces and how packaging plays into next-generation system architectures, especially as it relates to networking and cloud data center chip design.
By Todd Owens, Field Marketing Director, Marvell
Today, operating systems (OSs) like VMware recommend that OS data be kept completely separated from user data using non-network RAID storage. This is a best practice for any virtualized operating system including VMware, Microsoft Azure Stack HCI (Storage Spaces Direct) and Linux. Thanks to innovative flash memory technology from Marvell, a new secure, reliable and easy-to-use OS boot solution is now available for Hewlett Packard Enterprise (HPE) servers.
While there are 32GB micro-SD or USB boot device options available today — with VMware requiring as much as 128GB of storage for the OS and Microsoft Storage Spaces Direct needing 200GB — these solutions simply don’t have the storage capacity needed. Using hardware RAID controllers and disk drives in the server bays is another option. However, this adds significant cost and complexity to a server configuration just to meet the OS requirement. The proper solution for separating the OS from user data is the HPE NS204i-p NVMe OS Boot Device.
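As a rough illustration of the sizing gap, the short Python sketch below checks a boot device capacity against the OS figures cited above; the numbers are treated as planning guidelines rather than vendor-certified minimums.

```python
# Quick sanity check using the capacity figures cited above, treated here as
# rough planning numbers rather than vendor-certified minimums.

OS_BOOT_REQUIREMENTS_GB = {
    "VMware ESXi": 128,
    "Microsoft Storage Spaces Direct": 200,
}

def boot_device_fits(device_capacity_gb: int) -> dict:
    """Return which OS boot requirements a given device capacity satisfies."""
    return {os_name: device_capacity_gb >= needed
            for os_name, needed in OS_BOOT_REQUIREMENTS_GB.items()}

print(boot_device_fits(32))    # typical micro-SD/USB boot device: falls short of both
print(boot_device_fits(480))   # a larger mirrored NVMe M.2 boot device clears both
```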
By Gidi Navon, Senior Principal Architect, Marvell
The current environment and an expected “new normal” are driving the transition to a borderless enterprise that must support increasing performance requirements and evolving business models. The infrastructure is seeing growth in the number of endpoints (including IoT) and escalating demand for data such as high-definition content. Ultimately, wired and wireless networks are being stretched as data-intensive applications and cloud migrations continue to rise.
By Lindsey Moore, Marketing Coordinator, Marvell
Flash Memory Summit, the industry's largest trade show dedicated to flash memory and solid-state storage technology, presented its 2020 Best of Show Awards yesterday in a virtual ceremony. Marvell, alongside Hewlett Packard Enterprise (HPE), was named a winner for "Most Innovative Flash Memory Technology" in the controller/system category for the Marvell NVMe RAID accelerator in the HPE OS Boot Device.
Last month, Marvell introduced the industry’s first native NVMe RAID 1 accelerator, a state-of-the-art technology for virtualized, multi-tenant cloud and enterprise data center environments which demand optimized reliability, efficiency, and performance. HPE is the first of Marvell's partners to support the new accelerator in the HPE NS204i-p NVMe OS Boot Device offered on select HPE ProLiant servers and HPE Apollo systems. The solution lowers data center total cost of ownership (TCO) by offloading RAID 1 processing from costly and precious server CPU resources, maximizing application processing performance.
By Lindsey Moore, Marketing Coordinator, Marvell
The BayAreaCIO recognized chief information officers in eight key categories – Leadership, Super Global, Global, Large Enterprise, Enterprise, Large Corporate, Corporate, and Nonprofit/Public Sector.
“The BayAreaCIO ORBIE winners demonstrate the value great leadership creates. Especially in these uncertain times, CIOs are leading in unprecedented ways and enabling the largest work-from-home experiment in history,” according to Lourdes Gipson, Executive Director of BayAreaCIO. “The ORBIE Awards are meaningful because they are judged by peers - CIOs who understand how difficult this job is and why great leadership matters.”
By Stacey Keegan, Vice President, Corporate Marketing, Marvell
Yesterday Marvell announced its intent to join forces with Inphi, a leader in high-speed data movement. Inphi is a premier company in the semiconductor industry and one of the most highly regarded companies in our space, and its highly complementary portfolio accelerates Marvell’s growth and leadership in cloud and 5G. The combination of the two companies is expected to create a U.S. semiconductor powerhouse with an enterprise value of approximately $40 billion.
With explosive Internet traffic growth and the rollout of new ultra-fast 5G wireless networks, the importance of Inphi’s high-speed data interconnect solutions will only grow. The merged company will be uniquely positioned to serve the data-driven world, addressing high-growth, attractive end markets – cloud data center and 5G.
President and CEO of Marvell, Matt Murphy had the opportunity to discuss the deal with CNBC’s Squawk Alley team after the news broke yesterday morning. Catch a replay of that video broadcast here to learn more.
Press Release: (Click Here)
Analyst commentary from Patrick Moorhead of Moor Insights & Strategy and
Daniel Newman of Futurum Research: (Click Here)
By Shahar Noy, Senior Director, Product Marketing
You are an avid gamer. You spend countless hours in forums deciding between ASUS TUF components and researching the Radeon RX 500 or GeForce RTX 20 series, to ensure games show at their best on your hard-earned PC gaming rig. You made your selection and can’t stop bragging about your system’s ray tracing capabilities and how realistic the “Forza Motorsport 7” view from your McLaren F1 GT cockpit looks when you drive through the legendary Le Mans circuit at dusk. You are very proud of your machine, and the year 2020 is turning out to be good: Microsoft finally launched the gorgeous-looking “Flight Simulator 2020,” and CD Projekt just announced that the beloved and award-winning “The Witcher 3” is about to get an upgrade to take advantage of the myriad of hardware updates available to serious gamers like you. You have your dream system in hand and life can’t be better.
By Gidi Navon, Senior Principal Architect, Marvell
Enterprise networks are changing, adapting and expanding to become a borderless enterprise. Visibility tools must evolve to meet the new requirements of an enterprise that now extends beyond the traditional campus — across multi-cloud environments to the edge.
By Stacey Keegan, Vice President, Corporate Marketing, Marvell
Following the company’s 2020 Investor Day, Marvell President and CEO, Matt Murphy, joined Jim Cramer on CNBC’s Mad Money to discuss yesterday’s event highlights. Calling out significant growth opportunities across Marvell’s key market segments – including #5G, #DataCenter, #Cloud and #Automotive – Murphy noted that adoption of both 5G and Cloud remains in the early innings and that Marvell is well positioned to see continued benefits from these long-term growth markets.
As working from home accelerates digital transformation, Marvell is building the next generation data infrastructure semiconductor technology that will power the world’s progress.
Watch more videos on the Marvell YouTube Channel & Subscribe: (Click Here)
By Amir Bar-Niv, VP of Marketing, Automotive Business Unit, Marvell
Ethernet standards comprise a long list of features and solutions that have been developed over the years to resolve real network needs as well as resolve security threats. Now, developers of Ethernet In-Vehicle-Networks (IVN) can easily balance between functionality and cost by choosing the specific features they would like to have in their car’s network.
The roots of Ethernet technology go back to 1973, when Bob Metcalfe, a researcher at Xerox PARC (who later founded 3Com), wrote a memo entitled “Alto Ethernet,” which described how to connect computers over short-distance copper cable. With the explosion of PC-based Local Area Networks (LAN) in businesses and corporations in the 1980s, client/server LAN architectures continued to grow, and Ethernet started to become the connectivity technology of choice for these networks. However, the advancement that made Ethernet the most successful networking technology ever was its standardization under the IEEE 802.3 group.
By Raghib Hussain, President, Products and Technologies
Last week, Marvell announced a change in our strategy for ThunderX, our Arm-based server-class processor product line. I’d like to take the opportunity to put some more context around that announcement, and our future plans in the data center market.
ThunderX is a product line that we started at Cavium, prior to our merger with Marvell in 2018. At Cavium, we had built many generations of successful processors for infrastructure applications, including our Nitrox security processor and OCTEON infrastructure processor. These processors have been deployed in the world’s most demanding data-plane applications such as firewalls, routers, SSL-acceleration, cellular base stations, and Smart NICs. Today, OCTEON is the most scalable and widely deployed multicore processor in the market.
By Stacey Keegan, Vice President, Corporate Marketing, Marvell
Marvell President and CEO, Matt Murphy, discussed Marvell’s second quarter Earnings beat this morning with the CNBC Squawk Alley team.
Marvell’s growth is being driven by our success in our key data infrastructure end markets. In 5G wireless infrastructure in particular, we have seen four consecutive quarters of sequential growth. Right now, this is particularly pronounced in China, where 5G is being rolled out. But with other countries working on rollout plans, and with four of the top five base station vendors as Marvell customers, the growth from 5G is just beginning.
Marvell also has a large and growing data center business, spanning both enterprise on-premises data centers and now the cloud. We announced last quarter that cloud is now over 10% of our revenue and growing fast. And the reason we are seeing strong growth is that we are producing the key storage and security products for the cloud. This includes chips for huge multi-terabyte hard drives, where all cloud data is stored. It also includes our networking products, which doubled year-over-year. And finally, growth in this area includes Marvell’s custom products that came to us through a recent acquisition. This is how several of the larger data center operators like to buy chips: we build exactly what they want.
Watch the full interview here.
By Todd Owens, Field Marketing Director, Marvell
As native Non-volatile Memory Express (NVMe®) share-storage arrays continue enhancing our ability to store and access more information faster across a much bigger network, customers of all sizes – enterprise, mid-market and SMBs – confront a common question: what is required to take advantage of this quantum leap forward in speed and capacity?
Of course, NVMe technology itself is not new, and is commonly found in laptops, servers and enterprise storage arrays. NVMe provides an efficient command set that is specific to memory-based storage, delivers increased performance over PCIe 3.0 and PCIe 4.0 bus architectures, and, with 64,000 command queues and 64,000 commands per queue, offers far greater scalability than other storage protocols.
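To put those queue numbers in perspective, here is a quick back-of-the-envelope comparison in Python; the SAS and SATA/AHCI figures are the commonly cited single-queue limits and are included only for illustration.

```python
# Back-of-the-envelope comparison of outstanding-command capacity, using the
# NVMe queue figures above and commonly cited limits for legacy protocols.

protocols = {
    "NVMe":       {"queues": 64_000, "commands_per_queue": 64_000},
    "SAS (typ.)": {"queues": 1,      "commands_per_queue": 256},
    "SATA/AHCI":  {"queues": 1,      "commands_per_queue": 32},
}

for name, p in protocols.items():
    outstanding = p["queues"] * p["commands_per_queue"]
    print(f"{name:12s}: up to {outstanding:,} outstanding commands")
# NVMe        : up to 4,096,000,000 outstanding commands
# SAS (typ.)  : up to 256 outstanding commands
# SATA/AHCI   : up to 32 outstanding commands
```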
By Todd Owens, Field Marketing Director, Marvell
Hewlett Packard Enterprise (HPE) recently updated its product naming protocol for the Ethernet adapters in its HPE ProLiant and HPE Apollo servers. Its new approach is to include the ASIC model vendor’s name in the HPE adapter’s product name. This commonsense approach eliminates the need for model number decoder rings on the part of Channel Partners and the HPE Field team and provides everyone with more visibility and clarity. This change also aligns more with the approach HPE has been taking with their “Open” adapters on HPE ProLiant Gen10 Plus servers. All of this is good news for everyone in the server sales ecosystem, including the end user. The products’ core SKU numbers remain the same, too, which is also good.
By Nishant Lodha, Director of Product Marketing – Emerging Technologies, Marvell
Marvell® Fibre Channel HBAs are getting a promotion and here is the announcement email -
“I am pleased to announce the promotion of “Mr. QLogic® Fibre Channel” to Senior Transport Officer, Storage Connectivity at Enterprise Datacenters Inc. Mr. QLogic has been an excellent partner and instrumental in optimizing mission-critical enterprise application access to external storage over the past 20 years. When Mr. QLogic first arrived at Enterprise Datacenters, block storage was in disarray and efficiently scaling out performance seemed like an insurmountable challenge. Mr. QLogic quickly established himself as a go-to leader and trusted partner for enabling low latency access to external storage across disk and flash. Mr. QLogic successfully collaborated with other industry leaders like Brocade and Mr. Cisco MDS to lay the groundwork for a broad set of innovative technologies under the StorFusion™ umbrella. In his new role, Mr. QLogic will further extend the value of StorFusion by bringing awareness of Storage Area Network (SAN) congestion into the server, while taking decisive action to prevent bottlenecks that may degrade mission-critical enterprise application performance.
“Please join me in congratulating QLogic on this well-deserved promotion.”
By Nishant Lodha, Director of Product Marketing – Emerging Technologies, Marvell
Once upon a time, data centers confronted a big problem – how to enable business-critical applications on servers to access distant storage with exceptional reliability. In response, the brightest storage minds invented Fibre Channel. Its ultra-reliability came from being implemented on a dedicated network and buffer-to-buffer credits. For a real-life parallel, think of a guaranteed parking spot at your destination, and knowing it’s there before you leave your driveway. That worked fairly well. But as technology evolved and storage changed from spinning media to flash memory with NVMe interfaces, the same bright minds developed FC-NVMe. This solution delivered a native NVMe storage transport without necessitating rip-and-replace by enabling existing 16GFC and 32GFC HBAs and switches to do FC-NVMe. Then came a better understanding of how cosmic rays affect high-speed networks, occasionally flipping a subset of bits, introducing errors.
By Alik Fishman, Director of Product Management, Marvell
In our series Living on the Network Edge, we have looked at the trends driving Intelligence, Performance and Telemetry to the network edge. In this installment, let’s look at the changing role of network security and the ways integrating security capabilities in network access can assist in effectively streamlining policy enforcement, protection, and remediation across the infrastructure.
Cybersecurity threats are now a daily struggle for businesses experiencing a huge increase in hacked and breached data from sources increasingly common in the workplace, like mobile and IoT devices. Not only is the number of security breaches going up, but breaches are also increasing in severity and duration, with the average lifecycle from breach to containment lasting nearly a year1 and presenting expensive operational challenges. With the digital transformation and emerging technology landscape (remote access, cloud-native models, proliferation of IoT devices, etc.) dramatically impacting networking architectures and operations, new security risks are introduced. To address this, enterprise infrastructure is on the verge of a remarkable change, elevating network intelligence, performance, visibility and security2.
By Suresh Ravindran, Senior Director, Software Engineering
So far in our series Living on the Network Edge, we have looked at trends driving Intelligence and Performance to the network edge. In this blog, let’s look into the need for visibility into the network.
As automation trends evolve, the number of connected devices is seeing explosive growth. IDC estimates that there will be 41.6 billion connected IoT devices generating a whopping 79.4 zettabytes of data in 20251. A significant portion of this traffic will be video flows and sensor traffic which will need to be intelligently processed for applications such as personalized user services, inventory management, intrusion prevention and load balancing across a hybrid cloud model. Networking devices will need to be equipped with the ability to intelligently manage processing resources to efficiently handle huge amounts of data flows.
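For a rough sense of scale, the short calculation below divides the IDC data forecast by the device forecast to estimate the average data generated per device; it is simple arithmetic on the quoted figures, not an additional data point.

```python
# Rough arithmetic on the IDC figures quoted above: average data generated
# per connected IoT device in 2025 (decimal units, 1 ZB = 1e12 GB).

devices = 41.6e9            # connected IoT devices forecast for 2025
total_zettabytes = 79.4     # data forecast to be generated in 2025

per_device_gb_year = (total_zettabytes * 1e12) / devices
per_device_gb_day = per_device_gb_year / 365

print(f"~{per_device_gb_year:,.0f} GB per device per year")   # ~1,909 GB
print(f"~{per_device_gb_day:.1f} GB per device per day")      # ~5.2 GB
```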
By Stacey Keegan, Vice President, Corporate Marketing, Marvell
Chris Koopmans, Executive Vice President of Marketing and Business Operations, recently joined Patrick Moorhead and Daniel Newman, hosts of The Six Five – Insiders Edition, to discuss the future of semiconductors and the critical role they’re playing in 5G, cloud, the automotive revolution, and the borderless enterprise.
I highly encourage you to watch the video all the way to the end. Don’t have the time to catch the full episode? Skip to the closing commentary from the analysts where you’ll hear Pat and Dan discuss Marvell’s mission and focus on transforming the data infrastructure architecture of the future.
By George Hervey, Principal Architect, Marvell
In the previous TIPS to Living on the Edge, we looked at the trend of driving network intelligence to the edge. With the capacity enabled by the latest wireless networks, like 5G, the infrastructure will enable the development of innovative applications. These applications often employ a high-frequency activity model, for example video or sensors, where the activities are often initiated by the devices themselves generating massive amounts of data moving across the network infrastructure. Cisco’s VNI Forecast Highlights predicts that global business mobile data traffic will grow six-fold from 2017 to 2022, or at an annual growth rate of 42 percent1, requiring a performance upgrade of the network.
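As a quick sanity check, compounding a 42 percent annual growth rate over the five years from 2017 to 2022 does indeed land close to a six-fold increase:

```python
# Compounding 42% annual growth over the five years from 2017 to 2022.
cagr = 0.42
years = 2022 - 2017

growth_multiple = (1 + cagr) ** years
print(f"{growth_multiple:.2f}x")   # ~5.77x, i.e. roughly six-fold
```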
By George Hervey, Principal Architect, Marvell
The mobile phone has become such an essential part of our lives as we move towards more advanced stages of the “always on, always connected” model. Our phones provide instant access to data and communication mediums, and that access influences the decisions we make and ultimately, our behavior.
According to Cisco, global mobile networks will support more than 12 billion mobile devices and IoT connections by 2022.1 And these mobile devices will support a variety of functions. Already, our phones replace gadgets and enable services. Why carry around a wallet when your phone can provide Apple Pay, Google Pay or make an electronic payment? Who needs to carry car keys when your phone can unlock and start your car or open your garage door? Applications now also include live streaming services that enable VR/AR experiences and sharing in real time. While future services and applications seem unlimited to the imagination, they require next-generation data infrastructure to support and facilitate them.
By Stacey Keegan, Vice President, Corporate Marketing, Marvell
Marvell President and CEO, Matt Murphy, sat down with Jim Cramer of CNBC’s Mad Money for a virtual chat about Marvell’s focus on data infrastructure opportunities. Jim opened the segment congratulating Matt on a spectacular quarter and the company’s 25th anniversary: “This is not the Marvell of 25 years ago or even 25 months ago.”
With the company’s new brand identity on display, Matt explained: “It’s been a great 25-year celebration of the company this year. We’ve gone through a pretty substantial transformation really over the last 3-4 years…we set out to transform the company into a long-term player with a real focus around what we viewed as the data infrastructure opportunity and as you can see, as the strategy has played out over the last few years it’s extremely relevant in today’s environment, with the major growth drivers of the company now being 5G, cloud, segments of enterprise and automotive.”
By Chris Koopmans, EVP of Marketing and Business Operations
Our transformation
Today we launched Marvell’s new brand identity. In many ways it’s long overdue – as it represents the transformation journey we have been on over the past four years and reflects the new company we have already become.
Often when a company embarks on a major change, one of the first things they focus on is their external image. They update their logo, web page, and collateral to resemble the brand they aspire to, and then work to make reality match the aspiration. But in our industry, a company’s brand is their reputation, and a reputation is earned. That’s why, at Marvell, we started with the hard part – we established our new strategy, transformed our business and revamped our culture first, and now we are revealing a new brand that reflects who we are. I believe that signifies our culture – focus on the substance first.
I joined Marvell four years ago – just a few weeks prior to Matt taking over as CEO – so I’ve been on this journey every step of the way. I’ve had the privilege of holding a variety of leadership roles through the transformation, giving me a unique perspective on the company we have become. So, when Matt came to me last year with a new challenge – to lead our Marketing team through a rebuild of Marvell’s external image – I was thrilled about the opportunity.
By Matt Murphy, Chairman and Chief Executive Officer, Marvell
The world as we knew it just a few months ago may never be quite the same again. This pandemic has created an extraordinary crisis that will test our systems and our values alike. It also reminds us of what we hold most dear – our personal connections with other people. As billions of us shelter in our homes around the clock – working, exercising, educating, and sharing the same space - it is an opportunity to appreciate quality time with our immediate family. But we are also missing our connections with loved ones who aren’t under the same roof, and worry about the health and safety of our more vulnerable relatives. We miss seeing our colleagues at work – not on a computer screen, but in hallways and over lunch. And we miss seeing friends in our communities, enjoying a meal at our favorite restaurant, shopping at the farmers market, and hiking in our now shuttered parks.
By Stacey Keegan, Vice President, Corporate Marketing, Marvell
Marvell President and CEO, Matt Murphy, joined Jim Cramer and the team of CNBC’s Squawk Alley to talk about the impact of COVID-19 on the semiconductor market, including 5G, the broader business community and the company’s unwavering mission.
Matt shared his thoughts on the critical role that semiconductor technology plays in the current world crisis, particularly given the excess load that has been put on every nation’s data infrastructure due to remote work. And, how semiconductors are the essential building blocks of the networks of the world – from 5G to cloud infrastructure to advanced interconnect products.
“Companies are re-thinking their work force and footprint and 5G is a critical part of that. 5G will play a key role not only because of the bandwidth, improved data rates, but the lower latency and improved reliability of that network I think will be a big deal for remote work and different ways of working in the future.”
By Gopal Hegde, Vice President and General Manager of the Server Processor Business Unit, Marvell
The data centers of today have shifted from a focus on single thread performance to performance at rack scale, with performance/watt, performance/$ and overall TCO being the key drivers to deployment. These data centers are making use of servers that are customized for specific workloads. The applications running on these servers are either based on open source software or controlled by the customers who deploy them. Marvell’s ThunderX2® server processor is a leading example of this evolution in the server market, with deployments spanning the cloud and HPC market segments and major customers like Microsoft Azure and the Astra Top 500 Supercomputer installation at Sandia National Laboratories.
By Avishai Ziv, General Manager of Security Solutions Business Unit, Marvell
Given a choice, few enterprises will change their security solutions and deployments, especially for data encryption. That’s because any change in data encryption can be a painful and daunting task. But in today’s world of growing threats to data security, is there really a choice?
Risk is High
Most of the data breaches we read about in the headlines, which have affected hundreds of millions of customers at prominent companies, have involved data that wasn’t encrypted. Unfortunately, such lax security is all too common. A recent McAfee survey of 12,000 companies revealed that only 9% encrypt their data at rest in the cloud, and only 1% use customer-managed encryption keys.
By Graham Forrest, Server and Storage Practice Lead, Enterprise Group
gforrest@dtpgroup.co.uk
DTP Group, Leeds UK
Being a trusted advisor to our customers means making sure what we recommend, configure and install works as expected. That’s why here at DTP, we recommend Marvell® QLogic® Fibre Channel and Marvell FastLinQ® Ethernet I/O for all our server and storage connectivity solutions.
The DTP Group has over 30 years of experience in delivering technology and solutions to our customers. We only recommend HPE for server, storage and networking solutions because we know the technology, get great support from the HPE organization and trust the HPE brand. Within the HPE portfolio, there are technology choices that need to be considered, especially when it comes to I/O connectivity. HPE makes a variety of Ethernet and Fibre Channel connectivity options available, sourced from several different manufacturers. After evaluating all of them, the DTP PreSales and Technical teams have chosen to standardize on 10/25/50GbE based on Marvell FastLinQ technology and on 16GFC and 32GFC based on Marvell QLogic technology.
By Todd Owens, Field Marketing Director, Marvell
Innovation can come in many forms. Sometimes it’s with a completely new technology, sometimes with updating an existing product and in other cases, just by changing the approach to how you acquire and deliver a product. It is the latter that is the latest innovation from Hewlett Packard Enterprise (HPE) when it comes to I/O connectivity.
In conjunction with the launch of HPE ProLiant and Apollo Gen10 Plus servers, the HPE Server I/O Options team developed a new approach for sourcing and qualifying Ethernet adapters for these servers. Deploying what they call Industry Standard Adapters, HPE can now better meet the needs of their end customers with an increased number of options when it comes to firmware and driver updates for their Ethernet adapters in HPE servers.
Traditionally, HPE would source I/O technology from OEM suppliers such as Marvell, create custom model numbers and specifications for adapters and make firmware and drivers available only from HPE. These “custom” adapters were often referred to as HPE-optimized. Starting with Gen10 Plus servers, HPE is eliminating the customization and using standard adapters that can work not only in HPE servers, but in others as well. Hence the term “Industry Standard Adapters.”
Marvell is glad to be a strategic partner of HPE, providing a wide variety of Marvell® FastLinQ® adapters that are fully qualified and supported by HPE on the HPE ProLiant Gen10 Plus servers. Below are the current offerings from Marvell for HPE Gen10 Plus servers.
HPE Part Number | Model Name | Product Description |
P08437-B21 | QL41132HLRJ | HPE Ethernet 10Gb 2-port BASE-T QL41132HLRJ Adapter |
P10103-B21 | QL41132HQRJ | HPE Ethernet 10Gb 2-port BASE-T QL41132HQRJ OCP3 Adapter |
P21933-B21 | QL41132HLCU | HPE Ethernet 10Gb 2-port SFP+ QL41132HLCU Adapter |
P08452-B21 | QL41132HQCU | HPE Ethernet 10Gb 2-port SFP+ QL41132HQCU OCP3 Adapter |
P10094-B21 | QL41134HLCU | HPE Ethernet 10GbE 4-port SFP+ QL41134HLCU Adapter |
P22702-B21 | QL41232HLCU | HPE Ethernet 10/25Gb 2-port SFP28 QL41232HLCU Adapter |
P10118-B21 | QL41232HQCU | HPE Ethernet 10/25Gb 2-port SFP28 QL41232HQCU OCP3 Adapter |
With the new approach, HPE Gen10 Plus customers can see the Marvell model numbers in the HPE product description and identify the Marvell vendor and product IDs at server boot. Then they will be directed to Marvell for detailed specifications, user guides, technical briefs and even firmware and/or driver downloads. The support model will not change and HPE will continue to provide level 1 - 3 support. The new approach will benefit HPE customers in a number of ways:
For those HPE ProLiant customers who prefer utilizing HPE-specific deployment software and utilities like HPE System Insight Manager (SIM), System Update Manager (SUM) or Service Pack for ProLiant (SPP), HPE will also make firmware and drivers available through their normal quarterly processes.
The Marvell portfolio for HPE ProLiant Gen10 Plus includes PCIe and OCP 3.0 form factor adapters in 1/10GBASE-T, 10Gb SFP+ and 10/25GbE SFP28 variants. All these adapters support Marvell’s industry-leading list of features and capabilities, including:
For more details on Marvell’s FastLinQ adapters, download the family product brief here.
For more information on Universal RDMA, SmartAN technology and other unique Marvell FastLinQ capabilities, visit our Follow the Wire video library.
By Stacey Keegan, Vice President, Corporate Marketing, Marvell
Today marks the close of the acquisition of Avera Semi.
Avera brings over two decades of expertise developing custom ASIC solutions for the infrastructure market, further enabling Marvell to offer a full suite of leading semiconductor solutions. With this acquisition, Marvell will provide the complete spectrum of product architectures spanning standard, semi-custom to full ASIC solutions. We are proud to offer world class custom ASIC design services to our OEM partners.
To learn more, read our latest press release:
https://www.marvell.com/company/news/pressDetail.do?releaseID=11497.
By Stacey Keegan, Vice President, Corporate Marketing, Marvell
Marvell today announced that it has successfully completed its acquisition of Aquantia.
Aquantia pioneered Multi-Gig technology – now the basis for high speed networking in a broad range of applications from enterprise campuses to autonomous cars. Their portfolio complements Marvell’s industry-leading PHYs, switches and processors, creating an unparalleled networking platform and enabling customers to develop systems that span megabits to terabits per second. To learn more, read our latest press release https://www.marvell.com/company/news/pressDetail.do?releaseID=11257.
By Prabhu Loganathan, Senior Director of Marketing for Connectivity Business Unit, Marvell
Wi-Fi Alliance®, the industry alliance responsible for driving certification efforts worldwide to ensure interoperability and standards for Wi-Fi® devices, today announced Wi-Fi CERTIFIED 6™, the industry certification program based on the IEEE 802.11ax standard. Marvell’s 88W9064 (4x4) and 88W9068 (8x8) Wi-Fi 6 solutions are among the first to be Wi-Fi 6 certified and have been selected to be included in the Wi-Fi Alliance interoperability test bed.
Wi-Fi CERTIFIED 6™ ensures interoperability and an improved user experience across all devices running IEEE 802.11ax technology. Wi-Fi 6 benefits both the 5 GHz and 2.4 GHz bands, incorporating major fundamental enhancements like Multi-User MIMO, OFDMA, 1024-QAM, BSS coloring and Target Wake Time. Wi-Fi 6 delivers faster speeds with low latency, high network utilization, and power-saving technologies that provide substantial benefits spanning from high-density enterprises all the way to battery-operated, low-power IoT devices.
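To see where part of the speed gain comes from, consider the jump from 256-QAM to 1024-QAM: each symbol carries two more bits. A tiny illustrative calculation:

```python
# Bits carried per modulation symbol: log2 of the constellation size.
from math import log2

bits_256qam = log2(256)     # 8 bits per symbol (Wi-Fi 5)
bits_1024qam = log2(1024)   # 10 bits per symbol (Wi-Fi 6)

gain = (bits_1024qam - bits_256qam) / bits_256qam
print(f"{bits_1024qam:.0f} vs {bits_256qam:.0f} bits/symbol -> "
      f"{gain:.0%} more raw throughput per spatial stream, all else equal")
```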
Marvell played a leading role in shaping Wi-Fi 6 and enabling Wi-Fi CERTIFIED 6 to ensure seamless interoperability and drive rapid adoption in the marketplace. Wi-Fi Alliance forecasts that over 1.6 billion devices supporting Wi-Fi 6 will be shipped worldwide by 2020. Marvell is at the forefront of this wave, with our Wi-Fi CERTIFIED 6 solutions being designed into exciting new products spanning the infrastructure access, premium client and automotive markets.
For more information, you can visit www.marvell.com/wireless.
By Stacey Keegan, Vice President, Corporate Marketing, Marvell
Monday, August 26, marked the anniversary of the Nineteenth Amendment being signed into law and granting American women the constitutional right to vote. To commemorate the impending 100-year celebration, the Silicon Valley Leadership Group kicked off a series of organized events meant to honor women trailblazers and host a call to women leaders from all walks of life. Built into our core values around diversity and equality, empowering women to be leaders is important to Marvell and its executive leadership team, and the company was a proud sponsor of Monday’s event.
The day-long series of panel discussions celebrating the achievements of women since the suffrage movement took place at NASA’s Ames Research Center in Mountain View and was filled with some of the most accomplished women in Silicon Valley and the country.
Monday’s event, featuring House Speaker Nancy Pelosi (the highest-ranking woman in U.S. politics and second in line to the presidency), Democratic congresswoman Anna Eshoo (whose district includes Moffett Field), and NASA astronaut Megan McArthur, focused more broadly on women’s leadership and the state of gender equality in business.
The day included a panel called Women in Innovation and another named Women Leaders Across Generations, each moderated and filled with some of the top women at local and national companies like Silicon Valley Bank, Ripple, eBay, AT&T, Genentech, NASA, Marvell, Micron, and the San Francisco 49ers. Marvell’s Chief Compliance Officer, Regan MacPherson, moderated the latter during which science and tech executives used their own experiences to frame a conversation on how far women have come and how far they still must go.
By Kumar Sankaran, Senior Director, Product Management, Marvell
Marvell has been selected by IT Brand Pulse’s 2019 IT pro voting as the leader in Arm®-based server CPUs. In a clean sweep across all categories, Marvell’s ThunderX® was voted as the leader in market, price, performance, reliability, service and support, and innovation. The results are based on an independent, non-sponsored survey given to IT professionals on server products. The survey is conducted once a year by IT Brand Pulse, a trusted source for research, data and analysis about data center infrastructure.
Marvell® ThunderX® processors, based on the Armv8-A architecture, bring industry-leading compute and memory performance as well as technology innovation backed by a rich ecosystem of more than 70 partners. As the most widely supported and deployed Arm-based server processor in the world, Marvell ThunderX processors power High-Performance Computing, Cloud and Edge applications.
By Todd Owens, Field Marketing Director, Marvell
Data is the new currency for many businesses today. The ability to access, analyze and act on data has become a competitive advantage for many companies. While much attention is paid to the storage devices and compute required to optimize data processing, the I/O infrastructure is often overlooked. The reality is that I/O technologies are just as important as the core count of the CPU or the capacity and latency of the storage array. The time to future-proof your network is now, and Marvell is here to help.
Marvell has been a long-time Hewlett Packard Enterprise (HPE) supplier of I/O technology used in the HPE ProLiant, Apollo, HPE Synergy, and HPE Storage offerings. Over the past year, a new generation of Ethernet I/O has begun making its way into these HPE platforms. Based on Marvell® FastLinQ® QL41000 and QL45000 Ethernet technology, these new adapters allow HPE customers to “future-proof” their network connectivity for what’s coming next in the data center.
The QL41000 and QL45000 adapter technology provides several new capabilities not found in other I/O offerings for HPE. Advancements include:
These are in addition to enhanced DPDK performance (up to 36Mpps bi-directional) and support for SR-IOV, TCP/IP stateless offloads, IEEE 1588 time stamping and more.
The FastLinQ 41000 series technology can be found in next-generation Flexible LOM Rack (FLR) and standup PCIe adapters for HPE ProLiant and Apollo servers. Models include:
These adapters allow HPE Server customers to future-proof their Rack and Tower servers with RDMA for use in Hyper-Converged Infrastructure (HCI) and Software Defined Storage (SDS) solutions; and make the transition from 10GbE to 25GbE connectivity seamless at the server. These I/O devices are ideal for customers considering Microsoft Azure Stack HCI or VMware vSAN environments, or the deployment of any latency sensitive application.
The FastLinQ 45000 series technology can be found in next-generation mezzanine adapters for HPE Synergy, including:
With Universal RDMA support, improved DPDK performance and high-bandwidth capability, these adapters are ideal for customers with VMware ESXi or Microsoft Hyper-V deployments, and for Telco or high-frequency trading applications.
Since many applications today will start to require more I/O performance and low-latency RDMA, HPE’s next-gen Ethernet adapters will go a long way toward future-proofing network connectivity for server customers.
For a complete list of Marvell FastLinQ Ethernet adapters for HPE Servers and the features they support, download our HPE FastLinQ Ethernet Quick Reference guide. If you would like to discuss I/O technology or customer needs in more detail, contact our HPE team. You can also visit the Marvell HPE microsite at www.marvell.com/hpe.
By Larry Wikelius, Vice President, Ecosystem and Partner Enabling, Marvell
ISC High Performance, which just wrapped up today in Frankfurt, Germany, is one of the most significant server events of the year and is often a catalyst for key major industry announcements. This year’s event was no exception with NVIDIA announcing its support for servers based on the Arm architecture. With this move, NVIDIA will make its full stack of AI and high-performance computing software available to the Arm ecosystem by the end of 2019. The stack includes all NVIDIA CUDA-X AI and HPC libraries, GPU-accelerated AI frameworks and software development tools such as PGI compilers with OpenACC support and profilers. NVIDIA’s full software suite support will enable the acceleration of more than 600 HPC applications and AI frameworks on Marvell® ThunderX2® systems.
NVIDIA’s support for Arm CPUs marks continued growth of the Arm-based server ecosystem. Marvell has been a leading driver in the establishment of a standard, complete and competitive ecosystem around the Arm architecture, ranging from low-level firmware through system software to commercial ISV applications. The Marvell ThunderX2 processor is the most widely deployed Arm server in the market today and the only Arm server on the prestigious Top 500 supercomputer list, with the Astra system at Sandia National Laboratories.
NVIDIA’s announcement underscores the growing momentum of Marvell ThunderX2 in both high-performance computing and cloud deployments. The entire industry is very excited about the ability to combine the computational performance and memory bandwidth of ThunderX2 with the parallel processing capabilities of the GPU. NVIDIA’s commitment to the complete software stack is particularly important and is yet another high value solution option in the broadly supported software offering on ThunderX2. Most ThunderX2 systems have been designed with GPU support in mind from the beginning which enables a simple upgrade for today’s installed base.
Marvell welcomes NVIDIA to the ThunderX2 ecosystem and we look forward to working with customers on this exciting server solution. See the press release here.
Read more about what the industry is saying about the announcement, and what this means to high-performance computing:
Forbes – NVIDIA Gives Arm a Boost In AI And HPC
The Next Platform - Nvidia Makes Arm A Peer To X86 And Power For GPU Acceleration
HPCwire - Nvidia Embraces Arm, Declares Intent to Accelerate All CPU Architectures
By Stacey Keegan, Vice President, Corporate Marketing, Marvell
Marvell President and CEO Matt Murphy joined Jim Cramer, host of the widely acclaimed finance television program, Mad Money, for an engaging discussion on where Marvell has been and where the company is headed.
Jim was interested in learning more about the four deals that Marvell announced over the last month including the acquisitions of Aquantia and Avera, and the 5G growth opportunities moving forward.
In his interview with Jim, Matt highlighted the key technologies needed to play in the infrastructure market—processors, networking, storage and security—all of which Marvell has. Aquantia helps Marvell strengthen its move to the connected car where a best-in-class network is needed to address the trends in autonomous vehicles, electrification, and safety/security, and the shift from analog interfaces to Ethernet technology.
With the acquisition of Avera, Marvell essentially doubles down on 5G, Matt explains. He emphasizes that the 5G cycle is just beginning —“not even in the first inning yet”— with the build-out of this infrastructure starting now and continuing robustly through 2020. Avera’s biggest end market exposure is base stations, and Avera will enable a new custom chip design business for Marvell.
To hear more from Matt and Jim’s discussion, and learn about how Marvell is at the infrastructure epicenter to address 5G, the cloud, AI, enterprise hardware and the connected car, click on the video below.
Marvell Technology CEO: We are 'extremely well positioned' for 5G from CNBC.
By Stacey Keegan, Vice President, Corporate Marketing, Marvell
The Marvell Finance team, in partnership with Deloitte, has undergone a significant transformation over the past three years to ensure the delivery of error-free financial reporting and analytics for the business. After successfully completing another 10Q filing on June 7, the Marvell Finance team and Deloitte celebrated by giving back to the local community and volunteering at the Salvation Army. Proceeds from donated goods sold in Salvation Army Family Stores are used to fund the Salvation Army’s Substance Abuse program in downtown San Jose. The program provides housing, work preparedness and rehabilitation free of charge to registered participants.
This event demonstrates Marvell’s commitment to enriching the communities where we live and work. Well done, team.
By Regan MacPherson, Chief Compliance Officer
Women today comprise 47% of the overall workforce; however, only 15% choose engineering. As part of Marvell’s commitment to diversity and inclusion in the workplace, the company is proud to support the GSA Women’s Leadership Initiative (WLI) to make an impact for women in STEM moving forward.
The GSA WLI seeks to significantly grow the number of women entering the semiconductor industry and increase the number of women on boards and in leadership positions.
As part of the initiative, which was announced yesterday, the GSA has established the WLI Council that will create and implement programs and projects towards meeting the WLI objectives. WLI Council harnesses the leadership of women who have risen to the top ranks of the semiconductor industry. Marvell’s own chief financial officer, Jean Hu, alongside 16 other women executives, will utilize their experiences to provide inspiration for and sponsorship of the next generation of female leaders.
“I am honored to be amongst a highly talented and diverse group of women at GSA WLI Council to help ensure that women are an integral part of the leadership of the semiconductor industry,” said Jean Hu, CFO of Marvell. “Marvell and GSA share a vision to elevate the women in STEM and support female entrepreneurs in their efforts to succeed in the tech industry.”
For more information on the GSA WLI, please visit https://www.gsaglobal.org/womens-leadership/. You can also join the Leadership group on LinkedIn to get involved.
By Nishant Lodha, Director of Product Marketing – Emerging Technologies, Marvell
A bit of validation once in a while is good for all of us - that’s pretty true whether you are the one providing it or, conversely, the one receiving it. Most of the time it seems to be me that is giving out validation rather than getting it. Like the other day when my wife tried on a new dress and asked me, “How do I look?” Now, of course, we all know there is only one way to answer a question like that - if you want to avoid sleeping on the couch, at least.
Recently, the Marvell team received some well-deserved validation for its efforts. The FastLinQ 45000/41000 series of high-performance Ethernet Network Interface Controllers (NICs) that we supply to the industry, which supports 10/25/50/100GbE operation, is now fully qualified by Red Hat for Fast Data Path (FDP) 19.B.
Figure 1: The FastLinQ 45000 and 41000 Ethernet Adapter Series from Marvell
Red Hat FDP is employed in an extensive array of the products found within the Red Hat portfolio - such as the Red Hat OpenStack Platform (RHOSP), as well as the Red Hat OpenShift Container Platform and Red Hat Virtualization (RHV). Having FDP-qualification means that FastLinQ can now address a far broader scope of the open-source Software Defined Networking (SDN) use cases - including Open vSwitch (OVS), Open vSwitch with the Data Plane Development Kit (OVS-DPDK), Single Root Input/Output Virtualization (SR-IOV) and Network Functions Virtualization (NFV). The engineers at Marvell worked closely with our counterparts at Red Hat on this project, in order to ensure that the FastLinQ feature set would operate in conjunction with the FDP production channel. This involved many hours of complex, in-depth testing. By being FDP 19.B qualified, Marvell FastLinQ Ethernet Adapters can enable seamless SDN deployments with RHOSP 14, RHEL 8.0, RHEV 4.3 and OpenShift 3.11.
Widely recognized as the data networking ‘Swiss Army Knife,’ our FastLinQ 45000/41000 Ethernet adapters benefit from a highly flexible, programmable architecture. This architecture is capable of delivering up to 68 million small packets per second and 240 SR-IOV virtual functions, and supports tunneling while maintaining stateless offloads. As a result, customers have the hardware they need to seamlessly implement and manage even the most challenging network workloads in what is becoming an increasingly virtualized landscape. Unlike most competing NICs, these adapters support Universal RDMA (concurrent RoCE, RoCEv2 and iWARP operation), offering a highly scalable and flexible solution. Learn more here.
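For readers who want to experiment with one of these SDN building blocks, the sketch below shows how SR-IOV virtual functions are typically enabled on a Linux host through the standard kernel sysfs attributes; the interface name is a placeholder, and the exact VF counts depend on the adapter, driver and platform.

```python
# Minimal sketch, assuming a Linux host with root privileges and an SR-IOV
# capable adapter. "eth0" is a placeholder interface name; sriov_totalvfs and
# sriov_numvfs are the standard kernel sysfs attributes for querying and
# setting the number of virtual functions.
from pathlib import Path

def enable_sriov_vfs(iface: str, requested_vfs: int) -> int:
    dev = Path(f"/sys/class/net/{iface}/device")
    max_vfs = int((dev / "sriov_totalvfs").read_text())
    vfs = min(requested_vfs, max_vfs)
    (dev / "sriov_numvfs").write_text("0")       # reset before changing the VF count
    (dev / "sriov_numvfs").write_text(str(vfs))
    return vfs

if __name__ == "__main__":
    print(enable_sriov_vfs("eth0", 16), "virtual functions enabled")
```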
Validation feels good. Thank you to the Red Hat and Marvell teams!
By George Hervey, Principal Architect, Marvell
Though established, mega-scale cloud data center architectures were adequately able to support global data demands for many years, there is a fundamental change taking place. Emerging 5G, industrial automation, smart cities and autonomous cars are driving the need for data to be directly accessible at the network edge. New architectures are needed in the data center to support these new requirements including reduced power consumption, low latency and smaller footprints, as well as composable infrastructure.
Composability provides a disaggregation of data storage resources to bring a more flexible and efficient platform for data center requirements to be met. But it does, of course, need cutting-edge switch solutions to support it. Capable of running at 12.8Tbps, the Marvell® Prestera® CX 8500 Ethernet switch portfolio has two key innovations that are set to redefine data center architectures: Forwarding Architecture using Slices of Terabit Ethernet Routers (FASTER) technology and Storage Aware Flow Engine (SAFE) technology.
With FASTER and SAFE technologies, the Marvell Prestera CX 8500 family can reduce overall network costs by more than 50%; lower power, space and latency; and determine exactly where congestion issues are occurring by providing complete per-flow visibility.
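For context, 12.8 Tbps of switching capacity can be carved into front-panel ports in several ways; the arithmetic below is illustrative and not a statement of supported product configurations.

```python
# Illustrative arithmetic: carving 12.8 Tbps of switching capacity into
# front-panel ports (not a statement of supported product configurations).

capacity_gbps = 12_800
for port_speed_gbps in (100, 200, 400):
    ports = capacity_gbps // port_speed_gbps
    print(f"{ports:3d} x {port_speed_gbps}GbE ports")
# 128 x 100GbE, 64 x 200GbE, 32 x 400GbE
```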
View the video below to learn more about how Marvell Prestera CX 8500 devices represent a revolutionary approach to data center architectures.
By Todd Owens, Field Marketing Director, Marvell
Today, Remote Direct Memory Access (RDMA) is primarily being utilized within high-performance computing or cloud environments to reduce latency across the network. Enterprise customers will soon require the low-latency networking that RDMA offers so that they can address a variety of applications, such as Oracle and SAP, and also implement software-defined storage using Windows Storage Spaces Direct (S2D) or VMware vSAN. There are three protocols that can be used in RDMA deployment: RDMA over InfiniBand, RDMA over Converged Ethernet (RoCE), and iWARP (RDMA over TCP/IP). Given that there are several possible routes to go down, how do you ensure you pick the right protocol for your specific tasks?
In the enterprise sector, Ethernet is by far the most popular transport technology. Consequently, we can ignore the InfiniBand option, as it would require a forklift upgrade of the existing I/O infrastructure - thus making it way too costly for the vast majority of enterprise data centers. So, that just leaves RoCE and iWARP. Both can provide low latency connectivity over Ethernet networks. But which is right for you?
Let’s start by looking at the fundamental differences between these two protocols. RoCE is the most popular of the two and is already being used by many cloud hyper-scale customers worldwide. RDMA enabled adapters running RoCE are available from a variety of vendors including Marvell.
RoCE provides latency at the adapter in the 1-5us range but requires a lossless Ethernet network to achieve low latency operation. This means that the Ethernet switches integrated into the network must support data center bridging and priority flow control mechanisms so that lossless traffic is maintained. It is likely they will therefore have to be reconfigured to use RoCE. The challenge with the lossless or converged Ethernet environment is that configuration is a complex process and scalability can be very limited in a modern enterprise context. Now it is not impossible to use RoCE at scale but to do so requires the implementation of additional traffic congestion control mechanisms, like Data Center Quantized Congestion Notification (DCQCN), which in turn calls for large, highly-experienced teams of network engineers and administrators. Though this is something that hyper-scale customers have access to, not all enterprise customers can say the same. Their human resources and financial budgets can be more limited.
Looking back through the history of converged Ethernet environments, one need look no further than Fibre Channel over Ethernet (FCoE) to see the size of the challenge involved. Five years ago, many analysts and industry experts claimed FCoE would replace Fibre Channel in the data center. That simply didn’t happen, because of the complexity associated with using converged Ethernet networks at scale. FCoE still survives, but only in closed environments like HPE BladeSystem or HPE Synergy servers, where the network properties and scale are carefully controlled. These are single-hop environments with only a few connections in each system.
Finally, we come to iWARP. This came on the scene after RoCE and has the advantage of running on today’s standard TCP/IP networks. It provides latency at the adapter in the range of 10-15us. This is higher than what one can achieve by implementing RoCE but is still orders of magnitude below that of standard Ethernet adapters.
They say that if all you have is a hammer, everything looks like a nail. The same holds true for vendors touting their RDMA-enabled adapters. Most vendors support only one protocol, so of course that is the protocol they will recommend. Here at Marvell, we are unique in that, with our Universal RDMA technology, a customer can use both RoCE and iWARP on the same adapter. This makes us effectively protocol agnostic and gives us more credibility when making recommendations. It matters from a customer standpoint, because it means we look at what is the best fit for their application criteria.
So which RDMA protocol do you use when? When latency is the number one criterion and scalability is not a concern, the choice should be RoCE. You will see RoCE implemented as the back-end network in modern disk arrays, between the controller node and NVMe drives. You will also find RoCE deployed within a rack, or where there are only one or two top-of-rack switches and subnets to contend with. Conversely, when latency is a key requirement but ease of use and scalability are also high priorities, iWARP is the best candidate. It runs on the existing network infrastructure and can easily scale between racks and even across long distances between data centers. A great use case for iWARP is as the network connectivity option for Microsoft Storage Spaces Direct implementations.
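To make these rules of thumb easier to apply, here is a minimal, purely illustrative Python sketch that encodes the guidance above as a simple selection helper. The criteria names are assumptions made for this example; it is not a Marvell tool or an exhaustive decision framework.

```python
def recommend_rdma_protocol(latency_is_top_priority: bool,
                            needs_scale_beyond_rack: bool,
                            has_lossless_ethernet_expertise: bool) -> str:
    """Rule-of-thumb RDMA protocol selector based on the guidance above.

    Returns "RoCE" when the absolute lowest latency matters and the
    deployment stays small (e.g. the back end of a disk array or a single
    rack), and "iWARP" when ease of use and scalability across racks or
    data centers matter more. Criteria names are illustrative only.
    """
    if latency_is_top_priority and not needs_scale_beyond_rack:
        return "RoCE"   # needs DCB/PFC lossless Ethernet, plus DCQCN at scale
    if needs_scale_beyond_rack or not has_lossless_ethernet_expertise:
        return "iWARP"  # runs over standard TCP/IP, easier to scale and operate
    return "RoCE"


# Example: a Storage Spaces Direct cluster spanning several racks
print(recommend_rdma_protocol(latency_is_top_priority=True,
                              needs_scale_beyond_rack=True,
                              has_lossless_ethernet_expertise=False))  # -> iWARP
```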
The good news for enterprise customers is that several Marvell® FastLinQ® Ethernet Adapters from HPE support Universal RDMA, so they can take advantage of low latency RDMA in the way that best suits them. Here’s a list of HPE Ethernet adapters that currently support both RoCE and iWARP RDMA. With RDMA-enabled adapters for HPE ProLiant, Apollo, HPE Synergy and HPE Cloudline servers, Marvell has a strong portfolio of 10GbE and 25GbE connectivity solutions for data centers. In addition to supporting low latency RDMA, these adapters are also NVMe-ready. This means they can accommodate NVMe over Ethernet fabrics running RoCE or iWARP, as well as supporting NVMe over TCP (with no RDMA). They are a great choice for future-proofing the data center today for the workloads of tomorrow.
For more information on these and other Marvell I/O technologies for HPE, go to www.marvell.com/hpe.
If you’d like to talk with one of our I/O experts in the field, you’ll find contact info here.
By George Hervey, Principal Architect, Marvell
The data center networking landscape is set to change dramatically. More adaptive and operationally efficient composable infrastructure will soon start to see significant uptake, supplanting the traditional inflexible, siloed data center arrangements of the past and ultimately leading to universal adoption.
Composable infrastructure takes a modern software-defined approach to data center implementations. This means that rather than having to build dedicated storage area networks (SANs), a more versatile architecture can be employed through utilization of NVMe and NVMe-over-Fabrics protocols.
Whereas previously data centers had separate resources for each key task, composable infrastructure enables compute, storage and networking capacity to be pooled together, with each function accessible via a single unified fabric. This brings far greater operational efficiency, with better allocation of available resources and less risk of over-provisioning - critical as edge data centers are introduced to the network to address different workload demands.
Composable infrastructure will be highly advantageous to the next wave of data center implementations, though the increased degree of abstraction it introduces presents certain challenges - mainly in dealing with acute network congestion, especially in multiple-host scenarios. Serious congestion issues can occur, for example, when several hosts attempt to retrieve data from the same part of the storage resource simultaneously. Such problems are exacerbated in larger scale deployments, where there are several network layers to consider and visibility is therefore more restricted.
There is a pressing need for a more innovative approach to data center orchestration. A major streamlining of the network architecture will be required to support the move to composable infrastructure, with fewer network layers involved, thereby enabling greater transparency and resulting in less congestion.
This new approach will simplify data center implementations, thus requiring less investment in expensive hardware, while at the same time offering greatly reduced latency levels and power consumption.
Further, the integration of advanced analytical mechanisms will be of significant value here as well, helping with more effective network management and facilitating network diagnostics. Storage and compute resources will be better allocated to where there is the greatest need, and stranded capacity will no longer be a heavy financial burden.
Through the application of a more optimized architecture, data centers will be able to fully embrace the migration to composable infrastructure. Network managers will have a much better understanding of what is happening right down at the flow level, so that appropriate responses can be deployed in a timely manner. Future investments will be directed to the right locations, optimizing system utilization.
By Nishant Lodha, Director of Product Marketing – Emerging Technologies, Marvell
Whether it is the aesthetics of the iPhone or a work of art like Monet’s ‘Water Lilies’, simplicity is often a very attractive trait. I hear this resonate in everyday examples from my own life - with my boss at work, whose mantra is “make it simple”, and my wife of 15 years telling my teenage daughter “beauty lies in simplicity”. For the record, both of these statements generally fall upon deaf ears.
Non-Volatile Memory Express (NVMe) technology, which is now driving the progression of data storage, is another place where the value of simplicity is starting to be recognized - in particular with the advent of the NVMe-over-Fabrics (NVMe-oF) topology that is just about to start seeing deployment. The simplest and most trusted of fabrics, namely Transmission Control Protocol (TCP) running over Ethernet, has now been confirmed as an approved NVMe-oF standard by the NVMe Group[1].
Figure 1: All the NVMe fabrics currently available
Just to give a bit of background information here, NVMe enables the efficient utilization of flash-based Solid State Drives (SSDs) by accessing them over a high-speed interface, like PCIe, and using a streamlined command set that is specifically designed for flash implementations. By definition, NVMe is limited to the confines of a single server, which presents a challenge when looking to scale out NVMe and access it from any element within the data center. This is where NVMe-oF comes in. All Flash Arrays (AFAs), Just a Bunch of Flash (JBOF) or Fabric-Attached Bunch of Flash (FBOF) and Software Defined Storage (SDS) architectures will each be able to incorporate a front end that has NVMe-oF connectivity as its foundation. As a result, the effectiveness with which servers, clients and applications are able to access external storage resources will be significantly enhanced.
A series of ‘fabrics’ have now emerged for scaling out NVMe. The first of these was Ethernet Remote Direct Memory Access (RDMA) - in both its RDMA over Converged Ethernet (RoCE) and Internet Wide Area RDMA Protocol (iWARP) derivatives. It was followed soon after by NVMe over Fibre Channel (FC-NVMe), and then by fabrics based on FCoE, InfiniBand and OmniPath.
But with so many fabric options already out there, why is it necessary to come up with another one? Do we really need NVMe-over-TCP (NVMe/TCP) too? Well, RDMA-based NVMe fabrics (whether RoCE or iWARP) are supposed to deliver the extremely low latency that NVMe requires via a myriad of technologies - like zero copy and kernel bypass - driven by specialized Network Interface Controllers (NICs). However, there are several factors which hamper this, and these need to be taken into account.
Unlike any other NVMe fabric, the pervasiveness of TCP is huge - it is absolutely everywhere. TCP/IP is the fundamental foundation of the Internet, and every single Ethernet NIC and network out there supports the TCP protocol. With TCP, availability and reliability are simply not issues that need to be worried about. Extending the scale of NVMe over a TCP fabric is the logical thing to do.
NVMe/TCP is fast (especially when using Marvell FastLinQ 10/25/50/100GbE NICs, which have built-in full offload for NVMe/TCP), it leverages existing infrastructure and it keeps things inherently simple. That is a beautiful prospect for any technologist, and it is attractive to company CIOs worried about budgets too.
So, once again, simplicity wins in the long run!
[1] https://nvmexpress.org/welcome-nvme-tcp-to-the-nvme-of-family-of-transports/
By Sree Durbha
Today, we are at the peak of technology product availability with the releases of the new iPhone models, Alexa-enabled devices and more. In the coming days, numerous international consumer OEMs will be preparing new offerings as we approach the holiday selling season. Along with smartphones, voice assistant enabled smart speakers and deep learning wireless security cameras, many devices and appliances are increasingly geared toward automating the home, the office and the factory. These devices are powered by application microcontroller units (MCUs) with embedded wireless connectivity that let users remotely control and operate them via phone apps, voice or even mere presence. This is part of an industry trend of pushing intelligence into everyday things. According to analyst firm Techno Systems Research[1], this chipset market grew by more than 60% over the course of the last year and is likely to continue this high rate of growth. The democratization of wireless connectivity intellectual property and the continuing shift of semiconductor design and development to low cost regions is also helping give rise to new industry players.
To help customers differentiate in this highly competitive market, Marvell has announced the 88MW320/322 low-power Wi-Fi microcontroller SoC. This chipset is 100% pin-compatible and software-compatible with existing 88MW300/302 based designs. Although the newly released microcontroller is cost-optimized, it brings several key hardware and software enhancements. Support for extended industrial temperature operation, from -40°C through to +105°C, has been added, so unlike its predecessor, the 88MW320/322 can be deployed in more challenging application areas such as LED lighting and industrial automation. No RF-specific changes have been made within the silicon, so the minimum and maximum RF performance parameters remain the same as before; however, other fixes have helped improve typical RF performance, as reported by some of our customers when evaluating samples. Since there was no change in form, fit or function, the external RF interface remains the same as well, enabling customers to leverage existing 88MW300/302 module and device level regulatory certification on the 88MW320/322. A hardware security feature has also been incorporated that allows customers to uniquely tie the chipset to the firmware running on it, helping prevent counterfeit software from running on the chipset.
This chipset is supported by the industry-leading Marvell EZ-Connect SDK for Apple’s new Advanced Development Kit (ADK) and Release 13 HomeKit Accessory Protocol SDK (R13 HAPSDK) with software-based authentication (SoftAuth), Amazon’s AWS IoT and other third-party cloud platforms. The Apple SoftAuth support now allows customers to avoid the cost and hassle of adding the MFi authentication chip, which was previously required to obtain HomeKit certification. On the applications side, we have added support for the Alexa Voice Services library. With MP3 decoder and OAUTH2 modules integrated into our SDK, our solution now allows customers to add an external audio-codec chipset to offer native voice command translation for basic product control functions. As previously announced, we continue to partner with Dialog Semiconductor to offer support for BLE, with shared-antenna managed coexistence software, alongside our Wi-Fi on the 88MW320/322.
Several of our module vendor partners have announced support for this chipset in standalone and Wi-Fi + BLE combo configurations. You can find a complete list of modules supporting this chipset on the Marvell Wireless Microcontrollers page. The 88MW320/322 has been sampling to customers for a few months now and is currently shipping. The product comes in 68-pin QFN (88MW320) and 88-pin QFN (88MW322) package formats. It is available in commercial, extended, industrial and extended industrial temperature ranges, in both tray and tape-and-reel configurations.
Watch this space for future announcements as we extend the availability of Marvell’s solutions for the smart home, office and factory to our customers through our catalog partners. The goal is to enable our wireless microcontroller solutions with easy-to-install, one-click software that allows smaller customers to use our partner reference designs to develop their form factor proof-of-concept designs, with hardware, firmware, middleware, cloud connectivity software, collateral and application support from a single source. This will free up their resources so that they can focus on what is most important to them: application software and differentiation.
The best is yet to come. As the industry demands solutions with higher levels of integration at ever lower power to allow for wireless products with months or even years of battery life, you can count on Marvell to innovate to help meet customer needs. For example, the 802.11ax standard specification is not just for high efficiency and high throughput designs; it also offers provisions for low power, long battery life designs. 20MHz-only channel operation in the 5GHz band, and features such as target wake time (TWT), which helps extend the sleep cycle of devices, dual sub-carrier modulation (DCM), which helps extend the wireless range, and uplink and downlink OFDMA, all contribute to making the next generation of devices worth waiting for.
[1] Techno Systems Research, 2017 Wireless Connectivity Market Analysis, August 2018
By Marvell PR Team
Marvell shared its mission and focus on driving the core technology to enable the global network infrastructure at its recent investor day. This was followed up with an appearance at Nasdaq, where Matt Murphy, president and CEO of the company, rang the bell to open the stock exchange.
At both of these events in New York City, Marvell shared how far the company has come, where it was going, and reaffirmed its mission: To provide semiconductor solutions that process, move, store and secure the world’s data faster and more reliably than anyone else.
The world has become more connected and intelligent than ever, and the global network has also evolved at an astonishing rate. It’s imperative that the semiconductor industry advances even quicker to keep up with these new technology trends and stay relevant. Marvell recognizes that its customers, at the core or on the edge, face the daunting challenge of delivering solutions for this ever-changing world – today.
With both the breadth and depth of technology expertise, Marvell offers the critical technology elements — storage, Ethernet, Arm® processors, security processors and wireless connectivity — to drive innovation in the industry. With the Cavium acquisition, the company retains its strong and stable foothold while competing more aggressively and innovating faster to serve customers better.
For Marvell, the future isn’t a distant challenge: it is here with us now, evolving at an accelerated pace. Marvell is enabling new technologies such as 5G, delivering disruptive new flash platform solutions for the data center, revolutionizing the in-car network, and developing new compute architectures for artificial intelligence, to name a few.
Bringing the most complete infrastructure portfolio of any semiconductor company, Marvell is more than ready to continue its amazing journey, with its customers and partners alongside it on the cutting edge: today, tomorrow and beyond.
By Todd Owens, Field Marketing Director, Marvell
Converging network and storage I/O onto a single wire can drive significant cost reductions in the small to mid-size data center by reducing the number of connections required. Fewer adapter ports means fewer cables, optics and switch ports consumed, all of which reduce OPEX in the data center. Customers can take advantage of converged I/O by deploying Converged Network Adapters (CNA) that provide not only networking connectivity, but also provide storage offloads for iSCSI and FCoE as well.
Just recently, HPE has introduced two new CNAs based on Marvell® FastLinQ® 41000 Series technology. The HPE StoreFabric CN1200R 10GBASE-T Converged Network Adapter and HPE StoreFabric CN1300R 10/25Gb Converged Network Adapter are the latest additions in HPE’s CNA portfolio. These are the only HPE StoreFabric CNAs to also support Remote Direct Memory Access (RDMA) technology (concurrently with storage offloads).
As we all know, the amount of data being generated continues to increase and that data needs to be stored somewhere. Recently, we are seeing an increase in the number of iSCSI connected storage devices in mid-market, branch and campus environments. iSCSI is great for these environments because it is easy to deploy, it can run on standard Ethernet, and there are a variety of new iSCSI storage offerings available, like Nimble and MSA all flash storage arrays (AFAs).
One challenge with iSCSI is the load it puts on the server CPU for storage traffic processing when using software initiators - a common approach to storage connectivity. To combat this, storage administrators can turn to CNAs with full iSCSI protocol offload, which transfers the burden of processing the storage I/O from the CPU to the adapter.
Figure 1: Benefits of Adapter Offloads
As Figure 1 shows, Marvell-driven testing indicates that hardware offload in FastLinQ 10/25GbE adapters can reduce CPU utilization by as much as 50% compared to an Ethernet NIC using software initiators. This means less burden on the CPU, allowing you to add more virtual machines per server and potentially reducing the number of physical servers required. A small item like an intelligent I/O adapter from Marvell can provide significant TCO savings.
Another challenge has been the latency associated with Ethernet connectivity. This can now be addressed with RDMA technology. iWARP, RDMA over Converged Ethernet (RoCE) and iSCSI Extensions for RDMA (iSER) all allow I/O transactions to be performed directly between the adapter and host memory, bypassing the O/S kernel. This speeds transactions and reduces overall I/O latency. The result is better performance and faster applications.
The new HPE StoreFabric CNAs are ideal devices for converging network and iSCSI storage traffic for HPE ProLiant and Apollo customers. The HPE StoreFabric CN1300R 10/25GbE CNA provides ample bandwidth that can be allocated between network and storage traffic. In addition, with support for Universal RDMA (both iWARP and RoCE) as well as iSER, this adapter provides significantly lower latency than standard network adapters for both network and storage traffic.
The HPE StoreFabric 1300R also supports a technology Marvell calls SmartAN™, which allows the adapter to automatically configure itself when transitioning between 10GbE and 25GbE networks. This is key because at 25GbE speeds, Forward Error Correction (FEC) can be required, depending on the cabling used. To make things more complex, there are two different types of FEC that can be implemented. To eliminate all the complexity, SmartAN automatically configures the adapter to match the FEC, cabling and switch settings for either 10GbE or 25GbE connections, with no user intervention required.
When budget is the key concern, the HPE StoreFabric CN1200R is the perfect choice. Supporting 10GBASE-T connectivity, this adapter connects to existing CAT6A copper cabling using RJ-45 connections. This eliminates the need for more expensive DAC cables or optical transceivers. The StoreFabric CN1200R also supports RDMA protocols (iWARP, RoCE and iSER) for lower overall latency.
Why converge though? It’s all about the tradeoff between cost and performance. If we do the math to compare the cost of deploying separate LAN and storage networks versus a converged network, we can see that converging I/O greatly reduces the complexity of the infrastructure and can cut acquisition costs in half. There are additional long-term cost savings associated with managing one network instead of two.
Figure 2: Eight Server Network Infrastructure Comparison
In this pricing scenario, we are looking at eight servers connecting to separate LAN and SAN environments versus connecting to a single converged environment, as shown in Figure 2.
Table 1: LAN/SAN versus Converged Infrastructure Price Comparison
The converged environment price is 55% lower than the separate network approach. The downside is the potential storage performance impact of moving from a Fibre Channel SAN in the separate network environment to a converged iSCSI environment. The iSCSI performance can be increased by implementing a lossless Ethernet environment using Data Center Bridging and Priority Flow Control along with RoCE RDMA. This does add significant networking complexity but will improve the iSCSI performance by reducing the number of interrupts for storage traffic.
One additional scenario for these new adapters is in Hyper-Converged Infrastructure (HCI) implementations. With HCI, software-defined storage is used. This means storage within the servers is shared across the network. Common implementations include Windows Storage Spaces Direct (S2D) and VMware vSAN Ready Node deployments. Both the HPE StoreFabric CN1200R BASE-T and CN1300R 10/25GbE CNAs are certified for use in either of these HCI implementations.
Figure 3: FastLinQ Technology Certified for Microsoft WSSD and VMware vSAN Ready Node
In summary, the new CNAs from the HPE StoreFabric group provide high performance, low cost connectivity for converged environments. With support for 10GbE and 25GbE bandwidths, iWARP and RoCE RDMA, and the ability to automatically negotiate changes between 10GbE and 25GbE connections with SmartAN™ technology, these are ideal I/O connectivity options for small to mid-size server and storage networks. To get the most out of your server investments, choose Marvell FastLinQ Ethernet I/O technology, which is engineered from the start with performance, total cost of ownership, flexibility and scalability in mind.
For more information on converged networking, contact one of our HPE experts in the field to talk through your requirements. Just use the HPE Contact Information link on our HPE Microsite at www.marvell.com/hpe.
By Maen Suleiman, Senior Software Product Line Manager, Marvell and Gorka Garcia, Senior Lead Engineer, Marvell Semiconductor, Inc.
Thanks to the respective merits of its ARMADA® and OCTEON TX® multi-core processor offerings, Marvell is in a prime position to address a broad spectrum of demanding applications situated at the edge of the network. These applications can serve a multitude of markets that include small business, industrial and enterprise, and will require special technologies like efficient packet processing, machine learning and connectivity to the cloud. As part of its collaboration with Amazon Web Services® (AWS), Marvell will be illustrating the capabilities of edge computing applications through an exciting new demo that will be shown to attendees at Arm TechCon - which is being held at the San Jose Convention Center, October 16th-18th.
This demo takes the form of an automated parking lot. An ARMADA processor-based Marvell MACCHIATObin® community board, which integrates the AWS Greengrass® software, is used to serve as an edge compute node. The Marvell edge compute node receives video streams from two cameras that are placed at the entry gate and exit of the parking lot. The ARMADA processor-based compute node runs AWS Greengrass Core; executes two Lambda functions to process the incoming video streams and identify the vehicles entering the garage through their license plates; and subsequently checks whether the vehicles are authorized or unauthorized to enter the parking lot.
The first Lambda function runs Automatic License Plate Recognition (OpenALPR) software; it obtains the license plate number and delivers it, together with the gate ID (Entry/Exit), to a Lambda function running in the AWS® cloud that accesses a DynamoDB® database. The cloud Lambda function is responsible for reading the DynamoDB whitelist database and determining whether the license plate belongs to an authorized car. This information is sent back to a second Lambda function at the edge of the network, on the MACCHIATObin board, which is responsible for managing the parking lot capacity and opening or closing the gate. This second Lambda function also logs activity at the edge to the AWS Cloud Elasticsearch® service, which works as a backend for Kibana®, an open source data visualization engine. Kibana enables a remote operator to have direct access to information concerning parking lot occupancy, entry gate status and exit gate status. Furthermore, the AWS Cognito service authenticates users for access to Kibana.
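For illustration only, here is a minimal Python sketch of what the cloud-side whitelist check could look like using AWS Lambda and boto3. The table name, key name and event fields are hypothetical; this is not the actual demo code.

```python
import boto3

# Hypothetical whitelist table keyed on the license plate string.
TABLE_NAME = "ParkingWhitelist"
dynamodb = boto3.resource("dynamodb")
whitelist = dynamodb.Table(TABLE_NAME)


def lambda_handler(event, context):
    """Cloud Lambda: receives {"plate": "...", "gate_id": "Entry"|"Exit"}
    from the edge Lambda and returns an allow/deny verdict."""
    plate = event["plate"]
    gate_id = event.get("gate_id", "Entry")

    # Look up the plate in the DynamoDB whitelist.
    response = whitelist.get_item(Key={"plate": plate})
    allowed = "Item" in response

    # The verdict goes back to the second Lambda function on the
    # MACCHIATObin board, which drives the ESPRESSObin gate controller.
    return {
        "plate": plate,
        "gate_id": gate_id,
        "verdict": "allowed" if allowed else "denied",
    }
```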
After the AWS cloud Lambda function sends the verdict (allowed/denied) to the second Lambda function running on the MACCHIATObin board, that Lambda function is responsible for communicating with the gate controller, which is built around a Marvell ESPRESSObin® board and is used to open or close the gate as required.
The ESPRESSObin board runs as an AWS Greengrass IoT device that will be responsible for opening the gate according to the information received from the MACCHIATObin board’s second Lambda function.
This demo showcases the capability to run a machine learning algorithm using AWS Lambda at the edge, making the identification process extremely fast. This is made possible by the high performance, low-power Marvell OCTEON TX and ARMADA multi-core processors. The capabilities of Marvell infrastructure processors have the potential to cover a range of higher-end networking and security applications that can benefit from the maturity of the Arm® ecosystem and the ability to run machine learning in a multi-core environment at the edge of the network.
Those visiting the Arm Infrastructure Pavilion (Booth# 216) at Arm TechCon (San Jose Convention Center, October 16th-18th) will be able to see the Marvell Edge Computing demo powered by AWS Greengrass.
For information on how to enable AWS Greengrass on Marvell MACCHIATObin and Marvell ESPRESSObin community boards, please visit http://wiki.macchiatobin.net/tiki-index.php?page=AWS+Greengrass+on+MACCHIATObin and http://wiki.espressobin.net/tiki-index.php?page=AWS+Greengrass+on+ESPRESSObin.
By Todd Owens, Field Marketing Director, Marvell
Marvell’s acquisition of Cavium closed on July 6th, 2018 and the integration is well under way. Cavium becomes a wholly-owned subsidiary of Marvell. Our combined mission as Marvell is to develop and deliver semiconductor solutions that process, move, store and secure the world’s data faster and more reliably than anyone else. The combination of the two companies makes for an infrastructure powerhouse, serving a variety of customers in the Cloud/Data Center, Enterprise/Campus, Service Providers, SMB/SOHO, Industrial and Automotive industries.
For our business with HPE, the first thing you need to know is that it is business as usual. The folks you engaged with on the I/O and processor technology we provided to HPE before the acquisition are the same people you engage with now. Marvell is a leading provider of storage technologies, including ultra-fast read channels, high performance processors and transceivers, that are found in the vast majority of hard disk drive (HDD) and solid-state drive (SSD) modules used in HPE ProLiant and HPE Storage products today.
Our industry leading QLogic® 8/16/32Gb Fibre Channel and FastLinQ® 10/20/25/50Gb Ethernet I/O technology will continue to provide connectivity for HPE Server and Storage solutions. The focus for these products will continue to be the intelligent I/O of choice for HPE, with the performance, flexibility, and reliability we are known for.
Marvell’s Portfolio of FastLinQ Ethernet and QLogic Fibre Channel I/O Adapters
We will continue to provide ThunderX2® Arm® processor technology for HPC servers like the HPE Apollo 70 for high-performance compute applications. We will also continue to provide Ethernet networking technology that is embedded into HPE Servers and Storage today and Marvell ASIC technology used for the iLO5 baseboard management controller (BMC) in all HPE ProLiant and HPE Synergy Gen10 servers.
iLO 5 for HPE ProLiant Gen10 is deployed on Marvell SoCs
That sounds great, but what’s going to change over time?
The combined company now has a much broader portfolio of technology to help HPE deliver best-in-class solutions at the edge, in the network and in the data center.
Marvell has industry-leading switching technology from 1GbE to 100GbE and beyond. This enables us to deliver connectivity from the IoT edge, to the data center and the cloud. Our Intelligent NIC technology provides compression, encryption and more to enable customers to analyze network traffic faster and more intelligently than ever before. Our security solutions and enhanced SoC and Processor capabilities will help our HPE design-in team collaborate with HPE to innovate next-generation server and storage solutions.
Down the road, you’ll also see a shift in our branding and where you access information. While our product-specific brands, like ThunderX2 for Arm processors, QLogic for Fibre Channel and FastLinQ for Ethernet, will remain, many things will transition from Cavium to Marvell. Our web-based resources will start to change, as will our email addresses. For example, you can now access our HPE Microsite at www.marvell.com/hpe. Soon, you’ll be able to contact us at hpesolutions@marvell.com as well. The collateral you leverage today will be updated over time. In fact, this has already started with updates to our HPE-specific Line Card, our HPE Ethernet Quick Reference Guide, our Fibre Channel Quick Reference Guides and our presentation materials. Updates will continue over the next few months.
In summary, we are bigger and better. We are one team that is more focused than ever to help HPE, their partners and customers thrive with world-class technology we can bring to bear. If you want to learn more, engage with us today. Our field contact details are here. We are all excited for this new beginning to make “I/O and Infrastructure Matter!” each and every day.
By Marvell PR Team
Shared storage performance has significant impact on overall system performance. That’s why system administrators try to understand its performance and plan accordingly. Shared storage subsystems have three components: storage system software (host), storage network (switches and HBAs) and the storage array.
Storage performance can be measured at all three levels and aggregated to get to the subsystem performance. This can get quite complicated. Fortunately, storage performance can effectively be represented using two simple metrics: Input/Output operations per Second (IOPS) and Latency. Knowing these two values for a target workload, a user can optimize the performance of a storage system.
Let’s look at what these key factors are and how to use them to optimize storage performance.
What is IOPS?
IOPS is a standard unit of measurement for the maximum number of reads and writes to a storage device in a given unit of time (typically one second). IOPS represents the number of transactions that can be performed, not the number of bytes transferred. To calculate throughput, multiply the IOPS number by the block size used in the I/O. IOPS is a neutral measure of performance and can be used in a benchmark where two systems are compared using the same block sizes and read/write mix.
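As a quick worked example of the relationship described above (throughput = IOPS x block size), here is a short Python sketch. The IOPS figure and block sizes are illustrative, not benchmark results.

```python
def throughput_mbps(iops: int, block_size_bytes: int) -> float:
    """Convert an IOPS figure and I/O block size into throughput in MB/s."""
    return iops * block_size_bytes / 1_000_000


# Illustrative example: the same 100,000 IOPS means very different
# throughput depending on the block size used in the benchmark.
for block_size in (4096, 65536):  # 4 KB and 64 KB blocks
    print(f"{block_size} B blocks: {throughput_mbps(100_000, block_size):,.0f} MB/s")
```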
What is Latency?
Latency is the total time for completing a requested operation and the requestor receiving a response. Latency includes the time spent in all subsystems, and is a good indicator of congestion in the system.
Find more about Marvell’s QLogic Fibre Channel adapter technology at:
https://www.marvell.com/fibre-channel-adapters-and-controllers/qlogic-fibre-channel-adapters/
By Sree Durbha, Head of Smart-Connected Business, Marvell
The consumer drone market has expanded greatly over the last few years, with almost 3 million units shipped during 2017, and this upward trend is likely to continue. Analyst firm Statista forecasts that the commercial drone business will be worth $6.4 billion annually by 2020, while Global Market Insights has predicted that the worldwide drone market will grow to $17 billion (with the consumer category accounting for $9 billion of that). With new products continually being introduced into what is already an acutely overcrowded marketplace, a differentiated offering is critical to a successful product.
One of the newest and most exciting entrants into this crowded drone market, Tello, features functionality that sets it apart from rival offerings. Tello is manufactured by Shenzhen-based start-up Ryze Tech, a subsidiary of well-known brand DJI, which is the world’s largest producer of drones and unmanned aerial vehicles (UAVs). With a 13 minute runtime, plus a flight distance of up to 100 meters, this is an extremely maneuverable and compact quadcopter drone. It weighs just 80 grams and can fit into the palm of a typical teenager’s hand (with dimensions of 98 x 92.5 x 41 millimeters). The two main goals of the Tello are fun and education. To that end, a smartphone App-based control provides a fun user interface for everyone, including young people, to play with. The educational goal is met through an easy to program visual layout that allows users to write their own code using the comprehensive software development kit (SDK) included in the package. What really distinguishes Tello from other drones, however, is the breadth of its imaging capabilities - and this is where engaging with Marvell has proven pivotal.
Tello’s original design requirement called for livestreaming 720p MP4 video from its 5 Megapixel image sensor back to the user’s smartphone or tablet, even while traveling at its maximum speed of 8 meters/second. This called for interoperability testing with a broad array of smartphone and tablet models. Because of the drone’s small size, conserving battery life was a key requirement, which meant the Wi-Fi® subsystem had to offer ultra-low power consumption. Underlying all of this was the singular requirement for a strong wireless connection to be maintained at all times. Finally, as is always the case, Wi-Fi would need to fit within the product’s low bill of materials.
Initial discussions between the technical teams at Ryze and Marvell revealed a perfect match between the features offered by the Marvell® 1x1 802.11n single-band Wi-Fi system-on-chip (SoC) and the Wi-Fi requirements for the Tello drone project. This chip was already widely adopted in the market and had established itself as a proven solution for various customer applications, including video transmission in IP cameras, mobile routers, IoT gateways and more. Ryze chose this chipset for its reliability in transmitting high-definition video over the air and its exceptional RF performance over range, while offering ultra-low power operation, all at a competitive price point.
Marvell’s Wi-Fi SoC is a highly integrated, single-band (2.4GHz) IC that delivers IEEE® 802.11b/g/n operation in a single spatial stream (1 SS) configuration. It incorporates a power amplifier (PA), a low noise amplifier (LNA) and a transmit/receive switch. Quality of Service (QoS) is supported through the 802.11e standard implementation. The Wi-Fi SoC’s compliance with the 802.11i security protocol, plus built-in wired equivalent privacy (WEP) algorithms, enables 128-bit encryption of transmitted data, thereby protecting it from being intercepted by third parties. All of these hardware features are supported by Marvell’s robust Wi-Fi software, which includes small-footprint yet full-featured Wi-Fi firmware tied in with the hardware-level features. Specific features, such as infrastructure mode operation, were developed to enable the functionality desired by Ryze for the Tello.
Marvell’s industry-leading Wi-Fi technology has enabled an exciting new user experience in the Tello, at a level of sophistication that previously would only have been seen in expensive, professional-grade equipment. Bringing this professional-quality experience to an entry-level drone meant overcoming significant power, performance and cost barriers. As we enter the 802.11ax era of Wi-Fi industry transition, expect Marvell to launch first-to-market, ever more envelope-pushing technological advances such as uplink OFDMA.
By Ran Gu, Marketing Director of Switching Product Line, Marvell
Due to ongoing technological progression and underlying market dynamics, Gigabit Ethernet (GbE) technology with 10 Gigabit uplink speeds is starting to proliferate into the networking infrastructure across a multitude of applications where elevated levels of connectivity are needed: SMB switches, industrial switching hardware, SOHO routers, enterprise gateways and uCPEs, to name a few. The new Marvell® Link Street™ 88E6393X, which combines a broad array of functionality with scalability and cost-effectiveness, provides a compelling switch IC solution with the scope to serve multiple industry sectors.
The 88E6393X switch IC incorporates both 1000BASE-T PHY and 10 Gbps fiber port capabilities, while requiring only 60% of the power budget necessitated by competing solutions. Despite its compact package, this new switch IC offers 8 triple speed (10/100/1000) Ethernet ports, plus 3 XFI/SFI ports, and has a built-in 200 MHz microprocessor. Its SFI support means that the switch can connect to a fiber module without the need to include an external PHY - thereby saving space and bill-of-materials (BoM) costs, as well as simplifying the design. It complies with the IEEE 802.1BR port extension standard and can also play a pivotal role in lowering the management overhead and keeping operational expenditures (OPEX) in check. In addition, it includes L3 routing support for IP forwarding purposes. Adherence to the latest time sensitive networking (TSN) protocols (such as 802.1AS, 802.1Qat, 802.1Qav and 802.1Qbv) enables delivery of the low latency operation mandated by industrial networks. The 256 entry ternary content-addressable memory (TCAM) allows for real-time, deep packet inspection (DPI) and policing of the data content being transported over the network (with access control and policy control lists being referenced). The denial of service (DoS) prevention mechanism is able to detect illegal packets and mitigate the security threat of DoS attacks.
The 88E6393X device, working in conjunction with a high performance ARMADA® network processing system-on-chip (SoC), can offload some of the packet processing activities so that the CPU’s bandwidth can be better focused on higher level activities. Data integrity is upheld, thanks to the quality of service (QoS) support across 8 traffic classes. In addition, the switch IC presents a scalable solution. The 10 Gbps interfaces provide non-blocking uplink to make it possible to cascade several units together, thus creating higher port count switches (16, 24, etc.).
This new product release features a combination of small footprint, lower power consumption, extensive security and inherent flexibility to bring a highly effective switch IC solution for the SMB, enterprise, industrial and uCPE space.
By Todd Owens, Field Marketing Director, Marvell
Are you considering deploying HPE Cloudline servers in your hyper-scale environment? If you are, be aware that HPE now offers select Cavium™ FastLinQ® 10GbE and 10/25GbE Adapters as options for HPE Cloudline CL2100, CL2200 and CL3150 Gen10 Servers. The adapters supported on the HPE Cloudline servers are shown in Table 1 below.
Table 1: Cavium FastLinQ 10GbE and 10/25GbE Adapters for HPE Cloudline Servers
As today’s hyper-scale environments grow, the Ethernet I/O needs go well beyond basic L2 NIC connectivity. Faster processors, increases in scale, high performance NVMe and SSD storage, and the need for better performance and lower latency have started to shift some of the performance bottlenecks from servers and storage to the network itself. That means architects of these environments need to rethink their connectivity options.
While HPE already has some good I/O offerings for Cloudline from other vendors, having Cavium FastLinQ adapters in the portfolio increases the I/O features and capabilities available. Advanced features from Cavium, like Universal RDMA, SmartAN™, DPDK, NPAR and SR-IOV, allow architects to design more flexible and scalable hyper-scale environments.
Cavium’s advanced feature set provides offload technologies that shift the burden of managing the I/O from the O/S and CPU to the adapter itself. Some of the benefits of offloading I/O tasks include:
To deliver these benefits, customers can take advantage of some or all the advanced features in the Cavium FastLinQ Ethernet adapters for HPE Cloudline. Here’s a list of some of the technologies available in these adapters.
* Source: Demartek findings
Table 2: Advanced Features in Cavium FastLinQ Adapters for HPE Cloudline
Network Partitioning (NPAR) virtualizes the physical port into eight virtual functions on the PCIe bus. This makes a dual port adapter appear to the host O/S as if it were eight individual NICs. Furthermore, the bandwidth of each virtual function can be fine-tuned in increments of 500Mbps, providing full Quality of Service on each connection. SR-IOV is an additional virtualization offload these adapters support that moves management of VM to VM traffic from the host hypervisor to the adapter. This frees up CPU resources and reduces VM to VM latency.
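As a rough illustration of the partitioning constraints described above (per-partition bandwidth assigned in 500Mbps increments, with the total not exceeding the physical port speed), here is a minimal Python sketch that checks a hypothetical NPAR bandwidth plan. The partition names and the exact allocation rules are assumptions made for this example; consult the adapter documentation for the authoritative configuration model.

```python
def validate_npar_plan(port_speed_mbps: int, partitions_mbps: dict) -> list:
    """Check a per-port NPAR bandwidth plan against the constraints
    described above: each partition sized in 500 Mbps increments and
    the total not exceeding the physical port speed. Returns a list of
    problems found (empty if the plan looks consistent)."""
    problems = []
    for name, bw in partitions_mbps.items():
        if bw % 500 != 0:
            problems.append(f"{name}: {bw} Mbps is not a 500 Mbps increment")
    total = sum(partitions_mbps.values())
    if total > port_speed_mbps:
        problems.append(f"total {total} Mbps exceeds port speed {port_speed_mbps} Mbps")
    return problems


# Hypothetical plan for one 10GbE port split across four partitions.
plan = {"mgmt": 500, "vm_traffic": 4000, "iscsi": 4000, "live_migration": 1500}
print(validate_npar_plan(10_000, plan) or "plan OK")
```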
Remote Direct Memory Access (RDMA) is an offload that routes I/O traffic directly from the adapter to the host memory. This bypasses the O/S kernel and can improve performance by reducing latency. The Cavium adapters support what is called Universal RDMA, which is the ability to support both RoCEv2 and iWARP protocols concurrently. This provides network administrators more flexibility and choice for low latency solutions built with HPE Cloudline servers.
SmartAN is a Cavium technology available on the 10/25GbE adapters that addresses issues related to bandwidth matching and the need for Forward Error Correction (FEC) when switching between 10GbE and 25GbE connections. For 25GbE connections, either Reed Solomon FEC (RS-FEC) or Fire Code FEC (FC-FEC) may be required to correct the bit errors that occur at higher bandwidths, depending on the cabling used. For the details behind SmartAN technology, refer to the Marvell technology brief here.
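To give a feel for why this automation matters, here is a simplified, purely illustrative Python sketch of the kind of cable-class-to-FEC matching involved at 25GbE, loosely based on the IEEE 802.3by copper cable classes. The mapping is deliberately simplified and is not a description of SmartAN's internal logic; the switch and adapter documentation remain the authoritative reference.

```python
def fec_for_25g_dac(cable_class: str) -> str:
    """Rough illustration of FEC selection for 25GbE copper cabling.

    IEEE 802.3by groups 25G DAC cables into classes with different FEC
    needs (simplified here). SmartAN's value is doing this matching, plus
    the 10GbE/25GbE speed selection, automatically with no user input.
    """
    rules = {
        "CA-25G-N": "no FEC required",        # short, low-loss cables
        "CA-25G-S": "FC-FEC (Fire Code)",     # mid-loss cables
        "CA-25G-L": "RS-FEC (Reed Solomon)",  # longer, higher-loss cables
    }
    return rules.get(cable_class, "unknown cable class")


print(fec_for_25g_dac("CA-25G-L"))  # -> RS-FEC (Reed Solomon)
```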
Support for Data Plane Development Kit (DPDK) offloads accelerates the processing of small-packet I/O transmissions. This is especially important for applications in Telco NFV and high-frequency trading environments.
For simplified management, Cavium provides a suite of utilities that allow configuration and monitoring of the adapters across all the popular O/S environments, including Microsoft Windows Server, VMware and Linux. Cavium’s unified management suite includes the QCC GUI, CLI and vCenter plugins, as well as PowerShell cmdlets for scripting configuration commands across multiple servers. Cavium’s unified management utilities can be downloaded from www.cavium.com.
Each of the Cavium adapters shown in Table 1 supports all of the capabilities noted above and is available in stand-up PCIe or OCP 2.0 form factors for use in HPE Cloudline Gen10 Servers. One question you may have is how these adapters compare to other offerings for Cloudline and to those offered in HPE ProLiant servers. For that, we can look at the comparison chart in Table 3.
Table 3: Comparison of I/O Features by Ethernet Supplier
Given that Cloudline is targeted for hyper-scale service provider customers with large and complex networks, the Cavium FastLinQ Ethernet adapters for HPE Cloudline offer administrators much more capability and flexibility than other I/O offerings. If you are considering HPE Cloudline servers, then you should also consider Cavium FastLinQ as your I/O of choice.
By Maen Suleiman, Senior Software Product Line Manager, Marvell
Marvell ARMADA® embedded processors are part of another exciting networking solution for a crowdfunding project and are helping “power” the global open hardware and software engineering community as innovative new products are developed. CZ.NIC, an open source networking research team based in the Czech Republic, just placed its Turris MOX modular networking appliance on the Indiegogo® platform and has already obtained over $110,000 in financial backing.
MOX has a highly flexible modular arrangement. Central to this is a network processing module featuring a Marvell® ARMADA 3720 network processing system-on-chip (SoC). This powerful yet energy efficient 64-bit device includes dual Cortex®-A53 ARM® processor cores and an extensive array of high speed IOs (PCIe 2.0, 2.5 GbE, USB 3.0, etc.).
Figure 1: The MOX Solution from CZ.NIC
The MOX concept is simple to understand. Rather than having to procure a router with excessive features and resources that all add to the cost but actually prove to be superfluous, users can just buy a single MOX that can subsequently be extended into whatever form of network appliance a user needs. Attachment of additional modules means that specific functionality can be provided to meet exact user expectations. There is an Ethernet module that adds 4 GbE ports, a fiber module that adds fiber optic SFP connectivity, and an extension module that adds a mini PCIe connection. At a later stage, if requirements change, it is possible for that same MOX to be repurposed into a completely different appliance by adding appropriate modules.
Figure 2: The MOX Add-On Modules - Base, Extension, Ethernet and SFP
The MOX units run on Turris OS, an open source operating system built on top of the extremely popular OpenWrt® embedded Linux® distribution (as supported by Marvell’s ARMADA processors). This gives the appliance a great deal of flexibility, allowing it to execute a wide variety of different networking functions that enable it to operate as an email server, web server, firewall, etc. Additional MOX modules are already under development and will be available soon.
This project follows on from CZ.NIC’s previous crowdfunding campaign, which used Marvell’s ARMADA SoC processing capabilities for the Turris Omnia high performance open source router - a campaign that gained huge public interest and raised nine times its original investment target. Turris MOX underlines the validity of the open source software ecosystem that has been built up around the ARMADA SoC to help customers bring their ideas to life.
Click here to learn more on this truly unique Indiegogo campaign.
By Tal Mizrahi, Feature Definition Architect, Marvell
There have, in recent years, been fundamental changes to the way in which networks are implemented, as data demands have necessitated a wider breadth of functionality and elevated degrees of operational performance. Accompanying all this is a greater need for accurate measurement of such performance benchmarks in real time, plus in-depth analysis in order to identify and subsequently resolve any underlying issues before they escalate.
The rapidly accelerating speeds and rising levels of complexity that are being exhibited by today’s data networks mean that monitoring activities of this kind are becoming increasingly difficult to execute. Consequently more sophisticated and inherently flexible telemetry mechanisms are now being mandated, particularly for data center and enterprise networks.
A broad spectrum of options is available when looking to extract telemetry material, whether that be passive monitoring, active measurement, or a hybrid approach. An increasingly common practice is to piggy-back telemetry information onto the data packets passing through the network. This tactic is utilized within both in-situ OAM (IOAM) and in-band network telemetry (INT), as well as in an alternate marking performance measurement (AM-PM) context.
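To give a flavor of the alternate marking idea mentioned above (standardized as the Alternate-Marking Method in RFC 8321), here is a small, purely illustrative Python simulation; it is not Marvell's implementation. Packets are colored in alternating blocks, and comparing per-color counters at an upstream and a downstream measurement point reveals how many packets were lost.

```python
import random

BLOCK_SIZE = 1000  # packets per marking period


def color_of(seq: int) -> int:
    """Alternate the marking bit every BLOCK_SIZE packets."""
    return (seq // BLOCK_SIZE) % 2


def measure_loss(num_packets: int, loss_prob: float) -> None:
    sent = {0: 0, 1: 0}      # per-color counters at the upstream point
    received = {0: 0, 1: 0}  # per-color counters at the downstream point
    for seq in range(num_packets):
        c = color_of(seq)
        sent[c] += 1
        if random.random() > loss_prob:  # simulate random loss on the path
            received[c] += 1
    for c in (0, 1):
        print(f"color {c}: sent {sent[c]}, received {received[c]}, "
              f"lost {sent[c] - received[c]}")


measure_loss(num_packets=4000, loss_prob=0.01)
```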
At Marvell, our approach is to provide a diverse and versatile toolset through which a wide variety of telemetry approaches can be implemented, rather than being confined to a specific measurement protocol. To learn more about this subject, including longstanding passive and active measurement protocols, and the latest hybrid-based telemetry methodologies, please view the video below and download our white paper.
White Paper: Network Telemetry Solutions for Data Center and Enterprise Networks
By Todd Owens, Field Marketing Director, Marvell
At Cavium, we provide adapters that support a variety of protocols for connecting servers to shared storage, including iSCSI, Fibre Channel over Ethernet (FCoE) and native Fibre Channel (FC). One of the questions we get quite often is: which protocol is best for connecting servers to shared storage? The answer is, it depends.
We can simplify the answer by eliminating FCoE, as it has proven to be a great protocol for converging the edge of the network (server to top-of-rack switch), but not really effective for multi-hop connectivity, taking servers through a network to shared storage targets. That leaves us with iSCSI and FC.
Typically, people equate iSCSI with lower cost and ease of deployment because it works on the same kind of Ethernet network that servers and clients are already running on. These same folks equate FC as expensive and complex, requiring special adapters, switches and a “SAN Administrator” to make it all work.
This may have been the case in the early days of shared storage, but things have changed as the intelligence and performance of the storage network environment has evolved. What customers need to do is look at the reality of what they need from a shared storage environment and make a decision based on cost, performance and manageability. For this blog, I’m going to focus on these three criteria and compare 10Gb Ethernet (10GbE) with iSCSI hardware offload and 16Gb Fibre Channel (16GFC).
Before we crunch numbers, let me start by saying that shared storage requires a dedicated network, regardless of the protocol. The idea that iSCSI can be run on the same network as the server and client network traffic may be feasible for small or medium environments with just a couple of servers, but for any environment with mission-critical applications or with say four or more servers connecting to a shared storage device, a dedicated storage network is strongly advised to increase reliability and eliminate performance problems related to network issues.
Now that we have that out of the way, let’s start by looking at the cost difference between iSCSI and FC. We have to take into account the costs of the adapters, optics/cables and switch infrastructure. Here’s the list of Hewlett Packard Enterprise (HPE) components I will use in the analysis. All prices are based on published HPE list prices.
Notes:
1. An optical transceiver is needed at both the adapter and switch ports for 10GbE networks, so the cost per port is twice the transceiver cost.
2. FC switch pricing includes full-featured management software and licenses.
3. FC Host Bus Adapters (HBAs) ship with transceivers, so only one additional transceiver is needed for the switch port.
So if we do the math, the cost per port looks like this:
10GbE iSCSI with SFP+ optics = $437 + $2,734 + $300 = $3,471
10GbE iSCSI with 3 meter Direct Attach Cable (DAC) = $437 + $269 + $300 = $1,006
16GFC with SFP+ optics = $773 + $405 + $1,400 = $2,578
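Re-deriving the per-port figures above in a short Python sketch (using the HPE list prices quoted in this post) makes it easy to rerun the comparison with different component prices:

```python
# Per-port component prices (US$) as quoted above: adapter, connection, switch port.
options = {
    "10GbE iSCSI + SFP+ optics": (437, 2734, 300),
    "10GbE iSCSI + 3m DAC":      (437, 269, 300),
    "16GFC + SFP+ optics":       (773, 405, 1400),
}

for name, (adapter, connection, switch_port) in options.items():
    total = adapter + connection + switch_port
    print(f"{name}: ${total:,} per port")
```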
So iSCSI is the lowest price if DAC cables are used. Note, in my example, I chose 3 meter cable length, but even if you choose shorter or longer cables (HPE supports from 0.65 to 7 meter cable lengths), this is still the lowest cost connection option. Surprisingly, the cost of the 10GbE optics makes the iSCSI solution with optical connections the most expensive configuration. When using fiber optic cables, the 16GFC configuration is lower cost.
So what are the trade-offs with DAC versus SFP+ options? It really comes down to distance and the number of connections required. DAC cables can only span up to about 7 meters, which gives customers only limited reach within or across racks. If customers have multiple racks or distance requirements beyond 7 meters, FC becomes the more attractive option from a cost perspective. Also, DAC cables are bulky, and when cabling 10 or more ports the bundles can become unwieldy to deal with.
On the performance side, let’s look at the differences. iSCSI adapters have impressive specifications of 10Gbps bandwidth and 1.5 million IOPS, which offer very good performance. For FC, we have 16Gbps of bandwidth and 1.3 million IOPS. So FC has more bandwidth and iSCSI can deliver slightly more transactions - that is, if you take the specifications at face value. If you dig a little deeper, here are some things we learn:
Figure 1: Cavium’s iSCSI Hardware Offload IOPS Performance
Figure 2: Cavium’s QLogic 16Gb FC IOPS performance
If we look at manageability, this is where things have probably changed the most. Keep in mind, Ethernet network management hasn’t really changed much. Network administrators create virtual LANs (vLANs) to separate network traffic and reduce congestion. These network administrators have a variety of tools and processes that allow them to monitor network traffic, run diagnostics and make changes on the fly when congestion starts to impact application performance. The same management approach applies to the iSCSI network and can be done by the same network administrators.
On the FC side, companies like Cavium and HPE have made significant improvements on the software side to simplify SAN deployment, orchestration and management. Technologies like fabric-assigned port worldwide name (FA-WWN) from Cavium and Brocade enable the SAN administrator to configure the SAN without having HBAs available, and allow a failed server to be replaced without having to reconfigure the SAN fabric. Cavium and Brocade have also teamed up to improve FC SAN diagnostics capability in Gen 5 (16Gb) Fibre Channel fabrics by implementing features such as Brocade ClearLink™ diagnostics, Fibre Channel ping (FC ping), Fibre Channel traceroute (FC traceroute), link cable beacon (LCB) technology and more. HPE’s Smart SAN for HPE 3PAR gives the storage administrator the ability to zone the fabric and map servers and LUNs to an HPE 3PAR StoreServ array from the HPE 3PAR StoreServ management console.
Another way to look at manageability is the number of administrators on staff. In many enterprise environments there are typically dozens of network administrators, yet there may be fewer than a handful of “SAN” administrators. Yes, there are lots of LAN-connected devices that need to be managed and monitored, but it takes far fewer people to manage the SAN-connected devices. The point is that it doesn’t take an army to manage a SAN with today’s FC management software from vendors like Brocade.
So what is the right answer between FC and iSCSI? Well, it depends. If application performance is the biggest criteria, it’s hard to beat the combination of bandwidth, IOPS and latency of the 16GFC SAN. If compatibility and commonality with existing infrastructures is a critical requirement, 10GbE iSCSI is a good option (assuming the 10GbE infrastructure exists in the first place). If security is a key concern, FC is the best choice. When is the last time you heard of a FC network being hacked into? And if cost is the key criteria, iSCSI with DAC or 10GBASE-T connection is a good choice, understanding the tradeoff in latency and bandwidth performance.
So in very general terms, FC is the best choice for enterprise customers who need high performance, mission-critical capability, high reliability and scalable shared storage connectivity. For smaller customers who are more cost sensitive, iSCSI is a great alternative. iSCSI is also a good protocol for pre-configured systems like hyper-converged storage solutions, providing simple connectivity to existing infrastructure.
As a wise manager once told me many years ago, “If you start with the customer and work backwards, you can’t go wrong.” So the real answer is understand what the customer needs and design the best solution to meet those needs based on the information above.
By Maen Suleiman, Senior Software Product Line Manager, Marvell
As more workloads are moving to the edge of the network, Marvell continues to advance technology that will enable the communication industry to benefit from the huge potential that network function virtualization (NFV) holds. At this year’s Mobile World Congress (Barcelona, 26th Feb to 1st Mar 2018), Marvell, along with some of its key technology collaborators, will be demonstrating a universal CPE (uCPE) solution that will enable telecom operators, service providers and enterprises to deploy needed virtual network functions (VNFs) to support their customers’ demands.
The ARMADA® 8040 uCPE solution, one of several ARMADA edge computing solutions to be introduced to the market, will be located at the Arm booth (Hall 6, Stand 6E30) and will run the Telco Systems NFVTime uCPE operating system (OS) with two deployed off-the-shelf VNFs, provided by 6WIND and Trend Micro respectively, that enable virtual routing and security functionalities. The solution runs on a CyberTAN white box, which is designed to bring significant improvements in both cost effectiveness and system power efficiency compared to traditional offerings while maintaining the highest degrees of security.
CyberTAN white box solution incorporating the Marvell ARMADA 8040 SoC
The CyberTAN white box platform comprises several key Marvell technologies that together form an integrated solution designed to enable significant hardware cost savings. The platform incorporates the power-efficient Marvell® ARMADA 8040 system-on-chip (SoC), based on the Arm Cortex®-A72 quad-core processor with up to 2GHz CPU clock speed, and the Marvell E6390x Link Street® Ethernet switch on-board. The Marvell Ethernet switch supports a 10G uplink and 8 x 1GbE ports with integrated PHYs, four of which are auto-media GbE ports (combo ports).
The CyberTAN white box benefits from the Marvell ARMADA 8040 processor’s rich feature set and robust software ecosystem, including:
In addition, the uCPE platform supports Mini PCI Express (mPCIe) expansion slots that can enable Marvell advanced 11ac/11ax Wi-Fi or additional wired/wireless connectivity; up to 16GB of DDR4 DIMM; 2 x M.2 SATA, one SATA and eMMC options for storage; and SD and USB expansion slots for additional storage or other wired/wireless connectivity such as LTE.
At the Arm booth, Telco Systems will demonstrate its NFVTime uCPE operating system on the CyberTAN white box, with its zero-touch provisioning (ZTP) feature. NFVTime is an intuitive NFVi-OS that facilitates the entire process of deploying VNFs onto the uCPE, and avoids the complex and frustrating management and orchestration activities normally associated with putting NFV-based services into action. The demonstration will include two main VNFs:
Please contact your Marvell sales representative to arrange a meeting at Mobile World Congress or drop by the Arm booth (Hall 6, Stand 6E30) during the conference to see the uCPE solution in action.
By Todd Owens, Field Marketing Director, Marvell
Like a kid in a candy store, choose I/O wisely.
Remember, as a child, a quick stop at the convenience store: standing in front of the candy aisle, your parents saying, “Hurry and pick one.” But with so many choices, the decision was often confusing. With time running out, you’d usually just grab the name-brand candy you were familiar with. But what were you missing out on? Perhaps now you realize there were more delicious or healthier offerings you could have chosen.
I use this as an analogy for the choice of I/O technology in server configurations. There are lots of choices, and it takes time to understand all the differences. As a result, system architects in many cases just fall back to the legacy name-brand adapter they have become familiar with. Is this the best option for their client, though? Not always. Here are some reasons why.
Some of today’s Ethernet adapters provide added capabilities that I refer to as “Intelligent I/O”. These adapters utilize a variety of offload technology and other capabilities to take on tasks associated with I/O processing that are typically done in software by the CPU when using a basic “standard” Ethernet adapter. Intelligent offloads include things like SR-IOV, RDMA, iSCSI, FCoE or DPDK. Each of these offloads the work to the adapter and, in many cases, bypasses the O/S kernel, speeding up I/O transactions and increasing performance.
As servers become more powerful and get packed with more virtual machines, running more applications, CPU utilizations of 70-80% are now commonplace. By using adapters with intelligent offloads, CPU utilization for I/O transactions can be reduced significantly, giving server administrators more CPU headroom. This means more CPU resources for applications or to increase the VM density per server.
Another reason is to mitigate the performance impact of the Spectre and Meltdown fixes now required for x86 server processors. These side-channel vulnerabilities required kernel patches and CPU microcode updates, and those fixes can significantly reduce CPU performance. For example, Red Hat reported the impact could be as much as a 19% performance degradation. That’s a big performance hit.
Storage offloads and offloads like SR-IOV, RDMA and DPDK all bypass the O/S kernel. Because they bypass the kernel, the performance impacts of the Spectre and Meltdown fixes are bypassed as well. This means I/O transactions with intelligent I/O adapters are not impacted by these fixes, and I/O performance is maximized.
Finally, intelligent I/O can play a role in reducing cost and complexity and optimizing performance in virtual server environments. Some intelligent I/O adapters have port virtualization capabilities. Cavium Fibre Channel HBAs implement N-Port ID Virtualization, or NPIV, to allow a single Fibre Channel port to appear as multiple virtual Fibre Channel adapters to the hypervisor. For Cavium FastLinQ Ethernet Adapters, Network Partitioning, or NPAR, provides similar capability for Ethernet connections. Up to eight independent connections can be presented per port to the host O/S, making a single dual-port adapter look like 16 NICs to the operating system. Each virtual connection can be set to specific bandwidth and priority settings, providing full quality of service per connection.
The advantage of this port virtualization capability is two-fold. First, the number of cables and connections to a server can be reduced. In the case of storage, four 8Gb Fibre Channel connections can be replaced by a single 32Gb Fibre Channel connection. For Ethernet, eight 1GbE connections can easily be replaced by a single 10GbE connection, and two 10GbE connections can be replaced with a single 25GbE connection, leaving 20% of the new link’s bandwidth to spare.
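To illustrate the consolidation math, here is a hedged sketch of an NPAR-style partition plan; the partition names, bandwidth shares and data structure are invented for illustration and are not Cavium’s actual configuration syntax:

```python
# Hypothetical NPAR-style plan: carve one 25GbE port into partitions that
# replace two 10GbE links, then report the headroom left on the port.
# Partition names and shares are illustrative only, not vendor syntax.
port_speed_gbps = 25

partitions = {
    "vm_network_a": 10,   # replaces the first 10GbE link
    "vm_network_b": 10,   # replaces the second 10GbE link
}

used = sum(partitions.values())
headroom = port_speed_gbps - used
print(f"Provisioned {used} of {port_speed_gbps} Gbps; "
      f"{headroom} Gbps ({headroom / port_speed_gbps:.0%} of the link) to spare")
```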
At HPE, there are more than fifty 10Gb-100GbE Ethernet adapters to choose from across the HPE ProLiant, Apollo, BladeSystem and HPE Synergy server portfolios. That’s a lot of documentation to read and compare. Cavium is proud to be a supplier of eighteen of these Ethernet adapters, and we’ve created a handy quick reference guide to highlight which of these offloads and virtualization features are supported on which adapters. View the complete Cavium HPE Ethernet Adapter Quick Reference guide here.
For Fibre Channel HBAs, there are fewer choices (only nineteen), but we make a quick reference document available for our HBA offerings at HPE as well. You can view the Fibre Channel HBA Quick Reference here.
In summary, when configuring HPE servers, think twice before selecting your I/O device. Choose an Intelligent I/O Adapter like those from HPE and Cavium. Cavium provides the broadest portfolio of intelligent Ethernet and Fibre Channel adapters for HPE Gen9 and Gen10 Servers and they support most, if not all, of the features mentioned in this blog. The best news is that the HPE/Cavium adapters are offered at the same or lower price than other products with fewer features. That means with HPE and Cavium I/O, you get more for less, and it just works too!
By Maen Suleiman, Senior Software Product Line Manager, Marvell
Marvell’s ground-breaking ARMADA® 38x processor series continues to gain momentum, with integration into new network and security designs. Most recently, the ARMADA 385 processor has been incorporated into Netgate’s new SG-3100 product offering.
Netgate’s objective with the SG-3100 was to bring to market an entry-level secure gateway solution that offered substantially more horsepower than competing products in the same price range. The target criteria for the new design were:
Marvell’s engineering team was pleased to collaborate with Netgate on this ambitious project.
Figure 1: Netgate SG-3100 powered by the Marvell ARMADA 385 processor
The SG-3100 exhibits a high degree of flexibility and can be employed as a security firewall, LAN or WAN router, or VPN solution. It can also act as a DHCP server or DNS server, as well as providing intrusion detection system (IDS) and intrusion prevention system (IPS) capabilities. This extremely configurable unit comes equipped with 8GB of eMMC Flash data storage or two M.2 SATA-based solid-state drives (SSDs), and also supports LTE. Thanks to its Marvell® 88E6141 4-port switched LAN interface, the compact, cost-effective product easily facilitates bridging multiple wired and wireless networks.
Several factors drove Netgate’s decision to use Marvell’s ARMADA 385, starting with the ARMADA 38x ecosystem, which includes the ARMADA 38x ClearFog community board from SolidRun, and the ARMADA 38x FreeBSD port developed by Semihalf. Additionally, an increasing number of pfSense users had requested access to a board that provided three Ethernet ports, especially for dual-WAN operation. The ARMADA 385’s extensive embedded connectivity satisfies this need.
Based on the Arm® Cortex®-A9 architecture, the ARMADA 385 system-on-chip (SoC) at the heart of the SG-3100 provides highly effective, dual-core processing capabilities. The SoC has a total of three Ethernet ports: two that support 1 Gbps data rates and a third capable of supporting either 2.5 Gbps or 1 Gbps. In the SG-3100 design, the ARMADA 385 is accompanied by Marvell’s 88E6141 multi-port Ethernet switch, which also supports 2.5Gbps operation through one of its ports.
The Netgate SG-3100 runs at 1.6GHz and is ideal for small offices and domestic environments. And thanks to the constituent IC technology, this solution packs serious throughput at a very compelling price.
By Kelvin Vivian, Director of Intellectual Property, Marvell
The term ‘innovation’ is frequently used in business today. For many, it means conjuring ideas out of the blue that lead to mind-blowing discoveries and achievements.
While this might be the perceived outcome of innovation, the reality is that true innovation can arise in many different forms and at any scale of impact, and a healthy dose of creativity and idea sharing must be encouraged if businesses are to effectively harness the innovative potential of their employees.
At Marvell, we pride ourselves on working together collaboratively and creatively. This enables employees to be the most innovative versions of themselves, and it is what largely contributed to our sixth consecutive year of inclusion in the Clarivate Analytics Top 100 Global Innovators list.
Placement on the list has become the standard measure for innovation across the world and is recognized as a significant achievement. The award itself is based in part on global reach — we hold more than 9,000 patents worldwide — grant success rates and influence of patented technology, and it serves as a testament to Marvell’s culture of innovation and commitment to providing differentiated, breakthrough technology solutions.
While inclusion on this list provides a celebratory point of reflection for all of us at Marvell, it is also a reminder of the work that lies ahead of us and of our colleagues across the industry who, while competing, share a common passion and goal, which, simply put, is to make technology that makes life better. And in today’s market especially, it’s more important than ever that we and our partners continue to push the boundaries of innovation at every turn. As the physicist Albert Einstein said, “You can’t solve a problem on the same level that it was created. You have to rise above it to the next level.”
So while we extend our congratulations to colleagues and competitors alike, without whom there would be no yardstick to measure ourselves by and no goal to aim for, we can’t wait to see what new innovations and types of critical and creative thinking this year will bring.
See you all on the other side!
By Maen Suleiman, Senior Software Product Line Manager, Marvell
Following the success of the MACCHIATObin® development platform, which was released back in the spring, Marvell and technology partner SolidRun have now announced the next stage in the progression of this hardware offering. After drawing on the customer feedback received, a series of enhancements to the original concept have subsequently been made, so that these mini-ITX boards are much more optimized for meeting the requirements of engineers.
Marvell and SolidRun announce the availability of two new MACCHIATObin products that will supersede the previous release. They are the MACCHIATObin Single Shot and the MACCHIATObin Double Shot boards.
As before, these mini-ITX format networking community boards both feature the powerful processing capabilities of Marvell’s ARMADA® 8040 system-on-chip (SoC) and stay true to the original objective of bringing an affordable Arm-based development resource with elevated performance to the market. However, now engineers have a choice in terms of how much supporting functionality comes with it - thus making the platform even more attractive and helping to reach a much wider audience.
Figure 1: MACCHIATObin Single Shot (left) and MACCHIATObin Double Shot (right)
The more streamlined MACCHIATObin Single Shot option presents an entry-level board that should appeal to engineers with budgetary constraints. It has a much lower price tag than the original board, coming in at just $199. It provides two 10G SFP+ connectors (without the option of two 10G copper connectors) and, unlike its predecessor, does not include a DDR4 DIMM by default, but it still delivers a robust 1.6GHz processing speed.
This is complemented by the higher-performance MACCHIATObin Double Shot. This unleashes the full 2GHz of processing capacity that can be derived from the ARMADA 8040, which is based on a 64-bit quad-core Arm Cortex-A72 processor, and 4GB of DDR4 DIMM is included. At only $399 it represents great value for money, costing only slightly more than the original while delivering extra features and stronger operational capabilities. It also comes with additional accessories that are not in the Single Shot package, including a power cable and a microUSB-to-USB cable.
Both the Single Shot and Double Shot versions incorporate heatsink and fan mechanisms in order to ensure that better reliability is maintained through more effective thermal management. The fan has an airflow of 6.7 cubic feet per minute (CFM) with low noise operation. A number of layout changes have been implemented upon the original design to better utilize the available space and to make the board more convenient for those using it. For example, the SD card slot has been moved to make it more accessible and likewise the SATA connectors are now better positioned, allowing easier connection of multiple cables. The micro USB socket has also been relocated to aid engineers.
A 3-pin UART header has been added to the console UART (working in parallel with the FTDI USB-to-UART interface IC). This gives developers an additional connectivity option, making the MACCHIATObin community board more suitable for deployment in remote locations or where it needs to interface with legacy equipment that does not have a USB port. The DIP switches have been replaced with jumpers, which again gives the boards greater versatility. The JTAG connector is no longer assembled by default, and the PCI Express (PCIe) x4 slot has been replaced with an open-ended x4 slot so that it can accommodate a wider variety of boards (x8 and x16 cards as well as x4 PCIe), such as graphics processor cards. Furthermore, the fixed LED has been replaced by one that is general purpose input/output (GPIO) controlled, enabling operational activity to be indicated.
Because these units have the same form factor as the original, they offer a like-for-like replacement for the previous model of the MACCHIATObin board. Existing designs that already use this board can therefore be upgraded to the higher-performance MACCHIATObin Double Shot version or, conversely, scaled down to the MACCHIATObin Single Shot in order to reduce the associated costs.
Together, the MACCHIATObin Double Shot and Single Shot boards show that the team at Marvell is always listening to our customer base and responding to its needs. Learning from the first MACCHIATObin release, we have been able to make significant refinements and, consequently, develop two very distinct new product offerings: one that addresses engineers working to a tight budget, for whom the previous board would not have been viable, and another for engineers who want to boost performance levels.
By Marvell PR Team
Storage is the foundation for a data-centric world, but how tomorrow’s data will be stored is the subject of much debate. What is clear is that data growth will continue to rise significantly. According to a report compiled by IDC titled ‘Data Age 2025’, the amount of data created will grow at an almost exponential rate. This amount is predicted to surpass 163 Zettabytes by the middle of the next decade (almost 8 times what it is today, and nearly 100 times what it was back in 2010). Increasing use of cloud-based services, the widespread roll-out of Internet of Things (IoT) nodes, virtual/augmented reality applications, autonomous vehicles, machine learning and the whole ‘Big Data’ phenomenon will all play a part in the new data-driven era that lies ahead.
Further down the line, the building of smart cities will lead to an additional ramp up in data levels, with highly sophisticated infrastructure being deployed in order to alleviate traffic congestion, make utilities more efficient, and improve the environment, to name a few. A very large proportion of the data of the future will need to be accessed in real-time. This will have implications on the technology utilized and also where the stored data is situated within the network. Additionally, there are serious security considerations that need to be factored in, too.
So that data centers and commercial enterprises can keep overhead under control and make operations as efficient as possible, they will look to follow a tiered storage approach, using the most appropriate storage media so as to lower the related costs. Decisions on the media utilized will be based on how frequently the stored data needs to be accessed and the acceptable degree of latency. This will require the use of numerous different technologies to make it fully economically viable - with cost and performance being important factors.
There is now a wide variety of storage media options out there. In some cases these are long established, while in others they are still emerging. Hard disk drives (HDDs) in certain applications are being replaced by solid state drives (SSDs), and with the migration from SATA to NVMe in the SSD space, NVMe is enabling the full performance capabilities of SSD technology. HDD capacities continue to increase substantially, and their overall cost effectiveness also adds to their appeal. The immense data storage requirements generated by the cloud mean that HDD is seeing considerable traction in this space.
There are other forms of memory on the horizon that will help to address the challenges that increasing storage demands will set. These range from higher capacity 3D stacked flash to completely new technologies, such as phase-change with its rapid write times and extensive operational lifespan. The advent of NVMe over fabrics (NVMf) based interfaces offers the prospect of high bandwidth, ultra-low latency SSD data storage that is at the same time extremely scalable.
Marvell was quick to recognize the ever-growing importance of data storage and has continued to make this sector a major focus, establishing itself as the industry’s leading supplier of both HDD controllers and merchant SSD controllers.
Within only 18 months of its release, Marvell shipped over 50 million of its 88SS1074 SATA SSD controllers with NANDEdge™ error-correction technology. Thanks to its award-winning 88NV11xx series of small form factor DRAM-less SSD controllers (based on a 28nm CMOS semiconductor process), the company is able to offer the market high-performance NVMe memory controller solutions that are optimized for incorporation into compact, streamlined handheld computing equipment, such as tablet PCs and ultra-books. These controllers are capable of supporting read speeds of 1600MB/s while drawing only minimal power from the available battery reserves. Marvell also offers solutions like its 88SS1092 NVMe SSD controller, designed for new compute models that enable the data center to share storage data and further maximize cost and performance efficiencies.
The unprecedented growth in data means that more storage will be required. Emerging applications and innovative technologies will drive new ways of increasing storage capacity, improving latency and ensuring security. Marvell is in a position to offer the industry a wide range of technologies to support data storage requirements, addressing both SSD and HDD implementations and covering all accompanying interface types, from SAS and SATA through to PCIe and NVMe. Check out www.marvell.com to learn more about how Marvell is storing the world’s data.
By Christopher Mash, Senior Director of Automotive Applications & Architecture, Marvell
The in-vehicle networks currently used in automobiles are based on a combination of several different data networking protocols, some of which have been in place for decades. There is the controller area network (CAN), which takes care of the powertrain and related functions; the local interconnect network (LIN), which is predominantly used for passenger/driver comfort purposes that are not time sensitive (such as climate control, ambient lighting, seat adjustment, etc.); the media oriented system transport (MOST), developed for infotainment; and FlexRay™ for anti-lock braking (ABS), electronic power steering (EPS) and vehicle stability functions.
As a result of using different protocols, gateways are needed to transfer data within the infrastructure. The resulting complexity is costly for car manufacturers. It also affects vehicle fuel economy, since the wire harnessing needed for each respective network adds extra weight to the vehicle. The wire harness represents the third heaviest element of the vehicle (after the engine and chassis) and the third most expensive, too. Furthermore, these gateways have latency issues, something that will impact safety-critical applications where rapid response is required.
The number of electronic control units (ECUs) incorporated into cars is continuously increasing, with luxury models now often having 150 or more ECUs, and even standard models are now approaching 80-90 ECUs. At the same time, data intensive applications are emerging to support advanced driver assistance system (ADAS) implementation, as we move toward greater levels of vehicle autonomy. All this is causing a significant ramp in data rates and overall bandwidth, with the increasing deployment of HD cameras and LiDAR technology on the horizon.
As a consequence, the entire approach in which in-vehicle networking is deployed needs to fundamentally change, first in terms of the topology used and, second, with regard to the underlying technology on which it relies.
Currently, the networking infrastructure found inside a car is a domain-based architecture. There are different domains for each key function - one for body control, one for infotainment, one for telematics, one for powertrain, and so on. Often these domains employ a mix of different network protocols (e.g., with CAN, LIN and others being involved).
As network complexity increases, it is now becoming clear to automotive engineers that this domain-based approach is becoming less and less efficient. Consequently, in the coming years, there will need to be a migration away from the current domain-based architecture to a zonal one.
A zonal arrangement means data from different traditional domains is connected to the same ECU, based on the location (zone) of that ECU in the vehicle. This arrangement will greatly reduce the wire harnessing required, thereby lowering weight and cost - which in turn will translate into better fuel efficiency. Ethernet technology will be pivotal in moving to zonal-based, in-vehicle networks.
In addition to the high data rates that Ethernet technology can support, Ethernet adheres to the universally-recognized OSI communication model. Ethernet is a stable, long-established and well-understood technology that has already seen widespread deployment in the data communication and industrial automation sectors. Unlike other in-vehicle networking protocols, Ethernet has a well-defined development roadmap that is targeting additional speed grades, whereas protocols – like CAN, LIN and others – are already reaching a stage where applications are starting to exceed their capabilities, with no clear upgrade path to alleviate the problem.
Future expectations are that Ethernet will form the foundation upon which all data transfer around the car will occur, providing a common protocol stack that reduces the need for gateways between different protocols (along with the hardware costs and the accompanying software overhead). The result will be a single homogeneous network throughout the vehicle in which all the protocols and data formats are consistent. It will mean that the in-vehicle network will be scalable, allowing functions that require higher speeds (10G for example) and ultra-low latency to be attended to, while also addressing the needs of lower speed functions. Ethernet PHYs will be selected according to the particular application and bandwidth demands - whether it is a 1Gbps device for transporting imaging sensing data, or one for 10Mbps operation, as required for the new class of low data rate sensors that will be used in autonomous driving.
Each Ethernet switch in a zonal architecture will be able to carry data for all the different domain activities. All the different data domains will be connected to local switches, and the Ethernet backbone will then aggregate the data, resulting in more effective use of the available resources and allowing different speeds to be supported, as required, while using the same core protocols. This homogeneous network will provide ‘any data, anywhere’ in the car, supporting new applications by combining data from different domains available through the network.
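To make the zonal idea concrete, here is a hedged sketch that groups in-vehicle devices by zone rather than by domain and checks whether each zone’s Ethernet uplink can carry the aggregated traffic; the device names, data rates and uplink speeds are invented purely for illustration:

```python
# Illustrative zonal aggregation: devices from different traditional domains
# attach to the nearest zonal switch, and each zone's Ethernet uplink must
# carry the combined traffic. All names and rates here are invented examples.
zones = {
    "front_left": {"uplink_mbps": 1000,
                   "devices": {"camera": 400, "radar": 100, "door_ecu": 1}},
    "rear":       {"uplink_mbps": 1000,
                   "devices": {"camera": 400, "parking_sensors": 10}},
}

for zone, cfg in zones.items():
    load = sum(cfg["devices"].values())            # aggregate zone traffic, Mbps
    utilization = load / cfg["uplink_mbps"]
    print(f"{zone}: {load} Mbps on a {cfg['uplink_mbps']} Mbps uplink "
          f"({utilization:.0%} utilized)")
```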
Marvell is leading the way when it comes to the progression of Ethernet-based in-vehicle networking and zonal architectures, having launched, back in the summer of 2017, the AEC-Q100-compliant 88Q5050 secure Gigabit Ethernet switch for use in automobiles. This device not only handles the OSI Layer 1-2 (physical layer and data link layer) functions associated with standard Ethernet implementations, it also provides functions at OSI Layers 3, 4 and beyond (the network layer, transport layer and higher), such as deep packet inspection (DPI). This, in combination with Trusted Boot functionality, provides automotive network architects with key features vital to ensuring network security.
By Maen Suleiman, Senior Software Product Line Manager, Marvell
The adoption of multi-gigabit networks and planned roll-out of next generation 5G networks will continue to create greater available network bandwidth as more and more computing and storage services get funneled to the cloud. Increasingly, applications running on IoT and mobile devices connected to the network are becoming more intelligent and compute-intensive. However, with so many resources being channeled to the cloud, there is strain on today’s networks.
Instead of following a conventional centralized cloud model, next-generation architecture will require a much greater proportion of its intelligence to be distributed throughout the network infrastructure. High-performance computing hardware (accompanied by the relevant software) will need to be located at the edge of the network. A distributed model of operation should provide the compute and security functionality required by edge devices, enable compelling real-time services and overcome inherent latency issues for applications like automotive, virtual reality and industrial computing. These applications also need analytics of high-resolution video and audio content.
Through use of its high-performance ARMADA® embedded processors, Marvell is able to demonstrate a highly effective solution that facilitates edge computing implementation on the Marvell MACCHIATObin™ community board using the ARMADA 8040 system-on-chip (SoC). At CES® 2018, the Marvell and Pixeom teams will be demonstrating a fully functional, yet affordable, edge computing system using the Marvell MACCHIATObin community board in conjunction with the Pixeom Edge Platform to extend the functionality of Google Cloud Platform™ services to the edge of the network. The Marvell MACCHIATObin community board will run Pixeom Edge Platform software that extends cloud capabilities by orchestrating and running Docker container-based micro-services on the board.
Currently, the transmission of data-heavy, high resolution video content to the cloud for analysis purposes places a lot of strain on network infrastructure, proving to be both resource-intensive and also expensive. Using Marvell’s MACCHIATObin hardware as a basis, Pixeom will demonstrate its container-based edge computing solution which provides video analytics capabilities at the network edge. This unique combination of hardware and software provides a highly optimized and straightforward way to enable more processing and storage resources to be situated at the edge of the network. The technology can significantly increase operational efficiency levels and reduce latency.
The Marvell and Pixeom demonstration deploys Google TensorFlow™ micro-services at the network edge to enable a variety of key functions, including object detection, facial recognition, text reading (for name badges, license plates, etc.) and intelligent notifications (for security/safety alerts). This technology encompasses the full scope of potential applications, covering everything from video surveillance and autonomous vehicles through to smart retail and artificial intelligence.
Pixeom offers a complete edge computing solution, enabling cloud service providers to package, deploy and orchestrate containerized applications at scale, running on premise “Edge IoT Cores.” To accelerate development, Cores come with built-in machine learning, FaaS, data processing, messaging, API management, analytics, offloading capabilities to Google Cloud, and more.
The MACCHIATObin community board uses Marvell’s ARMADA 8040 processor, with a 64-bit ARMv8 quad-core CPU running at up to 2.0GHz, and supports up to 16GB of DDR4 memory and a wide array of I/Os. Through use of Linux® on the Marvell MACCHIATObin board, the multifaceted Pixeom Edge IoT platform can facilitate the implementation of edge computing servers (or cloudlets) at the periphery of the cloud network. Marvell will be able to show the power of this popular hardware platform to run advanced machine learning, data processing and IoT functions as part of Pixeom’s demo. The role-based access features of the Pixeom Edge IoT platform also mean that developers in different locations can collaborate with one another to create compelling edge computing implementations. Pixeom supplies all the edge computing support needed to allow users of Marvell embedded processors to establish their own edge-based applications, thus offloading operations from the center of the network.
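For a flavor of what one of these container-based micro-services might look like, below is a minimal sketch of an HTTP inference endpoint that could run in a Docker container on the board; the detect_objects() function is a placeholder for a real model invocation and is not the actual Pixeom or Google Cloud API:

```python
# Minimal sketch of an edge inference micro-service that could run inside a
# Docker container on the MACCHIATObin board. detect_objects() is a
# placeholder for a real model call (e.g., a TensorFlow graph); it is NOT
# the Pixeom Edge Platform or Google Cloud API.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def detect_objects(image_bytes):
    # Placeholder: a real service would run inference on the image here.
    return [{"label": "person", "confidence": 0.92}]

class InferenceHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        image_bytes = self.rfile.read(length)
        body = json.dumps(detect_objects(image_bytes)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Listen on all interfaces so the container can expose the port.
    HTTPServer(("0.0.0.0", 8080), InferenceHandler).serve_forever()
```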
Marvell will also be demonstrating the compatibility of its technology with the Google Cloud platform, which enables the management and analysis of deployed edge computing resources at scale. Here, once again the MACCHIATObin board provides the hardware foundation needed by engineers, supplying them with all the processing, memory and connectivity required.
Those visiting Marvell’s suite at CES (Venetian, Level 3 - Murano 3304, 9th-12th January 2018, Las Vegas) will be able t