By Michael Kanellos, Head of Influencer Relations, Marvell
Data infrastructure needs more: more capacity, speed, efficiency, bandwidth and, ultimately, more data centers. The number of data centers owned by the top four cloud operators has grown by 73% since 2020 [1], while total worldwide data center capacity is expected to double to 79 gigawatts (GW) in the near future [2].
Aquila, the industry’s first O-band coherent DSP, marks a new chapter in optical technology. O-band optics lower the power consumption and complexity of optical modules for links ranging from two to 20 kilometers. O-band modules offer longer reach than the PAM4-based optical modules used inside data centers and shorter reach than C-band and L-band coherent modules, giving users an optimized solution for the growing number of data center campuses emerging to manage the expected AI data traffic.
Take a deep dive into our O-band technology with Xi Wang’s blog, O-Band Coherent: An Idea Whose Time Is (Nearly) Here, originally published in March, below:
O-Band Coherent: An Idea Whose Time Is (Nearly) Here
By Xi Wang, Vice President of Product Marketing of Optical Connectivity, Marvell
Over the last 20 years, data rates for optical technology have climbed 1000x while power per bit has declined by 100x, a stunning trajectory that in many ways paved the way for the cloud, mobile Internet and streaming media.
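To put that trajectory in perspective, here is a back-of-envelope sketch in Python of the annual improvement rates those figures imply (the 20-year window and the 1000x and 100x factors come from the paragraph above):

```python
# Implied annual rates behind "1000x data rate, 100x lower power
# per bit over 20 years" (figures from the text above).

YEARS = 20

data_rate_growth = 1000 ** (1 / YEARS)   # ~1.41x per year
power_per_bit_drop = 100 ** (1 / YEARS)  # ~1.26x reduction per year

print(f"Data rate grew ~{(data_rate_growth - 1) * 100:.0f}% per year")
print(f"Power per bit fell ~{(1 - 1 / power_per_bit_drop) * 100:.0f}% per year")
```

Sustaining roughly 41% annual bandwidth growth alongside a roughly 21% annual decline in power per bit is what made cloud-scale optics economical.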
AI represents the next inflection point in bandwidth demand. Servers powered by AI accelerators and GPUs have far greater bandwidth needs than typical cloud servers: seven high-end GPUs alone can max out a switch that would ordinarily serve 500 two-processor cloud servers. Just as important, demand for AI services, particularly higher-value services such as medical imaging or predictive maintenance, will further drive the need for bandwidth. The AI market alone is expected to reach $407 billion by 2027.
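To see why a handful of accelerators can saturate a switch, consider the implied per-device bandwidth. The seven-GPU and 500-server figures come from the text; the 51.2 Tbps switch capacity below is an illustrative assumption:

```python
# Illustrative ratio from the article: a switch that serves 500
# two-processor cloud servers is saturated by just 7 high-end GPUs.
# The switch capacity is an assumption, not a figure from the text.

SWITCH_TBPS = 51.2

bw_per_gpu = SWITCH_TBPS / 7       # ~7.3 Tbps per GPU
bw_per_server = SWITCH_TBPS / 500  # ~0.1 Tbps per cloud server

print(f"~{bw_per_gpu:.1f} Tbps per GPU vs ~{bw_per_server * 1000:.0f} Gbps per server")
print(f"A GPU needs roughly {bw_per_gpu / bw_per_server:.0f}x the bandwidth")
```

Whatever the exact switch size, the ratio holds: each GPU demands on the order of 70x the network bandwidth of a conventional cloud server.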
By Michael Kanellos, Head of Influencer Relations, Marvell and Vienna Alexander, Marketing Content Intern, Marvell
Is copper dead?
Not by a long shot. Copper technology, however, will undergo a dramatic transformation over the next several years. Here’s a guide.
1. Copper is the Goldilocks Metal
Copper has been a staple ingredient of interconnects since the days of Colossus and ENIAC. It is a superior conductor, costs far less than gold or silver and offers relatively low resistance. Copper also replaced aluminum for connecting transistors inside chips in the late 1990s because its 40% lower resistance improved performance by 15% [1].
Copper is also simple, reliable and hardy. Interconnects are essentially wires. By contrast, optical interconnects require a host of components such as optical DSPs, transimpedance amplifiers and lasers.
“The first rule in optical technology is ‘Whatever you can do in copper, do in copper,’” says Dr. Loi Nguyen, EVP of optical technology at Marvell.
2. But It’s Still a Metal
Nonetheless, electrical resistance remains. As bandwidth and network speeds increase, so do heat and power consumption. Increasing bandwidth also shortens reach: doubling the data rate cuts the achievable distance by roughly 30–50% (see the figure and sketch below).
As a result, optical technologies have replaced copper in interconnects five meters or longer in data centers and telecommunication networks.
[Figure: copper interconnect reach vs. data rate. Source: Marvell]
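That trade-off can be sketched with a simple power-law model. This is illustrative only: the exponent and the 3-meter baseline at 25 Gbps are assumptions chosen to land in the 30–50% range quoted above, not Marvell data:

```python
# A rough power-law sketch of the copper reach/data-rate trade-off:
# doubling the data rate cuts reach by ~30-50%. The exponent alpha
# and the 3 m baseline at 25 Gbps are illustrative assumptions.

def copper_reach(rate_gbps, base_rate=25.0, base_reach_m=3.0, alpha=0.8):
    """Estimated reach in meters: reach falls as rate^-alpha."""
    return base_reach_m * (base_rate / rate_gbps) ** alpha

for rate in (25, 50, 100, 200):
    print(f"{rate:>3} Gbps -> ~{copper_reach(rate):.1f} m")
```

With alpha = 0.8, each doubling of the data rate trims reach by about 43%, which is why copper gives way to optics beyond a few meters.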
By Michael Kanellos, Head of Influencer Relations, Marvell
With AI computing and cloud data centers requiring unprecedented levels of performance and power, Marvell is leading the way with transformative optical interconnect solutions for accelerated infrastructure to meet the rising demand for network bandwidth.
At the ECOC 2024 Exhibition Industry Awards event, Marvell received the Most Innovative Pluggable Transceiver/Co-Packaged Module Award for the Marvell® COLORZ® 800 family. Launched in 2020 for ECOC’s 25th anniversary, the ECOC Exhibition Industry Awards spotlight innovation in optical communications, transport, and photonic technologies. This recognition highlights the company’s innovations in ZR/ZR+ technology for accelerated infrastructure and demonstrates its critical role in driving cloud and AI workloads.
By Michael Kanellos, Head of Influencer Relations, Marvell
Coherent optical digital signal processors (DSPs) are the long-haul truckers of the communications world. The chips are essential ingredients in the 600+ subsea Internet cables that crisscross the oceans and the extended geographic links weaving together telecommunications networks and clouds.
One of the most critical trends in long-distance communications has been the shift away from large, rack-scale transport boxes built around embedded DSPs, often from the same vendor, toward pluggable modules in standardized form factors, running DSPs from silicon suppliers and tuned to the power limits of those modules.
With the advent of 800G ZR/ZR+ modules, the market arrives at another turning point. Here’s what you need to know.
It’s the Magic of Modularity
PCs, smartphones, solar panels and other technologies that experienced rapid adoption had one thing in common: general agreement on the key ingredients. By building products around select components, accepted standards and modular form factors, an ecosystem of suppliers sprouted. And for customers that meant fewer shortages, lower prices and accelerated innovation.
The same holds true for pluggable coherent modules. 100 Gbps coherent modules based on the ZR specification debuted in 2017. The modules could carry data approximately 80 kilometers and consumed approximately 4.5 watts per 100G of data delivered. Microsoft became an early adopter and used the modules to build a mesh of metro data centers [1].
Flash forward to 2020. Power per 100G dropped to 4W and distance exploded: 120-kilometer links became possible with modules based on the ZR standard, and 400-kilometer links with the ZR+ standard. (An organization called OIF maintains the ZR standard; ZR+ is governed by OpenROADM. Module makers often build both varieties. The main difference between the two is the amplifier: the DSPs, number of channels and form factors are the same.)
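Put side by side, the generational gains read like this (figures taken from the text above; the structure is just a convenience for comparison):

```python
# Pluggable coherent module generations, using the power and reach
# figures cited in the text above.

modules = [
    # (name, year, watts per 100G, reach in km)
    ("100G ZR", 2017, 4.5, 80),
    ("400ZR",   2020, 4.0, 120),
    ("400ZR+",  2020, 4.0, 400),
]

for name, year, w_per_100g, reach_km in modules:
    print(f"{name:>8} ({year}): {w_per_100g} W per 100G, ~{reach_km} km reach")
```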
The market responded. 400ZR/ZR+ was adopted more rapidly than any other technology in optical history, according to Cignal AI principal analyst Scott Wilkinson.
“It opened the floodgates to what you could do with coherent technology if you put it in the right form factor,” he said during a recent webinar.
By Annie Liao, Product Management Director, Connectivity, Marvell
PCIe has historically been the protocol for communication between the CPU and computer subsystems. Its speed has increased steadily since its debut in 2003, and after 20 years of development we are currently at PCIe Gen 5, with an I/O rate of 32 GT/s per lane. Many factors are driving the increase in PCIe speed; the most prominent are artificial intelligence (AI) and machine learning (ML).
For CPUs and AI accelerators/GPUs to work together effectively on larger training models, the communication bandwidth of the PCIe-based interconnects between them needs to scale with the exponentially increasing size of the parameters and data sets used in AI models. Although the number of PCIe lanes supported increases with each generation, the physical constraints of package beachfront and PCB routing put a limit on the maximum number of lanes in a system. That leaves increasing the I/O speed as the only way to push more data transactions per second. The compute interconnect bandwidth demand fueled by AI and ML is driving a faster transition to the next generation of PCIe: PCIe Gen 6.
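The per-lane rates across generations make the scaling pressure concrete. A quick sketch (the GT/s figures are the published per-generation rates; the bandwidth arithmetic is simplified and ignores encoding overhead):

```python
# Published per-lane signaling rates for each PCIe generation (GT/s).
# The rate roughly doubles each generation; Gen 3's smaller step from
# 5 to 8 GT/s was offset by more efficient 128b/130b encoding.
PCIE_GT_PER_S = {1: 2.5, 2: 5.0, 3: 8.0, 4: 16.0, 5: 32.0, 6: 64.0}

def raw_x16_gbps(gen: int, lanes: int = 16) -> float:
    """Raw unidirectional x16 bandwidth in Gb/s, before encoding overhead."""
    return PCIE_GT_PER_S[gen] * lanes

for gen, rate in PCIE_GT_PER_S.items():
    print(f"Gen {gen}: {rate:>4} GT/s per lane -> {raw_x16_gbps(gen):.0f} Gb/s raw (x16)")
```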
PCIe used 2-level Non-Return-to-Zero (NRZ) modulation from its inception, and speed increases up to Gen 5 were achieved by doubling the I/O rate. For Gen 6, PCI-SIG adopted 4-level Pulse-Amplitude Modulation (PAM4), which encodes 2 bits of data per symbol (00, 01, 10, 11). The reduced signal margin that results from moving from 2-level to 4-level signaling has also necessitated Forward Error Correction (FEC), a first for PCIe links. With the adoption of PAM4 signaling and FEC, Gen 6 marks an inflection point for PCIe from both the signaling and protocol-layer perspectives.
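A minimal sketch of the PAM4 idea: each symbol takes one of four amplitude levels and carries two bits, so the link moves twice the data of NRZ at the same symbol rate. The Gray-coded level mapping below is a common convention and an assumption here, not a quote from the specification:

```python
# Minimal PAM4 mapping sketch: two bits per 4-level symbol.
# Gray coding of the levels is a common convention, assumed here.
PAM4_LEVELS = {(0, 0): 0, (0, 1): 1, (1, 1): 2, (1, 0): 3}

def pam4_encode(bits):
    """Map an even-length bit sequence to a list of PAM4 levels."""
    pairs = zip(bits[0::2], bits[1::2])
    return [PAM4_LEVELS[pair] for pair in pairs]

bits = [1, 0, 0, 1, 1, 1, 0, 0]
print(pam4_encode(bits))  # [3, 1, 2, 0] -- 8 bits in 4 symbols
```

Packing two bits per symbol is also why margins shrink: four levels must fit in the voltage swing that NRZ splits between just two, which is what drives the FEC requirement.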
In addition to AI/ML, the disaggregation of memory and storage is an emerging trend in compute applications that has a significant impact on PCIe-based interconnects. PCIe has historically been used for on-board and in-chassis interconnects; attaching front-facing NVMe SSDs is a common example. With the increasing trend toward flexible resource allocation, and the advancement of CXL technology, the server industry is now moving toward disaggregated, composable infrastructure. In this disaggregated architecture, PCIe endpoints sit in a different chassis from the PCIe root complex, requiring the PCIe link to travel out of the system chassis. This is typically achieved with direct-attach cables (DACs) that can reach three to five meters.