By Vienna Alexander, Marketing Content Professional, Marvell
Marvell is proud to celebrate its 10th consecutive year as the “Fittest Firm” in the Silicon Valley Turkey Trot. Since 2016, Marvell has sponsored the competition and consistently earned this distinction for having the highest employee participation among large firms.
The Silicon Valley Turkey Trot is the largest Thanksgiving Day race in the United States. Embracing the spirit of giving, the event donates all proceeds to local non-profit organizations, including Healthier Kids Foundation, HomeFirst, and Second Harvest of Silicon Valley. The race has contributed more than $13 million and provided more than 10 million meals to these causes since its inception in 2005.
This year, on Thanksgiving morning, more than 700 Marvell employees and their families joined the race to support these local organizations and stay active during the holiday season.
“We’re incredibly proud to have so many employees participate in this meaningful event year after year,” said Chris Koopmans, Marvell President and Chief Operating Officer. “Marvell has supported the Silicon Valley Turkey Trot for a long time, and we’re honored to contribute to such worthwhile organizations in our community. We care deeply about promoting physical and mental well-being, and it’s inspiring to see our team come together in support of such an important cause.”
By Chander Chadha, Director of Marketing, Flash Storage Products, Marvell
AI is all about dichotomies. Distinct computing architectures and processors have been developed for training and inference workloads. In the past two years, scale-up and scale-out networks have emerged.
Soon, the same will happen in storage.
The demands of AI infrastructure are prompting storage companies to develop SSDs, controllers, NAND and other technologies fine-tuned to support GPUs, with an emphasis on higher IOPS (input/output operations per second) for AI inference. These will be fundamentally different from CPU-connected drives, where latency and capacity are the bigger focus points. Nor is this bifurcation likely to be the last: expect to see drives further optimized specifically for training or for inference.
As in other technology markets, the changes are being driven by the rapid growth of AI and the equally rapid growth in demand for better performance, efficiency and TCO in AI infrastructure. The total amount of SSD capacity inside data centers is expected to double to approximately 2 zettabytes by 2028, with the growth primarily fueled by AI.1 By that year, SSDs will account for 41% of the installed base of data center drives, up from 25% in 2023.1
Greater storage capacity, however, also potentially means more storage network complexity, latency and storage management overhead, along with more power. In 2023, SSDs accounted for 4 terawatt hours (TWh) of data center power, or around 25% of the 16 TWh consumed by storage. By 2028, SSDs are slated to account for 11 TWh, or 50%, of storage’s expected total for the year.1 While storage represents less than 5% of total data center power consumption, the total remains large and provides a strong incentive for savings. Reducing storage power by even 1 TWh, less than 10% of that total, would save enough electricity to power 90,000 US homes for a year.2 Finding the precise balance between capacity, speed, power and cost will be critical for both AI data center operators and customers. Creating different categories of technologies becomes the first step toward optimizing products in a way that will be scalable.
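As a rough sanity check on the homes figure, here is a back-of-the-envelope sketch. The per-home consumption value is an assumption based on the commonly cited US residential average of roughly 11 MWh per year; it does not come from this article.

```python
# Back-of-the-envelope check of the "90,000 US homes" claim above.
# Assumption: an average US home uses ~11 MWh of electricity per year
# (roughly the published US residential average; not from the article).
SAVINGS_TWH = 1.0
MWH_PER_HOME_PER_YEAR = 11.0

savings_mwh = SAVINGS_TWH * 1_000_000             # 1 TWh = 1,000,000 MWh
homes = savings_mwh / MWH_PER_HOME_PER_YEAR
print(f"~{homes:,.0f} homes powered for a year")  # ~90,909, consistent with ~90,000
```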
By Vienna Alexander, Marketing Content Professional, Marvell

Optical connectivity is the backbone of AI infrastructure and an expanding opportunity where Marvell shines, given its comprehensive optical connectivity portfolio.
Marvell showcased its notable developments at ECOC, the European Conference on Optical Communication, alongside various companies contributing to the hardware needed for this AI era.
Learn more about these impactful optical innovations enabling AI infrastructure, along with the trends shaping the market.
By Vienna Alexander, Marketing Content Professional, Marvell

Marvell was named the top winner in the Connectivity category of the 2025 LEAP Awards for its 1.6 Tbps LPO Optical Chipset. The judges' remarks noted that “the value case writes itself—less power, reduced complexity but substantial bandwidth increase.” Marvell took the gold spot, reaffirming the industry-leading connectivity portfolio it continues to build.
The LEAP (Leadership in Engineering Achievement Program) Awards recognize best-in-class product and component designs across 11 categories, as judged by an independent panel of experts. The awards are published by Design World, a trade magazine covering design engineering topics in depth.
This chipset, which combines a 200G-per-lane TIA (transimpedance amplifier) with laser drivers, enables 800G and 1.6T linear-drive pluggable optics (LPO) modules: four 200G lanes yield 800G, and eight yield 1.6T. LPO modules offer longer reach than passive copper at low power and low latency, and are designed for scale-up compute-fabric applications.
By Chris McCormick, Product Management Director, Cloud Platform Group, Marvell
Co-packaged optics (CPO) will play a fundamental role in improving the performance, efficiency, and capabilities of networks, especially the scale-up fabrics for AI systems.
Realizing these benefits will also require a fundamental transformation in the way computing and switching assets are designed and deployed in data centers. Marvell is partnering with equipment manufacturers, cable specialists, interconnect companies and others to ensure the infrastructure for delivering CPO will be ready when customers are ready to adopt it.
The Trends Driving CPO
AI’s insatiable appetite for bandwidth and the physical limitations of copper are driving demand for CPO. Network bandwidth doubles every two to three years, while the reach of copper shrinks meaningfully as bandwidth increases. Meanwhile, data center operators are clamoring for better performance per watt and per rack.
CPO ameliorates the problem by moving the electrical-to-optical conversion from an external slot on the faceplate to a position as close to the ASIC as possible. This shortens the copper trace, which may improve the link budget enough to remove digital signal processor (DSP) or retimer functionality, thereby reducing overall power per bit, a key metric in AI data center management. Achieving commercial viability and scalability, however, has taken years of R&D across the ecosystem, and the benefits will likely depend on the use cases and applications where CPO is deployed.
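To make the power-per-bit metric concrete, here is a minimal sketch. The wattage figures below are hypothetical assumptions chosen purely to illustrate the calculation; they are not Marvell specifications or measurements.

```python
# Minimal sketch of the power-per-bit metric discussed above.
# All wattages are hypothetical assumptions for illustration only,
# not product specifications.

def pj_per_bit(power_watts: float, bandwidth_tbps: float) -> float:
    """Energy per bit in picojoules: (joules/second) / (bits/second)."""
    bits_per_second = bandwidth_tbps * 1e12
    return power_watts / bits_per_second * 1e12  # J/bit -> pJ/bit

# Hypothetical 1.6 Tbps link: DSP-based pluggable vs. a linear path without the DSP.
print(pj_per_bit(25.0, 1.6))  # ~15.6 pJ/bit at an assumed 25 W
print(pj_per_bit(10.0, 1.6))  # ~6.3 pJ/bit at an assumed 10 W
```

Under these assumed numbers, removing the DSP stage cuts energy per bit by more than half, which is why power per bit is the lens through which CPO designs tend to be judged.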
While analyst firms such as LightCounting predict that optical modules will continue to constitute the majority of optical links inside data centers through the decade,1 CPO will likely become a meaningful segment.
The CPO Server Tray
The image below shows a conceptualized AI compute tray with CPO, developed with products from SENKO Advanced Components and Marvell. The design contains room for four XPUs and up to 102.4 Tbps of bandwidth delivered through 1,024 optical fibers, all in a 1U tray. The density and reach enabled by CPO open the door to scale-up domains far beyond what is possible with copper alone.

When asked at recent trade shows how many fibers the tray contained, most attendees guessed around 250 fibers. The actual number is 1,152 fibers.
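A quick arithmetic footnote on the tray figures: the stated 102.4 Tbps across the 1,024 fibers in the design works out to 100 Gbps per fiber. How the 1,152 total fibers relate to the 1,024 carrying the stated bandwidth is not spelled out above, so the sketch below simply assumes the extra 128 fibers do not carry that payload.

```python
# Per-fiber throughput implied by the tray figures above.
# Assumption (not stated in the article): only 1,024 of the 1,152 fibers
# carry the quoted 102.4 Tbps; the remaining 128 serve other purposes.
TRAY_TBPS = 102.4
PAYLOAD_FIBERS = 1024
TOTAL_FIBERS = 1152

gbps_per_fiber = TRAY_TBPS * 1000 / PAYLOAD_FIBERS
print(f"{gbps_per_fiber:.0f} Gbps per payload fiber")        # 100 Gbps
print(f"{TOTAL_FIBERS - PAYLOAD_FIBERS} additional fibers")  # 128
```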