The data requirements of modern society are escalating relentlessly, with new paradigms changing the way data is processed. The rapidly rising volume of data being uploaded to and downloaded from the cloud (HD video and equally data-intensive immersive gaming content, for example) is putting incredible strain on existing network infrastructure - testing both the bandwidth and the data rates it can support.
The onset of augmented reality (AR) and virtual reality (VR) will require access to considerable processing power while also mandating extremely low latency, to prevent lag effects. The widespread roll-out of IoT infrastructure, connected cars, robotics and industrial automation systems, to name just a few, will likewise impose uncompromising processing and latency demands that current network architectures simply cannot meet.
Transporting data from the network edge back to centralized servers (and vice versa) takes time, and hence adds an unacceptable level of latency to certain applications. All this means that fundamental changes need to be made. Rather than having all the processing resources located at the center of the network, a more distributed model is going to be needed in the future. Though the role of centralized servers will unquestionably still be important, they will be complemented by remote servers located at the edge of the network - closer to the users themselves - thereby mitigating the latency issues that are critical for time-sensitive data.
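Back-of-the-envelope arithmetic illustrates why distance alone matters here. The figures below are assumed, purely illustrative distances; the only hard constant is that light in optical fiber travels at roughly two-thirds the speed of light in vacuum, i.e. about 200 km per millisecond:

```python
# Rough sketch: best-case round-trip propagation delay over fiber.
# Distances are hypothetical examples; real paths add routing,
# queuing and processing delay on top of this physical floor.

C_FIBER_KM_PER_MS = 200.0  # ~speed of light in fiber, km per millisecond

def round_trip_ms(distance_km: float) -> float:
    """Best-case round-trip propagation delay, in milliseconds."""
    return 2 * distance_km / C_FIBER_KM_PER_MS

# A centralized data center ~2,000 km away vs. an edge server ~20 km away.
print(round_trip_ms(2000))  # 20.0 ms of round-trip delay before any processing
print(round_trip_ms(20))    # 0.2 ms, leaving ample headroom for computation
```

Even in this idealized model, a distant central server consumes a latency budget comparable to what lag-sensitive AR/VR applications can tolerate end to end, before a single cycle of processing has occurred.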
The figures speak for themselves. It is estimated that by 2020 approximately 45% of fog computing-generated data will be stored, processed, analyzed and subsequently acted upon close to or at the edge of the network. Running in tandem with this, data centers will start to utilize in-storage processing, where data processing resources are placed closer to the storage drive in order to alleviate CPU congestion and mitigate network latency. This dispenses with the need to continuously transfer large quantities of data to and from storage so that it can be processed; processing tasks instead take place inside the storage controller.
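The saving from in-storage processing can be sketched in a few lines. In the traditional flow every record crosses the bus to the host CPU; when the filter runs inside the storage controller, only the matching records move. The Python sketch below is purely illustrative - all names are hypothetical and the "drive" is just a list - but the data-movement contrast it models is the point of the technique:

```python
# Illustrative sketch of in-storage processing (pushing a filter down
# to the drive). All names are hypothetical; counts model bus traffic.

records = list(range(1_000_000))  # stand-in for records held on the drive

def host_side_filter(drive_data):
    # Traditional flow: every record is transferred to the host CPU,
    # which then applies the filter.
    transferred = len(drive_data)
    matches = [r for r in drive_data if r % 1000 == 0]
    return matches, transferred

def in_storage_filter(drive_data):
    # In-storage flow: the controller filters locally, so only the
    # matching records cross the bus to the host.
    matches = [r for r in drive_data if r % 1000 == 0]
    return matches, len(matches)

m1, moved1 = host_side_filter(records)
m2, moved2 = in_storage_filter(records)
assert m1 == m2                 # identical results either way
print(moved1, moved2)           # 1000000 vs 1000 records over the bus
```

For this (contrived) selective query, the host-side approach moves a thousand times more data than the in-storage approach to produce the same answer - which is exactly the transfer overhead the technique is meant to eliminate.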
The transition from traditional data centers to edge-based computing, along with the onset of in-storage processing, will call for a new breed of processor devices. In addition to delivering the operational performance that high-throughput, low-latency applications will require, these devices will also need to meet the power, cost and space constraints that are going to characterize edge deployment.
Through its highly advanced portfolio of ARMADA® Arm-based multi-core embedded processors, Marvell has been able to supply the industry with processing solutions that help engineers address the challenges just outlined. These ICs combine high levels of integration, elevated performance and low-power operation. Using ARMADA as a basis, the company has worked with technology partners to co-develop the MACCHIATObin™ and ESPRESSObin® community boards. These community boards, each built around a 64-bit ARMADA processor, bring together a high-performance single-board computing platform and open source software for developers and designers working on a host of networking, storage and connectivity applications. They give users both the raw processing capabilities and the extensive array of connectivity options needed to develop proprietary edge computing applications from the ground up.
Incorporating a total of six MACCHIATObin boards plus a Marvell high-density Prestera DX 14-port 10 Gigabit Ethernet switch IC, the NFV PicoPod from PicoCluster is another prime example of ARMADA technology in action. This ultra-compact unit provides engineers with a highly cost-effective and energy-efficient platform on which they can implement their own virtualized network applications. Fully compliant with the OPNFV Pharos specification, it opens up the benefits of NFV technology to a much broader cross section of potential customers, allowing everyone from the engineering teams of large enterprises down to engineers working solo to rapidly develop, verify and deploy virtual network functions (VNFs) - effectively providing them with their own 'datacenter on desktop'.
The combination of Marvell IoT enterprise edge gateway technology with the Google Cloud IoT Core platform is another way via which greater intelligence is being placed at the network periphery. The upshot of this will be that the estimated tens of billions of connected IoT nodes that will be installed over the course of the coming years can be managed in the most operationally efficient manner, offloading much of the workload from the core network’s processing capabilities and only utilizing them when it is completely necessary. Check out www.marvell.com to learn more about how Marvell is processing the world’s data.