Abstract

This paper presents SPHERE, a project aimed at realizing an integrated framework that abstracts the hardware complexity of modern, interconnected systems-on-chip (SoCs) and simplifies the management of their heterogeneous computational resources. The SPHERE framework leverages hypervisor technology to virtualize computational resources and isolate the behavior of different subsystems running on the same platform, while providing safety, security, and real-time communication mechanisms. The main challenges addressed by SPHERE are discussed in the paper along with a set of new technologies developed in the context of the project. They include isolation mechanisms for mixed-criticality applications, predictable I/O virtualization, the management of time-sensitive networks with heterogeneous traffic flows, and the management of field-programmable gate arrays (FPGAs) to provide efficient implementations of cryptography modules, as well as hardware acceleration for deep neural networks. The SPHERE architecture is validated through an autonomous driving use case.

Highlights

  • Today’s commercial off-the-shelf (COTS) heterogeneous multicore platforms offer great opportunities for developing high-performance embedded computing systems

  • The SPHERE framework leverages hypervisor technology to virtualize computational resources and isolate the behavior of different subsystems running on the same platform, while providing safety, security, and real-time communication mechanisms

  • In [26], a novel heuristic scheduler is proposed for the Stream Reservation (SR) classes originally defined by the Ethernet Audio Video Bridging (AVB) standards and later incorporated into the IEEE 802.1Q-2018 standard

Summary

INTRODUCTION

Today’s commercial off-the-shelf (COTS) heterogeneous multicore platforms offer great opportunities for developing high-performance embedded computing systems. The SPHERE framework leverages hypervisor technology to virtualize computational resources and isolate the behavior of different subsystems running on the same platform, while providing safety, security, and real-time communication mechanisms.

TIME-PREDICTABLE I/O VIRTUALIZATION

SPHERE aims at providing a predictable I/O communication mechanism, ensuring that the maximum lateness is bounded. To achieve this goal in a virtualized environment, the I/O handling strategy needs to allow multiple VMs to share one or more I/O devices. In SPHERE, this is done using a predictable, software-based I/O virtualization mechanism; the paper illustrates its structure for the case where the network device is shared among different VMs. Applications running in the VMs perform I/O requests by interacting with a para-virtualized application programming interface (API) offered by the hypervisor, including function calls to send data to the shared device (as sketched below). Combining direct dispatch with cache-coloring protection, in contrast, achieved an average-case delay of 240 µs and kept the worst-case delay within 1390 µs, even when running with the hypervisor and under memory aggression.
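As a rough illustration of such a para-virtualized I/O path, the following C sketch shows how a guest application might submit a send request to the hypervisor through a hypercall-style interface. All names here (sphere_io_send, HVC_IO_SEND, struct io_request) are hypothetical and are not the actual SPHERE API; the hypercall is replaced by a stub so the example is self-contained.

/*
 * Minimal sketch of a para-virtualized send call, assuming a
 * hypercall-based interface between guest VMs and the hypervisor.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define HVC_IO_SEND  0x01u          /* hypothetical hypercall number */
#define IO_BUF_SIZE  1514u          /* e.g., one Ethernet frame */

/* Request descriptor placed in memory shared with the hypervisor. */
struct io_request {
    uint32_t device_id;             /* which shared device (e.g., the NIC) */
    uint32_t length;                /* payload length in bytes */
    uint8_t  payload[IO_BUF_SIZE];  /* data to transmit */
};

/* Stub standing in for the actual trap into the hypervisor (e.g., an
 * ARM 'hvc' instruction); here it only simulates accepting the request. */
static int hypercall(uint32_t num, const struct io_request *req)
{
    printf("hypercall %u: device %u, %u bytes queued\n",
           num, req->device_id, req->length);
    return 0;                       /* 0 = request accepted */
}

/* Para-virtualized send call exposed to guest applications. */
int sphere_io_send(uint32_t device_id, const void *data, uint32_t len)
{
    struct io_request req;

    if (len > IO_BUF_SIZE)
        return -1;                  /* request exceeds the shared buffer */

    req.device_id = device_id;
    req.length    = len;
    memcpy(req.payload, data, len);

    /* The hypervisor (or a dedicated I/O server) dequeues the request and
     * forwards it to the physical device according to a predictable policy. */
    return hypercall(HVC_IO_SEND, &req);
}

int main(void)
{
    const char msg[] = "sensor frame";
    return sphere_io_send(0 /* shared NIC */, msg, sizeof msg);
}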
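The cache-coloring protection mentioned above partitions the shared last-level cache (LLC) by assigning disjoint sets of page colors to different VMs, so their memory accesses cannot evict each other's cache lines. The short sketch below shows the usual way a page's color is derived from its physical frame number; the cache geometry constants are illustrative assumptions, not the parameters of the SoC used in SPHERE.

/*
 * Minimal sketch of page coloring for LLC partitioning.
 * Cache geometry below is assumed for illustration only.
 */
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE   4096u
#define LLC_SIZE    (1u << 20)      /* 1 MiB last-level cache (assumed) */
#define LLC_WAYS    16u             /* 16-way set associative (assumed) */

/* Number of distinct colors = (cache size / associativity) / page size. */
#define NUM_COLORS  (LLC_SIZE / LLC_WAYS / PAGE_SIZE)

/* A page's color is given by the low-order frame-number bits that also
 * select the cache set: pages of different colors map to disjoint sets,
 * so they can never evict each other from the LLC. */
static unsigned page_color(uint64_t phys_addr)
{
    return (unsigned)((phys_addr / PAGE_SIZE) % NUM_COLORS);
}

int main(void)
{
    /* A hypervisor enforcing coloring backs each VM only with frames of
     * the colors assigned to that VM, isolating their cache footprints. */
    printf("colors available: %u\n", NUM_COLORS);
    printf("color of frame at 0x80000000: %u\n", page_color(0x80000000ull));
    printf("color of frame at 0x80004000: %u\n", page_color(0x80004000ull));
    return 0;
}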

FPGA MANAGEMENT
VIRTUALIZED TSN COMMUNICATIONS
CASE STUDY
CONCLUSION