Abstract
With the increasing complexity of recent autonomous platforms, there is a strong demand to better utilize system resources while satisfying stringent real-time requirements. Embedded virtualization is an appealing technology to meet this demand. It enables the consolidation of real-time systems with different criticality levels on a single hardware platform by enforcing temporal isolation. On multi-core platforms, however, shared hardware resources, such as caches and memory buses, weaken this isolation. In particular, the large last-level cache in recent processors can easily jeopardize the timing predictability of real-time tasks due to the resulting cache interference. While researchers in the real-time systems community have developed solutions to tackle this problem, existing cache management schemes reveal two major limitations when used in a clustered multi-core embedded system. The first is the cache co-partitioning problem, which can lead to incorrect cache allocation and cache underutilization. The second is the cache interference caused by inter-virtual-machine (VM) communication, which prior work has not addressed because it considered only independent tasks. This paper presents a cluster-aware real-time cache allocation scheme to address these problems. The proposed scheme takes into account the cluster information of the system and finds a cache allocation that satisfies the timing and memory requirements of tasks. The scheme also maximizes slack time for meeting task deadlines, which provides flexibility and resilience against unexpected events. Tasks using inter-VM communication are provided with guaranteed blocking time and cache isolation. We have implemented a prototype of our scheme on an Nvidia TX2 clustered multi-core platform and evaluated its effectiveness over cluster-unaware approaches.
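To make the cache co-partitioning problem concrete, the following is a minimal back-of-the-envelope sketch, assuming page-coloring-based cache partitioning (a common software technique on platforms without hardware way partitioning). The color count, DRAM size, and the example VM's memory demand are illustrative assumptions, not values taken from the paper; only the 2 MB shared L2 size appears in the text.

```c
/* Sketch of the cache co-partitioning problem under page coloring.
 * Numbers are hypothetical: 32 colors and 8 GB DRAM are assumptions,
 * not figures from the paper. With coloring, each color carries a fixed
 * fraction of BOTH the shared cache and the physical memory, so the two
 * resources cannot be sized independently per VM. */
#include <stdio.h>

int main(void)
{
    const unsigned total_colors = 32;          /* assumed color count   */
    const unsigned cache_kb     = 2 * 1024;    /* 2 MB shared L2        */
    const unsigned dram_mb      = 8 * 1024;    /* assumed 8 GB DRAM     */

    /* A VM that needs 4 GB of memory must receive at least 16 colors ... */
    unsigned colors_for_memory = (4 * 1024 * total_colors) / dram_mb;   /* 16 */
    /* ... which implicitly grants it half of the shared cache, even if
     * its tasks would meet their deadlines with far less cache.          */
    unsigned cache_granted_kb  = cache_kb * colors_for_memory / total_colors;

    printf("colors needed for memory: %u -> cache granted: %u KB\n",
           colors_for_memory, cache_granted_kb);
    return 0;
}
```

This coupling is what can cause the cache underutilization and incorrect allocation mentioned above: memory demands, not timing requirements, end up dictating how much cache a VM receives.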
Highlights
Embedded system virtualization offers an opportunity to significantly reduce space, power, and cost requirements by consolidating multiple systems into a single hardware platform
It is worth noting that the two existing schemes have other features, e.g., virtual machine (VM) parameter design in CAVM and bandwidth allocation in CaM, but we limit our focus to their cache allocation part
Tasks are pre-allocated to virtual CPUs (VCPUs) based on the worst-fit decreasing (WFD) heuristic, and in accordance with our system model, they cannot be moved to other VCPUs during cache allocation
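As a reference for the pre-allocation step above, the following is a minimal sketch of a WFD-style assignment of tasks to VCPUs. The task set, utilization values, and VCPU count are hypothetical, and the paper's actual admission test and VCPU parameter design are not reproduced here.

```c
/* Minimal sketch of worst-fit decreasing (WFD) task-to-VCPU allocation.
 * Task names and utilizations are hypothetical, for illustration only. */
#include <stdio.h>
#include <stdlib.h>

#define NUM_VCPUS 4

struct task {
    const char *name;
    double util;              /* utilization demand (WCET / period) */
};

/* Sort tasks by decreasing utilization (the "decreasing" part of WFD). */
static int cmp_task_desc(const void *a, const void *b)
{
    double ua = ((const struct task *)a)->util;
    double ub = ((const struct task *)b)->util;
    return (ua < ub) - (ua > ub);
}

int main(void)
{
    struct task tasks[] = {
        { "t1", 0.30 }, { "t2", 0.45 }, { "t3", 0.20 },
        { "t4", 0.25 }, { "t5", 0.10 },
    };
    size_t n = sizeof(tasks) / sizeof(tasks[0]);
    double load[NUM_VCPUS] = { 0.0 };

    qsort(tasks, n, sizeof(tasks[0]), cmp_task_desc);

    for (size_t i = 0; i < n; i++) {
        /* Worst fit: place the task on the currently least-loaded VCPU. */
        int best = 0;
        for (int v = 1; v < NUM_VCPUS; v++)
            if (load[v] < load[best])
                best = v;
        load[best] += tasks[i].util;
        printf("%s -> VCPU%d (load now %.2f)\n",
               tasks[i].name, best, load[best]);
    }
    return 0;
}
```

Worst fit spreads load evenly across VCPUs, which tends to leave some slack on every VCPU; since tasks are then fixed to their VCPUs, the subsequent cache allocation only has to reason about per-VCPU task sets.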
Summary
Embedded system virtualization offers an opportunity to significantly reduce space, power, and cost requirements by consolidating multiple systems into a single hardware platform. Partitioning hypervisors, such as Jailhouse [1], Quest-V [27], and Qplus-Hyper [25], have established a strong foundation for this purpose. They avoid the issues of complex hierarchical scheduling and timing analysis through strict partitioning of CPU and memory, and offer real-time performance close to that of native systems. They can satisfy the increasing demand for mixed-criticality support by co-hosting high-criticality systems, e.g., a certified real-time OS, together with low-criticality systems, e.g., Linux and Android, on the same platform. On the Nvidia TX2 platform used in our evaluation, the Cortex-A57 cluster has four CPU cores and a shared L2 cache of 2 MB.
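As a rough illustration of how such a shared L2 cache is commonly partitioned in software, the following computes the number of page colors available under page coloring. The 2 MB size comes from the text; the 16-way associativity, 64-byte cache lines, and 4 KB pages are assumptions based on typical Cortex-A57 configurations, not figures stated in the paper.

```c
/* Back-of-the-envelope computation of the page colors available on a
 * shared L2 cache, as used by page-coloring-based cache partitioning.
 * Only the 2 MB size is from the text; the other parameters are
 * assumptions typical of a Cortex-A57 L2. */
#include <stdio.h>

int main(void)
{
    unsigned long cache_size = 2UL * 1024 * 1024;   /* 2 MB shared L2       */
    unsigned long ways       = 16;                  /* assumed associativity */
    unsigned long line_size  = 64;                  /* bytes per cache line  */
    unsigned long page_size  = 4096;                /* 4 KB pages            */

    unsigned long sets     = cache_size / (ways * line_size);  /* 2048 sets  */
    unsigned long way_size = sets * line_size;                 /* 128 KB     */
    unsigned long colors   = way_size / page_size;             /* 32 colors  */

    printf("sets=%lu, way size=%lu KB, page colors=%lu\n",
           sets, way_size / 1024, colors);
    return 0;
}
```

Under these assumptions, the allocator would have 32 colors per cluster to distribute among VMs, which is the granularity at which the timing and memory requirements discussed above must be satisfied.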