Sensors for perceiving multimodal stimuli, spanning the five human senses and beyond, have reached an unprecedented level of sophistication and miniaturization, raising the prospect of man-made large-scale complex systems that rival those found in nature. Artificial intelligence (AI) at the edge aims to integrate such sensors with real-time cognitive abilities enabled by recent advances in AI. This AI progress, however, has been achieved only with massive computing power that would not be available in most distributed systems of interest. Nature has solved this problem by integrating computing, memory and sensing functionalities in the same hardware, so that each part can learn its environment in real time and take local actions that lead to stable global functionality. While building such systems is challenging in itself, deploying them would raise a new set of security challenges: as in nature, malicious agents can attack and commandeer the system to perform their own tasks. This article defines the types of systemic attacks that would emerge and introduces a multiscale framework for combating them. A primary thesis is that edge AI systems must contend with unknown attack strategies that can only be countered in real time by low-touch adaptive learning systems. This article is part of the theme issue 'Emerging technologies for future secure computing platforms'.