Machine learning (ML) is now embedded in a wide range of computing devices, consumer electronics, and cyber-physical systems. Smart sensors are deployed everywhere, in applications such as wearables and perceptual computing devices, and intelligent algorithms power our connected world. These devices collect and aggregate large volumes of data, and in doing so they augment our society in multiple ways, from healthcare to social networks to consumer electronics. To process these immense volumes of data, ML is emerging as the de facto analysis tool of our Big Data society. Applications spanning infrastructure (smart cities, intelligent transportation systems, and smart grids, to name a few), social networks and content delivery, e-commerce and smart factories, and emerging concepts such as self-driving cars and autonomous robots are all powered by ML technologies.

These emerging systems require real-time inference and decision support; they may therefore rely on customized hardware accelerators, are typically bound by limited resources, and are restricted to limited connectivity and bandwidth. Near-sensor computation and near-sensor intelligence have thus emerged as necessities for sustaining the paradigm shift of our connected world. The need for real-time intelligent data analytics, especially in the era of Big Data, for decision support near the data acquisition points calls for revolutionizing the way we design, build, test, and verify the processors, accelerators, and systems that facilitate ML (and deep learning in particular) in resource-constrained environments at the edge and the fog. Traditional von Neumann architectures are no longer sufficient or suitable, primarily because of limitations in both performance and energy efficiency caused largely by the sheer amount of data movement.

Furthermore, due to the connected nature of such systems, security and reliability are critically important. Robustness, in the form of reliability and the capability to operate in the presence of faults, whether malicious or accidental, is therefore a critical need for such systems. Moreover, these systems operate on input data characterized by the four "V's": velocity (the speed of data generation), variety (varying forms and types), veracity (uncertain reliability and trustworthiness), and volume (large amounts of data); the robustness of such systems must account for these data characteristics as well. Beyond its importance in safety-critical applications, robustness in terms of security, and in terms of reliability against hardware and software faults in particular, also helps build society's trust in these disruptive technologies.

To achieve this envisioned robustness, we need to revisit problems such as design, verification, architecture, scheduling and allocation policies, and optimization, among many others, to determine the most efficient, secure, and reliable way of implementing these novel applications within a robust, resource-constrained system, which may or may not be connected. This special issue therefore addresses a key aspect of fog- and edge-based ML: robustness, as defined above, under resource-constrained scenarios.
The special issue presents emerging work on how to design robust systems, in terms of reliability, fault tolerance, and security, while operating with limited resources, possibly in the presence of harsh environments that may eliminate connectivity and corrupt the input data.