Despite the success of Deep Learning (DL), serious reliability issues such as non-robustness persist. An interesting question is whether these problems arise from insufficient tools or from fundamental limitations of DL. We study this question from the computability perspective by characterizing the limits imposed by the underlying hardware. For this, we focus on the class of inverse problems, which, in particular, encompasses any task of reconstructing data from measurements. On digital hardware, a conceptual barrier on the capabilities of DL for solving finite-dimensional inverse problems has in fact already been derived. This paper investigates the general computation framework of Blum-Shub-Smale (BSS) machines, which models the processing and storage of arbitrary real numbers. Although a corresponding real-world computing device does not exist, research and development towards real number computing hardware, usually referred to as "neuromorphic computing", has increased in recent years. In this work, we show that the framework of BSS machines does enable the algorithmic solvability of finite-dimensional inverse problems. Our results emphasize the influence of the considered computing model on questions of accuracy and reliability.
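For orientation, a minimal sketch of the standard finite-dimensional inverse problem formulation is given below; the notation is assumed for illustration and is not taken from the paper itself:

\[
  y = A x + e, \qquad A \in \mathbb{R}^{m \times n},\quad x \in \mathbb{R}^{n},\quad y \in \mathbb{R}^{m},\quad \|e\|_2 \le \varepsilon,
\]

where the task is to (approximately) reconstruct the unknown signal $x$ from the noisy measurements $y$.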