Abstract

In dealing with the algorithmic aspects of intelligent systems, the analogy with the biological brain has always been attractive, and has often had a dual function. On the one hand, it has been an effective source of inspiration for their design, while, on the other hand, it has been used as the justification for their success, especially in the case of Deep Learning (DL) models. However, in recent years, inspiration from the brain has lost its grip on its first role, yet it continues to be proposed in its second role, although we believe it is also becoming less and less defensible. Outside the chorus, there are theoretical proposals that instead identify important demarcation lines between DL and human cognition, to the point of being even incommensurable. In this article we argue that, paradoxically, the partial indifference of the developers of deep neural models to the functioning of biological neurons is one of the reasons for their success, having promoted a pragmatically opportunistic attitude. We believe that it is even possible to glimpse a biological analogy of a different kind, in that the essentially heuristic way of proceeding in modern DL development bears intriguing similarities to natural evolution.

Highlights

  • The analogy between brain and computer has always been a driving force in the field of Artificial Intelligence (AI): on the one hand as a source of inspiration for the design of the algorithms underlying intelligent systems, and on the other as a justification for why some algorithmic technologies have been able to match human performance in solving certain tasks. Both of these accounts of the brain–computer analogy seem to be called into question by recent advances of modern AI based on deep learning techniques

  • It has emerged, on the one hand, that adopting algorithmic strategies other than those presumed to operate in the biological brain has led to wider success, with results that significantly exceed human performance; on the other hand, the analogy between modern artificial neural models and biological ones appears increasingly inconsistent [1] in physical structure, in the type of abstraction performed on the input data, and in the implemented algorithmic solutions, so much so that it can be argued the two cognitive models may be incommensurable [2]

  • We address the question of why the analogy between biological cognitive models and the algorithmic logics adopted by modern deep learning systems fails, embracing the emerging thesis of the incommensurability of the two cognitive models proposed by the scholar Beatrice Fazi [2], which stands in contrast to the position of many other scholars

Introduction

The analogy between brain and computer has always been a driving force in the field of Artificial Intelligence (AI): on the one hand as a source of inspiration for the design of the algorithms underlying intelligent systems, and on the other as a justification for why some algorithmic technologies have been able to achieve similar (or superior) performance to humans in solving certain tasks. Both of these accounts of the brain–computer analogy seem to be called into question by recent advances of modern AI based on deep learning techniques. For several applications there is a requirement known as “Explainable AI” (XAI) [3,4,5], which implies that the results of the solution can be understood by humans. This need for explanation has prompted research on automatic methods that provide interpretations of individual predictions made by DL models. Natural evolution, by contrast, through natural selection produces cognitive models that work but which can remain opaque to any attempt at explainability.
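To make the XAI idea concrete, the following is a minimal illustrative sketch (not a method from this article) of one common family of post-hoc interpretation techniques: occlusion-based feature attribution, which treats the model as a black box and scores each input feature by how much the prediction changes when that feature is replaced with a baseline value. The function names and the toy linear model here are hypothetical, chosen only for the example.

```python
import numpy as np

def black_box_model(x):
    # Stand-in for an opaque trained model: the explainer below
    # never looks at these weights, only at the model's outputs.
    w = np.array([0.1, 2.0, 0.0, -1.5])
    return float(x @ w)

def occlusion_importance(model, x, baseline=0.0):
    """Score each feature by the change in the model's prediction
    when that feature alone is replaced with a baseline value."""
    base_pred = model(x)
    scores = np.empty_like(x)
    for i in range(len(x)):
        perturbed = x.copy()
        perturbed[i] = baseline  # occlude one feature
        scores[i] = abs(base_pred - model(perturbed))
    return scores

x = np.ones(4)
scores = occlusion_importance(black_box_model, x)
print(scores)           # per-feature attribution scores
print(scores.argmax())  # the feature the prediction depends on most
```

Such perturbation-based explanations are appealing precisely because they require no access to the model's internals, which is why they are a common starting point when a DL model's decision process is otherwise opaque.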

Deep Learning and the Explainability Problem
Incommensurable Cognitive Models
Structural Issues
The Learning Paradigm
Information Abstraction
Variable Tuning and Natural Selection
A World of Different Cognitive Models
Discussion
Conclusions
