Abstract

The concept of “representation” is used broadly and uncontroversially throughout neuroscience, in contrast to its highly controversial status within the philosophy of mind and cognitive science. In this paper I first discuss the way that the term is used within neuroscience, in particular describing the strategies by which representations are characterized empirically. I then relate the concept of representation within neuroscience to one that has developed within the field of machine learning (in particular through recent work in deep learning or “representation learning”). I argue that the recent success of artificial neural networks on certain tasks such as visual object recognition reflects the degree to which those systems (like biological brains) exhibit inherent inductive biases that reflect the structure of the physical world. I further argue that any system that is going to behave intelligently in the world must contain representations that reflect the structure of the world; otherwise, the system must perform unconstrained function approximation which is destined to fail due to the curse of dimensionality, in which the number of possible states of the world grows exponentially with the number of dimensions in the space of possible inputs. An analysis of these concepts in light of philosophical debates regarding the ontological status of representations suggests that the representations identified within both biological and artificial neural networks qualify as legitimate representations in the philosophical sense.
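To make the dimensionality argument concrete (a worked example of my own, not drawn from the paper): if each of $d$ input dimensions is discretized into $k$ distinguishable values, the number of possible input states is

    N = k^d,  e.g.  k = 2, d = 100  =>  N = 2^{100} ≈ 1.27 × 10^{30}

so even a tiny 10 × 10 binary image admits more states than any learner could ever sample. Without inductive biases that constrain the space of functions to be considered, approximating the state-to-action mapping by exhaustive tabulation is hopeless.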

Highlights

  • The ontological status and epistemic utility of mental representations are topics of enduring debate within the philosophy of mind

  • My goal in this paper is to argue that empirical and computational work on neural representations provides important insights regarding philosophical questions about the ontological status of representations in the brain

  • I will argue that we can gain substantial traction in understanding representations from work in machine learning that has focused on the learning of representations, using the particular example of the recognition of visual objects

Introduction

The ontological status and epistemic utility of mental representations are topics of enduring debate within the philosophy of mind. Yet biological organisms (and, increasingly, artificial systems) manage to behave adaptively in a high-dimensional world. To understand how this is possible, it is useful to view the job of a learning machine (such as a neural network or a biological organism) in terms of function approximation: that is, approximating the function that relates the state of the world to the most appropriate actions, in a way that maximizes some objective (such as the long-run value of the outcomes of those actions).
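As a concrete illustration of function approximation in this sense, here is a minimal NumPy sketch (my own toy example, not the paper's model): a small two-layer network is trained by gradient descent to approximate a noisy mapping from scalar "world states" x to "actions" y, and its hidden layer serves as a learned representation of the input state.

    # Toy function approximation: a two-layer network fits a noisy
    # state-to-action mapping by full-batch gradient descent.
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy "world": scalar states x and a noisy target mapping y = f(x).
    x = rng.uniform(-3, 3, size=(256, 1))
    y = np.sin(x) + 0.1 * rng.normal(size=x.shape)

    # Two-layer network: y_hat = tanh(x W1 + b1) W2 + b2
    hidden = 32
    W1 = rng.normal(scale=0.5, size=(1, hidden))
    b1 = np.zeros(hidden)
    W2 = rng.normal(scale=0.5, size=(hidden, 1))
    b2 = np.zeros(1)

    lr = 0.05
    for step in range(2000):
        # Forward pass: h is the hidden "representation" of the state.
        h = np.tanh(x @ W1 + b1)
        y_hat = h @ W2 + b2
        err = y_hat - y                      # gradient of 0.5 * squared error
        # Backward pass (chain rule).
        dW2 = h.T @ err / len(x)
        db2 = err.mean(axis=0)
        dh = err @ W2.T * (1 - h ** 2)       # tanh derivative
        dW1 = x.T @ dh / len(x)
        db1 = dh.mean(axis=0)
        # Gradient descent update.
        W2 -= lr * dW2; b2 -= lr * db2
        W1 -= lr * dW1; b1 -= lr * db1

    h = np.tanh(x @ W1 + b1)
    print("final MSE:", float(((h @ W2 + b2 - y) ** 2).mean()))

The point of the sketch is simply that the hidden activations h constitute a learned intermediate representation of the state; with high-dimensional inputs and no architectural constraints, this kind of unconstrained fitting would require exponentially many samples, which is the curse of dimensionality noted in the abstract.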
