Abstract

Deep neural networks (DNNs) have revolutionized computer science and are now widely used for neuroscientific research. A heated debate has ensued about the usefulness of DNNs as neuroscientific models of the human visual system, centering on the extent to which certain shortcomings of DNNs are real failures and the extent to which they are redeemable. Here, we argue that the main problem is that we often do not understand which human functions need to be modeled and, thus, what counts as a falsification. Hence, there is a problem not only on the DNN side but also on the brain side, i.e., with the explanandum (the thing to be explained). For example, should DNNs reproduce illusions? We posit that we can make better use of DNNs by adopting a comparative-biology approach: focusing on the differences, rather than the similarities, between DNNs and humans to improve our understanding of visual information processing in general.

Highlights

  • Deep neural networks (DNNs) have revolutionized computer science and are widely used for neuroscientific research

  • We review the discussion of whether DNNs describe human brain processing and behavior well; that is, the focus is on the explanans

  • In the case of visual crowding, we know what we want to model and how we can validate and falsify models, but the same cannot be said for visual illusions

Summary

The explanans

Deep neural networks (DNNs) have revolutionized the field of computer vision, reaching or exceeding human performance in object recognition tasks (LeCun, Bottou, Bengio, & Haffner, 1998; Krizhevsky, Sutskever, & Hinton, 2012; Simonyan & Zisserman, 2015; He, Zhang, Ren, & Sun, 2015). This excellent performance and the analogy between DNNs and the primate visual cortex have caused a fierce discussion about the use of DNNs as neuroscientific models of the brain (e.g., Rosenholtz, 2017; VanRullen, 2017; Majaj & Pelli, 2018; Kubilius, 2018; Cichy & Kaiser, 2019; Kietzmann, McClure, & Kriegeskorte, 2019a; Richards et al., 2019; Firestone, 2020; Griffiths, 2020; Lindsay, 2020; Saxe, Nelli, & Summerfield, 2020; see DiCarlo, Zoccolan, & Rust, 2012). However, to what extent DNNs capture information processing in the human visual system remains unclear.
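
To make the performance claim above concrete, the sketch below shows the typical object-recognition setup behind such benchmarks: a pretrained ImageNet classifier applied to a single image. This is an illustrative example, not code from the paper; the choice of ResNet-50 via torchvision and the image path example.jpg are assumptions.

    import torch
    from torchvision import models, transforms
    from PIL import Image

    # Standard ImageNet preprocessing: resize, center-crop, normalize.
    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    # Pretrained ResNet-50 (illustrative choice of architecture and weights).
    model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
    model.eval()

    image = Image.open("example.jpg").convert("RGB")   # hypothetical image path
    batch = preprocess(image).unsqueeze(0)             # shape: (1, 3, 224, 224)

    with torch.no_grad():
        probs = model(batch).softmax(dim=1)            # 1000 ImageNet class probabilities
        top5 = probs.topk(5)                           # top-5 predictions

    print(top5.indices.tolist(), top5.values.tolist())

Human-versus-DNN comparisons in the cited benchmarks are typically based on aggregate top-1 or top-5 accuracy of such predictions over large test sets, rather than on single images.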

The explanandum
The neural explanandum
The psychophysical explanandum
Conclusions