Abstract

Negative effects of inattention on task performance can be seen in many domains of human behavior, such as traffic, work, and sports. In traffic, inattention is one of the most frequently cited causal factors in accidents. In order to identify inattention and mitigate its negative effects, there is a need for quantifying the attentional demands of dynamic tasks, with a credible basis in cognitive modeling and neuroscience. Recent developments in cognitive science have led to theories of cognition which suggest that the brain is an advanced prediction engine. The function of this prediction engine is to support perception and action by continuously matching incoming sensory input against top-down predictions of that input, generated by hierarchical models of the statistical regularities and causal relationships in the world. Based on the capacity of this predictive processing framework to explain a wide range of mental phenomena and neural data, we suggest that it also provides a plausible theoretical and neural basis for modeling attentional demand and attentional capacity “in the wild” in terms of uncertainty and prediction error. We outline a predictive processing approach to the study of attentional demand and inattention in driving, based on neurologically inspired theories of uncertainty processing and on experimental research combining brain imaging, visual occlusion, and computational modeling. A proper understanding of uncertainty processing would enable comparison of a driver's uncertainty to a normative level of appropriate uncertainty, and would thereby improve the definition and detection of inattentive driving. This is the necessary first step toward applications such as attention-monitoring systems for conventional and semi-automated driving.
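
To make the predictive-processing loop concrete, the following minimal sketch (our illustration, not a model from the paper) tracks a single scalar state, such as the distance to a lead vehicle, under Gaussian assumptions; the class name ScalarPredictor and all parameter values are hypothetical.

```python
# Minimal sketch of precision-weighted predictive processing for one
# scalar state (e.g., distance to a lead vehicle). Illustrative
# assumptions throughout: under Gaussian noise this reduces to a
# one-dimensional Kalman filter, not the authors' actual model.

class ScalarPredictor:
    def __init__(self, x0=0.0, var0=1.0, process_var=0.1, obs_var=0.05):
        self.x = x0                       # top-down prediction (belief mean)
        self.var = var0                   # uncertainty of that prediction
        self.process_var = process_var    # assumed volatility of the world
        self.obs_var = obs_var            # assumed sensory noise

    def predict(self):
        """Roll the generative model forward one step; without sensory
        input, uncertainty accumulates at the rate of world volatility."""
        self.var += self.process_var
        return self.x

    def update(self, observation):
        """Match bottom-up input against the top-down prediction; the
        precision-weighted prediction error corrects the belief and
        shrinks uncertainty."""
        error = observation - self.x                  # prediction error
        gain = self.var / (self.var + self.obs_var)   # precision weighting
        self.x += gain * error
        self.var *= 1.0 - gain
        return error
```

During a visual occlusion only predict() runs, so uncertainty accumulates; each glance triggers update(), which shrinks it. On this reading, attentional demand corresponds to how quickly uncertainty accumulates between glances.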

Highlights

  • “The output of the system is measured, and understood, but it is extremely difficult to specify what the input is that results in the observed output.”

  • How much attention is appropriate, and when? How should the “amount” of attention be defined in the first place? We propose that this fundamental question can be most fruitfully approached from the point of view of the unifying theory of predictive processing (Clark, 2013, 2015; Friston, 2018).

  • We have introduced a definition of attention as appropriate uncertainty in predictive processing, with an application to driving under conditions of intermittent visual sampling (see the sketch after this list).
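
To illustrate how attention as “appropriate uncertainty” could be operationalized for inattention detection, here is a hypothetical decision rule of our own; the function name, argument names, and tolerance band are illustrative assumptions, not a published criterion.

```python
def classify_attention(driver_var, normative_var, tolerance=1.5):
    """Compare a driver's inferred uncertainty to a normative level.

    Hypothetical sketch: all names and the tolerance band are
    illustrative assumptions.

    driver_var    -- uncertainty implied by the driver's sampling behavior
    normative_var -- task-appropriate uncertainty for the situation
    tolerance     -- acceptance band around the normative level
    """
    if driver_var > tolerance * normative_var:
        return "inattentive: uncertainty too high (under-sampling)"
    if driver_var < normative_var / tolerance:
        return "over-attentive: uncertainty too low (over-sampling)"
    return "appropriate attention"
```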

Summary

INTRODUCTION

“The output of the system is measured, and understood, but it is extremely difficult to specify what the input is that results in the observed output.” Based on observed feedback (prediction error), attentional control of top-down processes (sampling the generative models) should adapt the number of hypotheses and their dispersion rate so that they are appropriately “calibrated” to the volatility of the situation, for future occlusions in similar situations. That is, the driver adapts uncertainty, and thereby visual sampling, on the basis of the size of the prediction error, which is informative about the volatility of the situation (i.e., “uncertainty in the world”). The adjustment of the dispersion rate of the hypotheses also works in the other direction: after repeated low prediction errors, it is appropriate to decrease the number of hypotheses, thereby eliminating the farthest hypotheses, and to increase the occlusion time.
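
A minimal sketch of this calibration loop, under our own illustrative assumptions (the error threshold, step size, and bounds are hypothetical, and the hypothesis set is summarized by a single occlusion-time variable):

```python
def adapt_occlusion_time(occlusion_time, prediction_error,
                         error_threshold=0.2, step=0.1,
                         min_time=0.2, max_time=3.0):
    """Adapt how long the driver may look away, given the prediction
    error observed at the end of the latest occlusion. All parameter
    values are illustrative assumptions.

    Large error  -> the situation is more volatile than assumed: widen
                    the hypothesis set and sample sooner (shorter occlusion).
    Small error  -> predictions are well calibrated: prune the farthest
                    hypotheses and tolerate a longer occlusion.
    """
    if abs(prediction_error) > error_threshold:
        occlusion_time -= step    # sample the world more often
    else:
        occlusion_time += step    # sample the world less often
    return max(min_time, min(occlusion_time, max_time))
```

Combined with a predictor like the one sketched under the Abstract, this closes the loop: the prediction error observed at each glance re-calibrates the tolerated occlusion duration to the situation's volatility.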

