Abstract
Driver distraction and fatigue are among the leading causes of severe traffic accidents; hence, driver inattention monitoring systems are crucial. Even with the growing development of advanced driver assistance systems and the introduction of level-3 autonomous vehicles, this task remains complex due to challenges such as illumination changes and dynamic backgrounds. Only a limited number of public datasets are available for reliably comparing and validating driver inattention monitoring methods. In this paper, we present a public, well-structured and complete dataset, named the Multiview, Multimodal and Multispectral Driver Action Dataset (3MDAD). The dataset is composed of two main sets: the first recorded in daytime and the second at nighttime. Each set consists of two synchronized data modalities, captured from both frontal and side views. More than 60 drivers were asked to execute 16 in-vehicle actions under a wide range of naturalistic driving settings. In contrast to other public datasets, 3MDAD offers multiple modalities, spectra and views under different time and weather conditions. To highlight the utility of our dataset, we analyze the driver action recognition results for each modality independently, as well as those obtained from several combinations of modalities.