Abstract

Datasets are crucial when training deep neural networks. When a dataset is unrepresentative, the trained model is prone to bias because it cannot generalise to real-world settings. This is particularly problematic for models trained in specific cultural contexts, which may not represent a wide range of races and thus fail to generalise. Driver drowsiness detection is a case in point: many publicly available datasets are unrepresentative because they cover only certain ethnicity groups. Traditional augmentation methods cannot improve a model's performance when it is tested on other groups with different facial attributes, and building new, more representative datasets is often challenging. In this paper, we introduce a novel framework that boosts drowsiness-detection performance for different ethnicity groups. Our framework improves a Convolutional Neural Network (CNN) trained for prediction by using Generative Adversarial Networks (GANs) for targeted data augmentation, guided by a population-bias visualisation strategy that groups faces with similar facial attributes and highlights where the model is failing. A sampling method then selects the faces on which the model performs poorly, and these are used to fine-tune the CNN. Experiments show the efficacy of our approach in improving driver drowsiness detection for under-represented ethnicity groups: models trained on publicly available datasets are compared with a model trained using the proposed data augmentation strategy. Although developed in the context of driver drowsiness detection, the proposed framework is not limited to this task and can be applied to other applications.
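The abstract describes a two-step remediation loop: sample the faces the current CNN handles poorly, then fine-tune on GAN-generated faces for those groups. Below is a minimal sketch of that loop, assuming PyTorch, an already-trained drowsiness CNN, and a pre-trained conditional face generator; the function and parameter names (`select_hard_examples`, `finetune_with_gan_augmentation`, `generator(z, labels)`, the confidence `threshold`) are hypothetical illustrations, not the authors' implementation.

```python
# Sketch (assumption, not the paper's code) of targeted GAN augmentation:
# 1) find faces the CNN misclassifies or predicts with low confidence,
# 2) fine-tune the CNN on synthetic faces generated for those labels.

import torch
import torch.nn as nn

def select_hard_examples(model, loader, device, threshold=0.6):
    """Collect faces on which the current CNN performs poorly
    (wrong prediction or low confidence), mirroring the sampling step."""
    model.eval()
    hard_images, hard_labels = [], []
    with torch.no_grad():
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            probs = torch.softmax(model(images), dim=1)
            conf, preds = probs.max(dim=1)
            mask = (preds != labels) | (conf < threshold)
            hard_images.append(images[mask].cpu())
            hard_labels.append(labels[mask].cpu())
    return torch.cat(hard_images), torch.cat(hard_labels)

def finetune_with_gan_augmentation(model, generator, hard_labels,
                                   device, epochs=5, batch_size=32, z_dim=128):
    """Fine-tune the CNN on synthetic faces from an assumed pre-trained
    conditional generator, targeted at the poorly handled groups."""
    model.train()
    optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)
    criterion = nn.CrossEntropyLoss()
    for _ in range(epochs):
        perm = torch.randperm(len(hard_labels))
        for start in range(0, len(hard_labels), batch_size):
            labels = hard_labels[perm[start:start + batch_size]].to(device)
            z = torch.randn(len(labels), z_dim, device=device)
            with torch.no_grad():
                synthetic_faces = generator(z, labels)  # conditional GAN samples
            loss = criterion(model(synthetic_faces), labels)
            optimiser.zero_grad()
            loss.backward()
            optimiser.step()
    return model
```

In practice the real hard examples could be mixed into the fine-tuning batches alongside the synthetic faces; the sketch keeps only the GAN-generated samples to highlight the augmentation step itself.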

Highlights

  • The ability of Artificial Intelligence (AI) systems to automate decision-making in daily human life is increasing rapidly

  • When it comes to driver drowsiness detection, there are a limited number of publicly available training datasets, and some datasets are not published because of security and privacy concerns that prevent the publication of people’s faces

  • In this paper, we introduce a novel framework that can be used to boost the performance of driver drowsiness detection models by reducing bias in the training dataset


Summary

INTRODUCTION

The ability of Artificial Intelligence (AI) systems to automate decision-making in daily human life is increasing rapidly. These systems influence how humans interact with the real world and are transforming the future. Their decision-making capabilities typically rely on large training datasets from which useful patterns are learned and extracted in an automated way. Nearly 3,700 people die on roads every day. This is a particular concern in Africa, which has only 2% of the world’s cars yet accounts for 20% of road deaths, the highest accident rate in the world [4]. Driver drowsiness is a significant contributor to these accidents, motivating automated detection systems. CNN architectures require a large amount of training data to learn a suitable representation for a given task. When it comes to driver drowsiness detection, there are a limited number of publicly available training datasets, and some datasets are not published because of security and privacy concerns that prevent the publication of people’s faces.

