Abstract

The role of music in the driving process has been discussed in the context of driver assistance as an element of safety and comfort. In this document we present the development of an audio recommender system for drivers, based on facial expression analysis. The system aims to increase driver attention through the selection of specific music pieces. For this pilot study, we first present an introduction to audio recommender systems and briefly explain how our facial expression analysis system works. During the driving course, the subjects (seven participants between 19 and 25 years old) were stimulated with a chosen set of audio compositions while their facial expressions were captured by a camera mounted on the car's dashboard. Once the videos were captured and collected, we analysed them with the FACET™ module of the biometric capture platform iMotions™, which provides the expression analysis of the subjects. The analysed data were postprocessed and modelled on a quadratic surface, which was optimized with respect to the known cepstrum and tempo of the songs and the average evidence of emotion. The results showed very different optimal points for each subject, indicating that different types of music optimize driving attention for different people. This work is a first step towards a music recommendation system capable of modulating a subject's attention while driving.
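The abstract's modelling step, fitting a quadratic surface over song cepstrum and tempo against average emotion evidence and locating its optimum, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the least-squares formulation, and the stationary-point computation are assumptions about how such a fit could be done.

```python
import numpy as np

def fit_quadratic_surface(cepstrum, tempo, emotion_evidence):
    """Least-squares fit of z = a + b*x + c*y + d*x**2 + e*x*y + f*y**2,
    where x is the cepstrum feature, y the tempo, and z the average
    evidence of emotion."""
    x, y, z = map(np.asarray, (cepstrum, tempo, emotion_evidence))
    A = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coeffs

def stationary_point(coeffs):
    """Solve grad z = 0, i.e. [[2d, e], [e, 2f]] @ [x, y] = -[b, c],
    to get the candidate optimal (cepstrum, tempo) point."""
    a, b, c, d, e, f = coeffs
    H = np.array([[2 * d, e], [e, 2 * f]])
    return np.linalg.solve(H, -np.array([b, c]))
```

Per the abstract, this stationary point differs per subject, which is what motivates a personalized recommender rather than a single global playlist.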

Highlights

  • The role of music listening in the vehicle while driving represents an important element of the design of the user experience

  • The iMotions FACET module [10] is used for emotion recognition from facial expressions

  • These differences indicate variability in the number of emotions expressed across the driving task. This could be associated with environmental conditions, but the differences found in every subject indicate a personal preference in music, since all subjects had similar driving conditions

Introduction

The role of music listening in the vehicle while driving represents an important element of the design of the user experience. Kinoshita [5] describes a system based on the evaluation of a visually simulated scenario for the selection of a playlist with a specific musical genre. This idea was improved by Krishnan [6], who included the song's metadata, its musical features, and the driver's demographic data. A decrease in the number of emotions detected while listening to music could be used as an approximation of the effectiveness of the soundtrack heard by the user as an attention modulator. For this purpose, the iMotions FACET module [10] is used for emotion recognition from facial expressions.
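The attention proxy described above, counting how many emotions are detected over a drive, could be computed from a per-frame evidence trace like so. This is a hedged sketch: FACET outputs per-frame evidence scores per emotion channel, but the event definition (a contiguous run of frames above a threshold) and the threshold value here are illustrative assumptions, not FACET defaults.

```python
def count_emotion_events(evidence, threshold=1.0):
    """Count contiguous runs of frames where the per-frame evidence
    score for one emotion channel exceeds `threshold`; each run is
    treated as one expressed-emotion event. The threshold of 1.0 is
    an illustrative choice, not a documented FACET default."""
    events, above = 0, False
    for score in evidence:
        if score > threshold and not above:
            events += 1  # rising edge: a new emotion event begins
        above = score > threshold
    return events
```

Fewer events for a given soundtrack would then be read, under this study's hypothesis, as that soundtrack being a more effective attention modulator for that subject.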

