Abstract

Active sound design (ASD) techniques make it possible to play a designed sound in addition to the real engine or e-motor sound in a vehicle. However, because an ASD system typically offers only a few sounds designed by the manufacturer, it is difficult to satisfy the varied preferences of customers. This paper presents a method for providing an appropriate driving sound and soundscape in an electric vehicle in real time, according to the driver's emotion and the driving environment. For this purpose, we study how to construct a driving sound library from various sound sources and how to recognize the driver's total emotion from multimodal data such as facial expression, heart rate, and electrodermal activity, using convolutional neural network (CNN) and support vector machine (SVM) algorithms. We then discuss how to generate the driving sound of the electric vehicle according to the recognized emotion, so that the vehicle's ASD system provides a personalized driving sound suited to the driver's total emotion in real time. Additionally, we study how to recognize the driving environment from the exterior camera image and match a soundscape (e.g., sound effects and background music) for playback through the audio amplifier, again using CNN and machine-learning algorithms. Finally, we demonstrate a prototype system and report participants' responses in a real driving situation. This system is expected to provide a new user experience through personalized sound in an electric vehicle by understanding the customer's feelings and driving situation.
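To make the pipeline concrete, the sketch below illustrates the general pattern the abstract describes: CNN-derived facial features are fused with physiological signals (heart rate, electrodermal activity) and classified with an SVM, and the predicted emotion selects a driving-sound preset. This is a minimal illustration under assumptions, not the authors' implementation: the emotion label set, the 128-dimensional face embedding (stubbed here with random vectors in place of a real CNN), and the simple concatenation-based fusion are all hypothetical choices.

```python
# Illustrative sketch only: multimodal emotion classification with an SVM,
# in the spirit of the paper's CNN + SVM approach. Label set, feature
# dimensions, and fusion scheme are assumptions, not the authors' design.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

EMOTIONS = ["calm", "excited", "stressed"]  # hypothetical emotion classes

def fuse_features(face_embedding, heart_rate_bpm, eda_microsiemens):
    """Concatenate a CNN face embedding with physiological scalars."""
    return np.concatenate([face_embedding, [heart_rate_bpm, eda_microsiemens]])

rng = np.random.default_rng(0)
# Stand-in training data: random 128-dim "face embeddings" plus plausible
# heart-rate and EDA ranges; a real system would use CNN features and
# labeled recordings of drivers.
X = np.stack([
    fuse_features(rng.normal(size=128),
                  rng.uniform(55, 110),     # heart rate in bpm
                  rng.uniform(0.1, 20.0))   # EDA in microsiemens
    for _ in range(300)
])
y = rng.integers(0, len(EMOTIONS), size=300)

# Standardize features so the physiological scalars and embedding
# dimensions are on comparable scales, then fit an RBF-kernel SVM.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)

# At runtime, the predicted emotion would index into the driving sound
# library to pick a personalized sound preset for the ASD system.
sample = fuse_features(rng.normal(size=128), 92.0, 8.4)
print(EMOTIONS[clf.predict(sample.reshape(1, -1))[0]])
```

In a deployed system, the same pattern would presumably run continuously, with the classifier output smoothed over time so the driving sound does not switch abruptly on momentary misclassifications.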
