Abstract

Our research group has recently developed a new data collection vehicle equipped with various sensors for the synchronous recording of multimodal data including speech, video, driving behavior, and physiological signals. Driver speech is recorded with 12 microphones distributed throughout the vehicle. Face images and a view of the road ahead are captured with three CCD cameras. Driving behavior signals including gas and brake pedal pressures, steering angles, vehicle velocities, and following distances are recorded. Physiological sensors are mounted to measure the drivers’ heart rate, skin conductance, and emotion‐based sweating on the palm of the hand and sole of the foot. The multimodal data are collected while driving on city roads and expressways during four different tasks: reading random four‐character alphanumeric strings, reading words on billboards and signs seen while driving, interacting with a spoken dialogue system to retrieve and play music, and talking on a cell phone with a human navigator using a hands‐free device. Data collection is currently underway. The multimodal database will be published in the future for various research purposes such as noise‐robust speech recognition in car environments, detection of driver stress while driving, and the prediction of driving behaviors for improving intelligent transportation systems.
