Abstract

One of the most difficult challenges in developing an Autonomous Vehicle (AV) is the requirement that it perform no worse than a human driver. Human drivers use both eyes and ears as visual and auditory sensors. However, state-of-the-art AV sensors do not capture acoustic signals, and as a result, decision-making algorithms do not exploit the information contained in the acoustic scene. This work focuses on detecting the arrival direction of Emergency Vehicles (EVs). The EV's siren signal is recorded using a microphone array, and adjusted Multiple Signal Classification (MUSIC) algorithms are used to estimate the Direction of Arrival (DoA) of the EV. Experiments show DoA estimation results using Microelectromechanical System (MEMS) microphones. The performance of two alternatives, an internal and an external microphone array architecture, was investigated. In the case of external microphones, a free-field model was used to calculate the steering vectors with high spatial resolution. However, external microphones require protection against rain, wind, etc., and for this reason the option of using internal microphones was also examined. In the case of internal microphone arrays, the free-field model is no longer valid, and estimated Relative Transfer Functions (RTFs) are therefore incorporated instead. The RTFs are estimated from signals generated by an external source at a finite set of locations and recorded by the internal microphones in a quiet environment. Consequently, the spatial scan of the MUSIC algorithm is performed with a lower spatial resolution for the internal microphones than for the external ones. The RTF estimation accuracy obtained with a Wiener filter is compared to that obtained with the Generalized Eigenvalue Decomposition (GEVD) method.

Experiments show results for the external microphone array mounted on the roof of a car parked near a hospital. The EVs were ambulances recorded while arriving at or departing from the hospital. For example, the case where an EV approached from the opposite lane and made its way to the hospital is shown: at first the DoA comes from the frontal direction and then switches to the rear. This work shows the MUSIC spectrum, the estimated DoA, and the selected frequency for these cases. It is shown that at a certain time the peak of the MUSIC spectrum shifted from values near 360° to values near 180°. In this case, the decision of an AV should be to continue normal driving and not to yield to the EV. Due to a mirroring effect of the porch hardware physically placed on the sides of the microphone array board, in cases where the EV was on the left side, the peak values appeared at angles corresponding to the right side. Nevertheless, it was easy to determine whether the EV was in front of or behind the car.

Additional experiments show results using internal microphones, with only the front and back RTFs incorporated as steering vectors. The MUSIC spectrum and DoA estimation results are shown for different DoAs of the EV. Two internal microphone array configurations were demonstrated: a broadside array and an end-fire array. The results show that with the end-fire array it was impossible to determine whether the EV was behind or in front of the car; the DoA was estimated more accurately with the broadside array.
This result was surprising, since one would expect the symmetry of the broadside array around the driving direction to cause an ambiguity between waves approaching from the front and from the back. However, the RTFs at each microphone were estimated with less distortion using the broadside array than using the end-fire array. Nevertheless, DoA estimation was more difficult, and the angular resolution lower, with the internal microphone array than with the external one.
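To make the narrowband MUSIC scan with free-field steering vectors (the external-array case) concrete, the sketch below estimates the DoA of a single tonal source recorded by a small planar array. The array geometry, the 1 kHz "siren" tone, and all function names are illustrative assumptions and not taken from the paper; it is a minimal sketch of the technique, not the authors' implementation.

```python
import numpy as np

def free_field_steering(theta, freq, mic_positions, c=343.0):
    """Free-field (plane-wave) steering vector for a planar array.

    theta: candidate azimuth in radians
    freq: narrowband frequency in Hz (e.g. the dominant siren tone)
    mic_positions: (M, 2) microphone x/y coordinates in metres
    """
    direction = np.array([np.cos(theta), np.sin(theta)])
    delays = mic_positions @ direction / c           # per-microphone time delays
    return np.exp(-2j * np.pi * freq * delays)       # shape (M,)

def music_spectrum(snapshots, freq, mic_positions, n_sources=1, n_angles=360):
    """Narrowband MUSIC pseudo-spectrum over a 360-degree azimuth scan.

    snapshots: (M, T) complex snapshots of the selected frequency bin.
    Returns (angles_deg, pseudo_spectrum).
    """
    M, T = snapshots.shape
    R = snapshots @ snapshots.conj().T / T           # sample covariance
    _, eigvecs = np.linalg.eigh(R)                   # eigenvalues in ascending order
    En = eigvecs[:, : M - n_sources]                 # noise subspace
    angles = np.deg2rad(np.arange(n_angles))
    spectrum = np.empty(n_angles)
    for i, th in enumerate(angles):
        a = free_field_steering(th, freq, mic_positions)
        # Large when a(theta) is (nearly) orthogonal to the noise subspace.
        spectrum[i] = 1.0 / np.real(a.conj() @ En @ En.conj().T @ a)
    return np.rad2deg(angles), spectrum

# Hypothetical example: 4-microphone square array, source at 120 deg, 1 kHz tone.
mics = 0.05 * np.array([[1, 1], [1, -1], [-1, -1], [-1, 1]], dtype=float)
rng = np.random.default_rng(0)
a_true = free_field_steering(np.deg2rad(120.0), 1000.0, mics)
X = np.outer(a_true, rng.standard_normal(200) + 1j * rng.standard_normal(200))
X += 0.1 * (rng.standard_normal(X.shape) + 1j * rng.standard_normal(X.shape))
angles, P = music_spectrum(X, 1000.0, mics)
print("estimated DoA:", angles[np.argmax(P)], "deg")
```

In practice the scanning frequency would be selected from the recorded siren spectrum rather than fixed in advance, as the abstract indicates with its reference to the selected frequency.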
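For the internal-array case, the following sketch outlines one common way to estimate an RTF by GEVD (covariance whitening) from a calibration recording and a noise-only recording, and to scan a coarse set of pre-measured RTF steering vectors with MUSIC. The function names, the front/back labels, and the normalization to a reference microphone are assumptions for illustration; the Wiener-filter alternative mentioned in the abstract would instead estimate each RTF from cross-power spectra relative to the reference microphone.

```python
import numpy as np
from scipy.linalg import eigh

def estimate_rtf_gevd(R_noisy, R_noise, ref=0):
    """RTF estimate via generalized eigenvalue decomposition (covariance whitening).

    R_noisy: (M, M) microphone covariance while the calibration source plays.
    R_noise: (M, M) microphone covariance from a noise-only (quiet) segment.
    Returns the M-dimensional RTF, normalized to the reference microphone.
    """
    # Principal generalized eigenvector of the pencil (R_noisy, R_noise).
    _, eigvecs = eigh(R_noisy, R_noise)              # ascending eigenvalues
    phi = eigvecs[:, -1]
    g = R_noise @ phi                                # de-whitening step
    return g / g[ref]

def music_rtf_spectrum(snapshots, rtf_dict):
    """MUSIC pseudo-spectrum over a coarse grid of pre-measured RTF steering vectors.

    snapshots: (M, T) complex snapshots of the selected frequency bin.
    rtf_dict: {direction_label: (M,) RTF} measured at a finite set of locations,
              e.g. only "front" and "back" as in the internal-array experiments.
    """
    M, T = snapshots.shape
    R = snapshots @ snapshots.conj().T / T
    _, eigvecs = np.linalg.eigh(R)
    En = eigvecs[:, : M - 1]                         # noise subspace, single source
    return {
        label: 1.0 / np.real(a.conj() @ En @ En.conj().T @ a)
        for label, a in rtf_dict.items()
    }

# Hypothetical usage, given covariances measured during calibration:
# rtfs = {"front": estimate_rtf_gevd(R_front, R_quiet),
#         "back":  estimate_rtf_gevd(R_back, R_quiet)}
# scores = music_rtf_spectrum(X, rtfs)
# decision = max(scores, key=scores.get)
```

Because the candidate steering vectors come from a finite set of calibration locations, the angular grid is necessarily coarser than the free-field scan, which matches the lower spatial resolution reported for the internal arrays.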
