Abstract

This paper proposes a closed-loop direction and beamwidth steering algorithm for dual-microphone beamforming, driven by a maximum-confidence-measure criterion, to support robust speech recognition. The confidence measure reported by the back-end speech recognizer is fed back to the front end, where it automatically steers the microphone array toward the correct speaker direction and the appropriate array beamwidth. The technique lets users move around freely and directly improves overall system performance. Experimental results on a voice command task show that the proposed approach achieves superior recognition performance.
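
To make the closed loop concrete, the sketch below illustrates one way such a maximum-confidence search could be organized: each candidate (direction, beamwidth) setting is applied to the dual-microphone front end, the back-end recognizer returns a confidence score, and the setting with the highest score is kept. The functions `delay_and_sum` and `recognizer_confidence` are hypothetical stand-ins for the paper's actual beamformer and recognizer, which the abstract does not specify; this is a minimal illustration, not the authors' implementation.

```python
import numpy as np

def delay_and_sum(left, right, steering_delay):
    """Hypothetical dual-microphone delay-and-sum beamformer:
    delay the second channel by an integer number of samples and average."""
    shifted = np.roll(right, steering_delay)
    return 0.5 * (left + shifted)

def recognizer_confidence(signal):
    """Placeholder for the back-end recognizer's confidence measure.
    In the closed loop this would be the score reported for the decoded utterance;
    here a simple signal-energy proxy keeps the sketch self-contained."""
    return float(np.mean(signal ** 2))

def steer_by_max_confidence(left, right, candidate_delays, candidate_widths):
    """Closed-loop steering: evaluate each (direction, beamwidth) candidate,
    query the recognizer's confidence, and keep the maximizer."""
    best_delay, best_width, best_conf = None, None, -np.inf
    for delay in candidate_delays:
        for width in candidate_widths:
            # `width` is carried through only to show where beamwidth selection
            # would enter the search; how it reshapes the beam is design-specific.
            beamformed = delay_and_sum(left, right, delay)
            conf = recognizer_confidence(beamformed)
            if conf > best_conf:
                best_delay, best_width, best_conf = delay, width, conf
    return best_delay, best_width, best_conf
```

Under these assumptions, the recognizer's confidence acts as the sole feedback signal, so the front end needs no explicit speaker-localization stage: whichever steering setting maximizes the recognizer's own score is treated as the correct direction and beamwidth.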
