Abstract

In a lipreading system, lip extraction is a fundamental step that directly affects the final speech recognition results. However, most existing systems need to detect facial features as prior knowledge to construct the initial contour, and any erroneous feature detection leads to an incorrect lip extraction. To address this problem, this paper presents a new framework that integrates a global region-based Active Contour Model (ACM) with a localized region-based ACM. With the proposed framework, the initial contour does not need to be specified from the speaker's facial features before extracting the lip, so erroneous extractions caused by an incorrect initial contour are effectively eliminated. Experimental results demonstrate the effectiveness of the proposed method in comparison with existing methods.
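
The abstract only names the two ingredients of the framework, so the following is a minimal sketch of the general idea rather than the authors' implementation: a global region-based pass (Chan-Vese style, assumed here) locates the lip from a generic centred initial contour with no facial-feature detection, and a localized region-based pass (Lankton-style local means, also an assumption) then refines the boundary. All function names and parameters below are illustrative.

```python
import numpy as np
from scipy import ndimage


def chan_vese_step(phi, img, mu=0.2, dt=0.5):
    """One gradient-descent step of a global region-based (Chan-Vese style) energy.

    phi : level-set function (negative inside the contour).
    img : grayscale lip-region image, float in [0, 1].
    """
    inside = phi < 0
    c_in = img[inside].mean() if inside.any() else 0.0
    c_out = img[~inside].mean() if (~inside).any() else 0.0
    # Region force: pixels closer to the inside mean pull the contour outward.
    force = (img - c_out) ** 2 - (img - c_in) ** 2
    phi = phi + dt * force
    # Crude curvature regularisation via Gaussian smoothing of the level set.
    return ndimage.gaussian_filter(phi, mu)


def localized_step(phi, img, radius=9, dt=0.5):
    """One step of a localized region-based energy using local neighbourhood means.

    Local means are approximated with a uniform filter of the given radius,
    so each pixel is compared against statistics of its own neighbourhood.
    """
    inside = (phi < 0).astype(float)
    size = 2 * radius + 1
    local_in = ndimage.uniform_filter(img * inside, size) / (
        ndimage.uniform_filter(inside, size) + 1e-8)
    local_out = ndimage.uniform_filter(img * (1 - inside), size) / (
        ndimage.uniform_filter(1 - inside, size) + 1e-8)
    force = (img - local_out) ** 2 - (img - local_in) ** 2
    return phi + dt * force


def segment_lip(img, n_global=200, n_local=100):
    """Global pass finds a coarse lip region; localized pass refines the boundary."""
    h, w = img.shape
    yy, xx = np.mgrid[:h, :w]
    # Generic centred initial contour -- no facial-feature detection required.
    phi = np.hypot(yy - h / 2, xx - w / 2) - min(h, w) / 4
    for _ in range(n_global):
        phi = chan_vese_step(phi, img)
    for _ in range(n_local):
        phi = localized_step(phi, img)
    return phi < 0  # binary lip mask
```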
