Abstract
A major difficulty in the development of methodologies for segmentation and classification in automatic recognition of continuous speech is the determination of objective, reliable performance statistics. Compounding this difficulty is the large amount of data necessary to make reasonably accurate performance estimates. The system to be described provides for concurrent objective evaluation of up to five independent segmentation/classification methods against a single, carefully transcribed referent. A basic assumption of the evaluator is that the systems to be compared, as well as the referent, can each use the same digital data as input. Violation of this assumption would lead to time-shift errors, and objective comparison among systems would be exceedingly difficult. For segmentation, the evaluator provides first-order statistics, at the phonetic, class, and summary levels, in the form of highly concise tables for the following four types of errors: 1) missed events; 2) adventitious events; 3) misplaced events; and 4) adventitious and misplaced events. For classification, first-order statistics are derived in the form of confusion matrices at the phonetic, class, and summary levels. While the system is still under development, it is operational and in current use. Examples of output will be presented.
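The boundary-scoring scheme described above can be illustrated with a minimal sketch. The abstract does not specify how hypothesized events are aligned to the referent, so the greedy nearest-neighbor matching and the `tight`/`loose` tolerances below are assumptions for illustration only, not the paper's actual procedure:

```python
def score_segmentation(reference, hypothesis, tight=10, loose=30):
    """Tally boundary errors of the four types named in the abstract.

    `reference` and `hypothesis` are sorted lists of event times (e.g. ms).
    A hypothesized event within `tight` of an unmatched referent event is
    counted correct; within `loose`, misplaced; otherwise, adventitious.
    Referent events left unmatched are missed. The tolerances and the
    greedy matching are illustrative assumptions, not the paper's method.
    """
    matched_ref = set()
    counts = {"correct": 0, "misplaced": 0, "adventitious": 0, "missed": 0}
    for h in hypothesis:
        # Find the nearest still-unmatched referent event.
        candidates = [(abs(h - r), i) for i, r in enumerate(reference)
                      if i not in matched_ref]
        if candidates:
            dist, i = min(candidates)
            if dist <= tight:
                counts["correct"] += 1
                matched_ref.add(i)
                continue
            if dist <= loose:
                counts["misplaced"] += 1
                matched_ref.add(i)
                continue
        counts["adventitious"] += 1
    counts["missed"] = len(reference) - len(matched_ref)
    return counts
```

The paper's fourth category, adventitious and misplaced events taken together, would then be the sum of those two tallies.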
IEEE Transactions on Acoustics, Speech, and Signal Processing