Abstract

In the upcoming 21st century, the demand for "global communication among speakers of different languages" will become increasingly pressing. According to recent survey results, spontaneous speech translation is expected to come into practical use around 2010–2020. Despite society's great expectations for efficient speech translation systems, a number of difficult acoustic and linguistic problems remain to be overcome. In particular, the variability of spontaneous speech makes it difficult to simply extend current work on read speech: speech rate can vary considerably, and disfluencies and diverse pronunciations (with ensuing spectral variations) often arise from differences in speaker characteristics and situations. This presentation describes research targets for the year 2000 in spontaneous speech translation, emphasizing improvements in speech recognition, prosody processing, synthesis of natural-sounding speech, and system integration for spoken-language translation between Japanese and other languages such as English, Korean, and German. This research is being carried out through international collaboration within the Consortium for Speech Translation Advanced Research (C-STAR II), which was established in 1994 to begin research activities aimed at an international experiment on multilanguage speech translation planned for 1999.
