Abstract

In the early days, machine translation and speech recognition or generation were developed as separate strands of technology, keeping machine translation, dialogue handling and speech processing isolated as autonomous systems, each dedicated to a single task. Over time, these technologies converged as they all became data-driven, while at the same time speech and language technologies had to be integrated into complex systems to meet the challenges of interaction between humans and machines. Today, human-machine and computer-mediated human-to-human dialogue systems combine language technologies, speech processing and advanced semantics to enable more natural and spontaneous dialogue. The dialogue module became the glue that brought these technologies together and intertwined them, raising their performance and usability to a level appropriate for the needs of real-world applications. Speech, particularly when enhanced with other modalities, remains the most common and natural way to interact. Applied together with localisation and machine translation, speech will provide access to digital services for all people, including the less computer-literate, and hence ease the advent of the multilingual Digital Single Market.
