Abstract

Most current augmented reality (AR) systems can combine 3D virtual scenes with live reality, and users typically interact with a system's 3D objects through image recognition. Although image-recognition technology has matured enough to support such interaction, the process is usually limited by the number of patterns available for identification, which makes it inconvenient to use. To provide a more flexible mode of interactive manipulation, this study introduces a speech-recognition mechanism that allows users to operate 3D objects in an AR system simply by speaking. For the implementation, Unity3D serves as the main development environment and the AR e-Desk as the main development platform; the AR e-Desk interacts through the reacTIVision identification mechanism and its markers. We use Unity3D to build the required 3D virtual scenes and objects on the AR e-Desk and integrate the Google Cloud Speech suite into the AR e-Desk system to develop the speech-interaction mechanism, yielding an intelligent AR system.
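The speech-interaction pipeline described above (transcript in, object manipulation out) can be sketched as a simple command mapper. The sketch below is a hedged illustration, not the paper's implementation: the object model, command vocabulary, and function names are assumptions, and in the actual system the transcript would come from Google Cloud Speech and the transform updates would be applied to Unity3D scene objects.

```python
# Hypothetical sketch: map a recognized speech transcript to a
# manipulation of a 3D object's transform state. In the real system,
# the transcript would arrive from Google Cloud Speech and the state
# would live on a Unity3D GameObject; both are simplified here.
from dataclasses import dataclass


@dataclass
class Object3D:
    """Minimal stand-in for a 3D scene object's transform state."""
    x: float = 0.0
    y: float = 0.0
    z: float = 0.0
    yaw: float = 0.0    # rotation about the vertical axis, in degrees
    scale: float = 1.0


def apply_command(obj: Object3D, transcript: str) -> Object3D:
    """Interpret a speech transcript as a simple manipulation command.

    The keyword vocabulary here is illustrative; a deployed system
    would use a richer grammar or intent model.
    """
    words = transcript.lower().split()
    if "rotate" in words:
        obj.yaw = (obj.yaw + 90.0) % 360.0
    elif "zoom" in words and "in" in words:
        obj.scale *= 1.5
    elif "zoom" in words and "out" in words:
        obj.scale /= 1.5
    elif "move" in words and "left" in words:
        obj.x -= 1.0
    elif "move" in words and "right" in words:
        obj.x += 1.0
    return obj


cube = Object3D()
apply_command(cube, "rotate the cube")
apply_command(cube, "zoom in")
print(cube.yaw, cube.scale)  # 90.0 1.5
```

Decoupling speech recognition (cloud service) from command interpretation (local mapper) like this keeps the 3D manipulation logic testable without network access.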
