Abstract

The paper presents a concept of a smart robotic trolley for supermarkets with a multimodal user interface that combines sign language recognition, acoustic speech recognition, and a touchscreen. Considerable progress in hand gesture recognition and automatic speech recognition in recent years has given rise to many human-computer interaction systems. Recognition of voiced speech and of isolated or static hand gestures is now quite accurate; continuous (dynamic) sign language recognition, however, remains an unresolved challenge. No automatic recognition system for Russian sign language currently exists, nor are there relevant data for model training. In the present research, we aim to fill this gap for Russian sign language. We present a Kinect 2.0-based software-hardware complex for collecting multimodal sign language databases with an optical video camera, an infrared camera, and a depth sensor. We describe the architecture of the developed software as well as some details of the collected database. The corpus is intended for further development of a Russian sign language recognition system, which will be embedded into a robotic trolley for supermarkets with gestural and speech interfaces. The architecture of the developed system is also presented in the paper.
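The abstract describes recording three sensor streams (optical video, infrared, and depth) into one multimodal corpus. A recurring practical step in such pipelines is aligning frames from independently clocked streams. The following is a minimal illustrative sketch of nearest-timestamp alignment, not the authors' implementation; the `Frame` structure, field names, and the 15 ms skew tolerance are assumptions for the example.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Frame:
    timestamp_ms: int  # capture time reported by the sensor, in milliseconds
    data: bytes        # raw frame payload (RGB, infrared, or depth)

def align_nearest(reference: List[Frame], other: List[Frame],
                  max_skew_ms: int = 15) -> List[Tuple[Frame, Frame]]:
    """Pair each reference frame with the temporally closest frame from
    another stream, discarding pairs whose skew exceeds the tolerance.
    Kinect 2.0 streams run at ~30 fps, i.e. frames arrive ~33 ms apart,
    so a 15 ms tolerance keeps at most one candidate per reference frame."""
    pairs: List[Tuple[Frame, Frame]] = []
    j = 0
    for ref in reference:
        # Advance through the other stream while the next candidate
        # is at least as close in time as the current one.
        while (j + 1 < len(other) and
               abs(other[j + 1].timestamp_ms - ref.timestamp_ms) <=
               abs(other[j].timestamp_ms - ref.timestamp_ms)):
            j += 1
        if other and abs(other[j].timestamp_ms - ref.timestamp_ms) <= max_skew_ms:
            pairs.append((ref, other[j]))
    return pairs
```

Because both streams are processed in timestamp order, the pointer `j` only moves forward, giving linear-time alignment over the whole recording session.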
