Abstract

Multimodal interaction has grown richer in recent years thanks to the continuing evolution of mobile devices (smartphones and tablets) and their embedded sensors, including accelerometers, gyroscopes, global positioning systems, near field communication, and proximity sensors. Using such sensors, either sequentially or simultaneously, to interact with applications fosters intuitive interaction and user acceptance. Today, however, developing multimodal mobile systems that incorporate input and output modalities through sensors remains a long and difficult task. Although numerous model-based approaches have emerged that are meant to simplify the engineering of multimodal mobile applications, these applications are still generally designed and implemented in an ad hoc way. To explain this situation, the present paper reviews, discusses, and analyses the different model-based approaches proposed for developing multimodal mobile applications. The analysis considers not only the modelling and generation of mobile multimodality features, but also the inclusion of model-driven engineering features, such as guidance and model reuse, that support the appropriate use of models and allow developers to benefit from them. Our aim is to identify the current gaps that prevent model-based approaches from easing and accelerating the development of multimodal mobile applications.
