Abstract

User Experience Design (UX Design) focuses on how products actually affect the user's experience. In particular, the design of multi-modal interfaces for blind people makes products and services more flexible and natural to use, improving interaction by overcoming the constraints of any single interaction modality. There have been various attempts to help visually impaired people appreciate visual artwork, including multi-modal approaches. However, these methods convey only general information, such as edges and patterns perceived through touch, and are constrained by the limited availability and number of specially developed artworks. We propose a novel method that explains visual artworks through image caption generation using artificial intelligence (AI) to improve artwork accessibility. This method can objectively describe any Impressionist artwork, serving either as a standalone description for art interpretation by blind people or as a complement to tactile-based methods. We adopt an encoder-decoder architecture trained end-to-end with a deep neural network, and comprehensive experiments confirm the stability of the generated image captions on MS-COCO datasets stylized with Impressionism.
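
The abstract does not specify the exact architecture, but the standard encoder-decoder captioning setup it describes pairs a CNN image encoder with a recurrent language decoder. The following is a minimal sketch of that pattern in PyTorch; all names, layer choices, and hyperparameters here are illustrative assumptions, not the paper's reported configuration.

```python
# Minimal sketch of a CNN-encoder / LSTM-decoder image captioner (PyTorch).
# Architecture details are assumptions for illustration, not the paper's exact model.
import torch
import torch.nn as nn
import torchvision.models as models

class Encoder(nn.Module):
    """CNN encoder: maps an image to a fixed-length feature vector."""
    def __init__(self, embed_size):
        super().__init__()
        resnet = models.resnet50(weights=None)  # pretrained weights would be used in practice
        self.backbone = nn.Sequential(*list(resnet.children())[:-1])  # drop the classifier head
        self.fc = nn.Linear(resnet.fc.in_features, embed_size)

    def forward(self, images):
        with torch.no_grad():                  # backbone typically frozen during caption training
            feats = self.backbone(images)      # (B, 2048, 1, 1)
        return self.fc(feats.flatten(1))       # (B, embed_size)

class Decoder(nn.Module):
    """LSTM decoder: generates a caption conditioned on the image feature."""
    def __init__(self, embed_size, hidden_size, vocab_size):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_size)
        self.lstm = nn.LSTM(embed_size, hidden_size, batch_first=True)
        self.fc = nn.Linear(hidden_size, vocab_size)

    def forward(self, features, captions):
        # Prepend the image feature as the first "token" of the input sequence.
        inputs = torch.cat([features.unsqueeze(1), self.embed(captions)], dim=1)
        hiddens, _ = self.lstm(inputs)
        return self.fc(hiddens)                # per-step vocabulary logits

# Usage: encode a (style-transferred) image, then train the decoder with teacher forcing.
encoder, decoder = Encoder(256), Decoder(256, 512, vocab_size=10000)
images = torch.randn(4, 3, 224, 224)           # e.g. Impressionism-stylized MS-COCO images
captions = torch.randint(0, 10000, (4, 20))    # tokenized ground-truth captions
logits = decoder(encoder(images), captions)    # (4, 21, 10000)
```

In this setup, training on MS-COCO images stylized with Impressionism (as the abstract describes) only changes the input distribution; the encoder-decoder pipeline itself is unchanged.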
