Abstract

We have been studying augmented reality (AR)-based gustatory manipulation interfaces and previously proposed a gustatory manipulation interface using generative adversarial network (GAN)-based real-time image-to-image translation. Unlike three-dimensional (3D) food model-based systems, which only change the color or texture pattern of a particular type of food in an inflexible manner, our GAN-based system changes the appearance of food into multiple types of food in real time flexibly, dynamically, and interactively. In the present paper, we first describe in detail a user study on a vision-induced gustatory manipulation system using a 3D food model and report its successful experimental results. We then summarize the problems identified in the 3D model-based system and describe implementation details of the GAN-based system. Finally, we report in detail the main user study, in which we investigated the impact of the GAN-based system on gustatory sensations and food recognition when somen noodles were turned into ramen noodles or fried noodles, and steamed rice into curry and rice or fried rice. The experimental results revealed that our system successfully manipulates gustatory sensations to some extent and that its effectiveness appears to depend on the original and target types of food as well as each individual's experience with the food.
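To make the described pipeline concrete, the following is a minimal sketch (not the authors' implementation) of a real-time GAN-based image-to-image translation loop: each camera frame is normalized, passed through a generator network, and the translated frame is displayed back as the AR view. The `TinyGenerator` network, the checkpoint filename, and the 256x256 input size are placeholders; a real system would load a trained food-to-food translation generator here.

```python
# Hypothetical sketch of a real-time GAN-based food-appearance translation loop.
import cv2
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Placeholder standing in for a trained image-to-image generator."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)

def frame_to_tensor(frame):
    # BGR uint8 HxWx3 -> float tensor 1x3xHxW scaled to [-1, 1]
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    t = torch.from_numpy(rgb).float().permute(2, 0, 1) / 127.5 - 1.0
    return t.unsqueeze(0)

def tensor_to_frame(t):
    # 1x3xHxW in [-1, 1] -> BGR uint8 image for display
    img = ((t.squeeze(0).permute(1, 2, 0) + 1.0) * 127.5).clamp(0, 255)
    return cv2.cvtColor(img.byte().numpy(), cv2.COLOR_RGB2BGR)

def main():
    device = "cuda" if torch.cuda.is_available() else "cpu"
    generator = TinyGenerator().to(device).eval()
    # A trained translation model would be loaded here, e.g. (hypothetical file):
    # generator.load_state_dict(torch.load("somen_to_ramen.pt"))

    cap = cv2.VideoCapture(0)
    with torch.no_grad():
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            frame = cv2.resize(frame, (256, 256))
            fake = generator(frame_to_tensor(frame).to(device))
            cv2.imshow("translated food", tensor_to_frame(fake.cpu()))
            if cv2.waitKey(1) & 0xFF == ord("q"):
                break
    cap.release()
    cv2.destroyAllWindows()

if __name__ == "__main__":
    main()
```

In an AR headset setting, the translated frame would be composited into the user's view instead of shown in a window, but the per-frame capture, translate, and render loop is the same.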
