This paper presents AdaptaFood, a system that adapts recipes to specific dietary constraints. Recipe adaptation is a common societal need, as dietary restrictions arise from medical conditions, allergies, and nutritional preferences. AdaptaFood produces recipe adaptations from two types of input: a recipe image, from which a fine-tuned image-captioning model extracts the ingredients, or a recipe object, from which the ingredients are extracted directly from the recipe's features. For the adaptation itself, we propose an attention-based sentence-representation model built on BERT to learn the semantics of the ingredients and thereby uncover hidden relations among them. We use the resulting representations for two tasks: (1) aligning food items from multiple sources to enrich recipe information, and (2) exploiting the semantic features embedded in the representation vectors to detect potential substitutes for a recipe's ingredients. The results show that the model successfully learns domain-specific knowledge after being re-trained on the food-computing domain. Combining this acquired knowledge with the adopted strategy for sentence representation and food replacement enables the system to generate high-quality adapted recipes and to handle the heterogeneity of food data from different origins.
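To make the substitute-detection idea concrete, the snippet below is a minimal sketch, not the authors' released code: it embeds ingredient names with a generic BERT encoder (the paper uses a re-trained, domain-specific model) and ranks candidate replacements by cosine similarity. The model name, mean-pooling choice, and candidate list are illustrative assumptions.

```python
# Sketch of embedding-based ingredient substitution (assumptions noted above).
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # stand-in for the fine-tuned model
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def embed(texts):
    """Mean-pool BERT's last hidden states over non-padding tokens."""
    batch = tokenizer(texts, padding=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state      # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1)       # (B, T, 1)
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)

def suggest_substitutes(ingredient, candidates, top_k=3):
    """Rank candidate ingredients by cosine similarity to the target."""
    vecs = embed([ingredient] + candidates)
    sims = torch.nn.functional.cosine_similarity(vecs[:1], vecs[1:])
    ranked = sorted(zip(candidates, sims.tolist()), key=lambda p: -p[1])
    return ranked[:top_k]

# Hypothetical usage: find dairy-free replacements for cow's milk.
print(suggest_substitutes("cow's milk", ["oat milk", "soy milk", "olive oil", "butter"]))
```

In the full system, a dietary constraint would first filter the candidate set (e.g., excluding dairy items) before the similarity ranking selects the closest admissible substitute.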