Abstract

Ingredient amounts are crucial for food-oriented health systems, yet this information is seldom used because it is difficult to obtain from online recipes. This study proposes a predictive model named DeepRecipes to extract ingredient amounts from online textual recipes. The model predicts an ingredient's amount from a given recipe's name and its list of ingredients. Because recipe names and ingredient lists can be extracted from almost all online recipes, the proposed model can potentially recover ingredient amounts for massive numbers of online recipes. We first trained the model on a small set of recipes in which every ingredient is annotated with its amount, and then compared it against ten reference models. DeepRecipes outperforms all of the comparison models: its mean absolute error (MAE) and mean absolute percentage error (MAPE) are $3.96\times {10}^{-1}$ and 18.57%, respectively, and its absolute percentage errors (APEs) are below 50% in more than 95% of all predictions. This accuracy is sufficient to provide rough ingredient-amount estimations for food-oriented health systems.
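The reported MAE and MAPE follow their standard definitions; the snippet below is a minimal illustrative sketch of how such scores could be computed (it is not the authors' evaluation code, and the use of NumPy arrays of true and predicted amounts is an assumption).

```python
# Illustrative sketch of the evaluation metrics reported above (not the authors' code).
import numpy as np

def mae(y_true, y_pred):
    """Mean absolute error between true and predicted ingredient amounts."""
    y_true, y_pred = np.asarray(y_true, dtype=float), np.asarray(y_pred, dtype=float)
    return np.mean(np.abs(y_true - y_pred))

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent (assumes no zero true amounts)."""
    y_true, y_pred = np.asarray(y_true, dtype=float), np.asarray(y_pred, dtype=float)
    return np.mean(np.abs((y_true - y_pred) / y_true)) * 100.0
```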

Highlights

  • Recipes are a specific genre of how-to instructions that contain rich food-related information [1]

  • This study focused on ingredient amount prediction, which is a basic food computing task [11]

  • As the recipe names and ingredients are accessible in most online recipes, we use them as DeepRecipes’ inputs to predict the target ingredient amounts


Summary

INTRODUCTION

Recipes are a specific genre of how-to instructions that contain rich food-related information [1]. Yagcioglu et al. [17] proposed RecipeQA, a large-scale recipe-comprehension dataset intended to train algorithms to understand cooking instructions, which in turn helps extract ingredient amounts from the instruction text. Because most online recipes include one or more images of the corresponding meal, many vision-based ingredient amount prediction methods have also been proposed; these studies can be divided into single-image [15], [20], [21], two-image [22], and multiple-image [23] predictions. To exploit the textual information available online, we propose a predictive model called DeepRecipes, which predicts ingredient amounts from a recipe's name and its listed ingredients. We also visualize recipes and ingredients according to their learned representations and show that hidden information exists in these representations.
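For concreteness, the sketch below shows one plausible way the components named in the METHOD outline that follows (word2vec inputs, an ingredient encoder, an attention mechanism, and a regression module) could be combined. It is a hypothetical PyTorch sketch, not the authors' implementation; the layer sizes, tanh encoder, dot-product attention, and averaging of word2vec vectors are all assumptions.

```python
# Hypothetical DeepRecipes-style regressor (illustrative only, not the published model).
import torch
import torch.nn as nn

class DeepRecipesSketch(nn.Module):
    def __init__(self, embed_dim=100, hidden_dim=128):
        super().__init__()
        # Ingredient encoder: maps each ingredient's word2vec embedding to a hidden vector.
        self.ingredient_encoder = nn.Linear(embed_dim, hidden_dim)
        # Recipe-name projection, used here as the attention query.
        self.name_proj = nn.Linear(embed_dim, hidden_dim)
        # Regression module: predicts the amount of the target ingredient.
        self.regressor = nn.Sequential(
            nn.Linear(hidden_dim * 2, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, name_vec, ingredient_vecs, target_vec):
        # name_vec:        (batch, embed_dim)    averaged word2vec vectors of the recipe name
        # ingredient_vecs: (batch, n, embed_dim) embeddings of the listed ingredients
        # target_vec:      (batch, embed_dim)    embedding of the ingredient whose amount we predict
        query = self.name_proj(name_vec).unsqueeze(1)                 # (batch, 1, hidden)
        keys = torch.tanh(self.ingredient_encoder(ingredient_vecs))   # (batch, n, hidden)
        weights = torch.softmax((query * keys).sum(-1), dim=-1)       # attention over ingredients
        context = (weights.unsqueeze(-1) * keys).sum(1)               # (batch, hidden)
        target = torch.tanh(self.ingredient_encoder(target_vec))      # (batch, hidden)
        return self.regressor(torch.cat([context, target], dim=-1)).squeeze(-1)

# Example usage with random embeddings standing in for word2vec vectors:
model = DeepRecipesSketch()
amounts = model(torch.randn(4, 100), torch.randn(4, 12, 100), torch.randn(4, 100))  # shape (4,)
```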

METHOD
WORD2VEC
INGREDIENT ENCODER
ATTENTION MECHANISM
REGRESSION MODULE
EXPERIMENT
COMPARISON OF THE PERFORMANCES OF THE TEN TESTED MODELS
Findings
CONCLUSIONS
