Abstract

Dietary disorders have increased dramatically in recent decades as a result of poor eating habits. Mobile dietary-assessment systems that capture real-time pictures of meals and estimate their nutritional content could be very helpful in changing eating habits and, consequently, promoting a healthier lifestyle. This article presents a novel approach for automatically estimating food properties, such as ingredients and nutritional value, by classifying an input image of food. For reliable food recognition, we employ a variety of deep learning models. Beyond image analysis, ingredients and nutritional features are computed from semantically related phrases extracted from a large corpus of text gathered from the Internet. We evaluated the approach on a dataset of 100 classes with an average of 1,000 images per class and achieved a top-1 classification accuracy of up to 85%. As an extension of the benchmark Food-101 dataset, a sub-continental food dataset has also been created. The results show that the proposed technique is equally effective on the core Food-101 dataset and on its sub-continental extension. The proposed solution is implemented as a mobile app for the healthcare domain.
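To make the reported metric concrete, the following is a minimal sketch of how top-1 classification accuracy is typically computed from classifier scores. This is an illustrative example only, not the authors' implementation; the scores, class count, and labels below are hypothetical.

```python
import numpy as np

def top1_accuracy(scores, labels):
    """Fraction of samples whose highest-scoring class matches the true label.

    scores: (n_samples, n_classes) array of classifier outputs
            (e.g. softmax probabilities over food classes)
    labels: (n_samples,) array of ground-truth class indices
    """
    predictions = np.argmax(scores, axis=1)  # highest-scoring class per sample
    return float(np.mean(predictions == labels))

# Hypothetical example: 4 food images, 3 food classes
scores = np.array([
    [0.7, 0.2, 0.1],  # predicted class 0
    [0.1, 0.8, 0.1],  # predicted class 1
    [0.3, 0.3, 0.4],  # predicted class 2
    [0.6, 0.3, 0.1],  # predicted class 0
])
labels = np.array([0, 1, 1, 0])  # third sample is misclassified

print(top1_accuracy(scores, labels))  # → 0.75
```

A top-1 accuracy of 85% on 100 classes, as reported above, means the single highest-scoring class matched the ground truth for 85% of test images.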
