Abstract
Personal diet management is key to fighting the obesity epidemic. Recent advances in smartphone and wearable sensor technologies have enabled automated food monitoring through food image processing and eating-episode detection, with the goal of overcoming the drawbacks of traditional food journaling, which is labour-intensive, inaccurate, and suffers from low adherence. In this paper, we present a new interactive mobile system that performs automated food recognition and assessment from user food images and provides dietary intervention while tracking users' dietary and physical activities. Beyond techniques from computer vision and machine learning, one unique feature of this system is real-time energy-balance monitoring through metabolic network simulation. As a proof of concept, we demonstrate the use of this system through an Android application.
Highlights
A healthy diet with balanced nutrition is key to the prevention of overweight and obesity, cardiovascular disease, and other life-threatening metabolic comorbidities such as type 2 diabetes and cancer [1], which warrants personal diet monitoring
To make food journaling easier and more accurate, we proposed to develop a novel automated system that integrates diet recording via interactive food recognition and assessment through smartphone apps, exercise detection via wearable devices, personalized energy-balance monitoring through metabolic network modeling, and just-in-time dietary intervention
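The energy-balance component above is realized in the system through metabolic network simulation. As a loose, simplified illustration of what "energy-balance monitoring" means, the sketch below substitutes the well-known Mifflin-St Jeor resting-energy estimate for the metabolic model; all names here are illustrative stand-ins, not the system's actual API:

```python
# Illustrative energy-balance tracker. NOTE: the system described in the
# paper uses metabolic network simulation; this sketch substitutes the
# simpler Mifflin-St Jeor resting-energy formula purely for illustration.
from dataclasses import dataclass


@dataclass
class UserProfile:
    weight_kg: float
    height_cm: float
    age_years: int
    is_male: bool


def resting_energy_kcal(u: UserProfile) -> float:
    """Mifflin-St Jeor estimate of daily resting energy expenditure."""
    base = 10 * u.weight_kg + 6.25 * u.height_cm - 5 * u.age_years
    return base + (5 if u.is_male else -161)


def daily_energy_balance(u: UserProfile,
                         intake_kcal: float,
                         activity_kcal: float) -> float:
    """Positive value = energy surplus; negative = deficit."""
    return intake_kcal - (resting_energy_kcal(u) + activity_kcal)


user = UserProfile(weight_kg=70, height_cm=175, age_years=30, is_male=True)
balance = daily_energy_balance(user, intake_kcal=2200, activity_kcal=400)
# resting energy ≈ 1648.75 kcal, so balance ≈ 151.25 kcal surplus
```

In the actual system, the intake term would come from the food-image assessment and the activity term from the wearable-device stream, enabling the just-in-time intervention mentioned above.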
We first explored new methodologies in computer vision and machine learning to address key issues in each of the following components: 1) a comprehensive food image database containing diverse and abundant images across a large number of food classes, to avoid food discrepancy when training a food-image classifier [7]; 2) a food segmentation strategy that correctly separates all items in an image from the background, regardless of lighting conditions or whether foods are mixed [8]; 3) a machine learning model trained to classify each segmented item; 4) volume and weight estimation performed on each food item, followed by nutrient analysis [9,10]
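The four components above form a pipeline: segmentation, classification, volume/weight estimation, then nutrient analysis. The sketch below shows how such a pipeline composes, using trivial stand-ins for the segmenter and classifier outputs; every name and the per-food reference table are hypothetical, not the system's actual data or API:

```python
# Hypothetical sketch of the four-stage pipeline: segmentation ->
# classification -> weight estimation -> nutrient analysis. The segmenter
# and classifier are assumed to have already produced labeled regions.
from dataclasses import dataclass


@dataclass
class FoodItem:
    label: str          # class predicted by the (assumed) classifier
    pixel_area: int     # segmented region size, in pixels


# Hypothetical per-food reference data:
# (grams per pixel at a calibrated camera distance, kcal per gram).
NUTRIENT_TABLE = {
    "rice":  (0.05, 1.30),
    "apple": (0.04, 0.52),
}


def estimate_weight_g(item: FoodItem, grams_per_pixel: float) -> float:
    """Scale segmented area to weight via a calibrated density factor."""
    return item.pixel_area * grams_per_pixel


def total_energy_kcal(items: list[FoodItem]) -> float:
    """Nutrient analysis: total energy across all items in one image."""
    total = 0.0
    for it in items:
        grams_per_pixel, kcal_per_g = NUTRIENT_TABLE[it.label]
        total += estimate_weight_g(it, grams_per_pixel) * kcal_per_g
    return total


# Pretend segmentation + classification yielded these two regions:
meal = [FoodItem("rice", pixel_area=2000), FoodItem("apple", pixel_area=1500)]
# rice: 100 g * 1.30 kcal/g = 130; apple: 60 g * 0.52 kcal/g = 31.2
```

A real implementation would replace the pixel-area-to-weight scaling with the 3D volume estimation cited above [9,10], since area alone cannot distinguish a shallow portion from a deep one.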
Summary
Bruno Vieira Resende e Silva, Milad Ghiasi Rad, Juan Cui¹,*, Megan McCabe, and Kaiyue Pan³
¹Department of Computer Science and Engineering, University of Nebraska-Lincoln, USA
²Department of Complex Bio Systems, UNL, Lincoln, USA
³Department of Computer Science, McGill University, Canada