Abstract

It is well known that many chronic diseases are associated with an unhealthy diet. Although adopting a healthy diet is critical, doing so is difficult even when its benefits are well understood. Technology is needed to assess dietary intake accurately and easily in real-world settings so that effective interventions for overweight, obesity, and related chronic diseases can be developed. In recent years, new wearable imaging and computational technologies have emerged that can perform objective and passive dietary assessment with a much simpler procedure than traditional questionnaires. However, a critical task remains: estimating portion size (in this case, food volume) from a digital image. This task is very challenging because the volumetric information in a two-dimensional image is incomplete, and the estimation requires a degree of inference beyond the capacity of traditional image processing algorithms. In this work, we present a novel Artificial Intelligence (AI) system that mimics the thinking of dietitians, who use a set of common objects as gauges (e.g., a teaspoon, a golf ball, a cup, and so on) to estimate portion size. Specifically, our human-mimetic system “mentally” gauges the volume of food using a set of internal reference volumes that have been learned previously. At its output, the system produces a vector of probabilities of the food with respect to the internal reference volumes. The estimation is then completed by an “intelligent guess”, implemented as an inner product between the probability vector and the reference volume vector. Our experiments on both virtual and real food datasets show accurate volume estimation results.
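For clarity, the following is a minimal sketch of the “intelligent guess” step, assuming the network outputs a probability vector over a small set of learned reference volumes; the specific reference volumes and function names below are illustrative, not the paper’s.

    import numpy as np

    # Illustrative reference volumes (in mL), standing in for the learned
    # internal gauges (e.g., teaspoon, golf ball, cup).
    reference_volumes = np.array([5.0, 40.0, 240.0, 500.0])

    def estimate_volume(class_probabilities: np.ndarray) -> float:
        """Complete the estimate as the inner product between the
        network's probability vector and the reference volume vector,
        i.e., the expected volume under the predicted distribution."""
        return float(np.dot(class_probabilities, reference_volumes))

    # Example: the network assigns most probability to the cup-sized gauge.
    probs = np.array([0.05, 0.15, 0.70, 0.10])
    print(estimate_volume(probs))  # 224.25 (mL)

Under this reading, the inner product is simply the expected value of volume under the network’s predicted class distribution, which allows the estimate to fall between the discrete reference volumes.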

Highlights

  • As of 2016, 39.6% of U.S. adults were obese (BMI ≥ 30) [1]

  • We presented an image-based automatic method for food volume estimation, aimed at solving a long-standing problem in nutrition science: dietary assessment that is subjective and time-consuming

  • We showed that food images with different volumes can be placed into the same class for network training as long as they have similar normalized volumes (see the sketch below)
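As a rough illustration of the last highlight, the sketch below bins foods by normalized volume so that images with different absolute volumes can share a training class; the normalization scheme, scale factor, and bin edges are assumptions made for illustration, not the paper’s actual design.

    import numpy as np

    # Images whose normalized volumes fall into the same bin share a
    # training label, even if their absolute volumes differ.
    # These bin edges are illustrative only.
    bin_edges = np.array([0.0, 0.1, 0.25, 0.5, 1.0])

    def volume_class(volume_ml: float, scale_factor: float) -> int:
        """Quantize a food's normalized volume into a class index.
        `scale_factor` stands in for whatever per-image normalization
        the method actually uses (e.g., a reference object's scale)."""
        normalized = volume_ml / scale_factor
        return int(np.digitize(normalized, bin_edges)) - 1

    # Two foods with different absolute volumes but similar normalized
    # volumes land in the same class:
    print(volume_class(120.0, 1000.0))  # normalized 0.120 -> class 1
    print(volume_class(300.0, 2400.0))  # normalized 0.125 -> class 1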

Introduction

As of 2016, 39.6% of U.S. adults were obese (BMI ≥ 30) [1]. To control obesity and related chronic diseases, there is a pressing need to assess accurately the energy and nutrient intake of individuals in their daily lives. Traditionally, dietary assessment is conducted by self-report, in which individuals report their consumed foods and portion sizes. Although this method has been the standard for decades, numerous studies have indicated that it is inaccurate and biased [2,3]. With the development of smartphones and wearable devices, dietary assessment can now be performed without fully depending on individuals’ memory and willingness to report their own intake. Arab et al. [5] developed an automated image capture method to aid dietary recall using a mobile phone; Sun et al. [6] designed a wearable camera system called eButton for objective and passive dietary assessment; Jobarteh et al. [7] developed an eyeglass attachment containing an accelerometer and a camera to record dietary events automatically; and Liu et al. [8] performed food intake monitoring using a sensor worn on top of the ear.
