Abstract

Automatic dietary monitoring (ADM) offers new perspectives for reducing the self-reporting burden on participants in diet coaching programs. This paper presents an approach to predicting the weight of individual bites. We use a pattern recognition procedure to spot chewing cycles and identify food type in continuous data from an ear-pad chewing sound sensor. The recognized information is then used to predict bite weight. We present our recognition procedure and demonstrate its operation on a set of three selected foods with different bite weights. Our evaluation is based on chewing sensor data from eight healthy study participants performing 504 habitual bites in total. The sound-based chewing recognition achieved a recall of 80% at 60-70% precision. Food classification of chewing sequences resulted in an average accuracy of 94%. In total, 50 variables were derived from the chewing microstructure and analyzed for correlations between chewing behavior and bite weight. A subset of four variables was selected to predict bite weight using linear food-specific models. With the sound-based recognition, mean weight prediction error was lowest for apples (19.4%) and highest for lettuce (31%). We conclude that bite weight prediction from acoustic chewing recordings is a feasible approach for solid foods and should be investigated further.
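To illustrate the final modeling step, the sketch below fits a food-specific linear regression that maps chewing-microstructure variables to bite weight. The feature names, synthetic data, and library choices are assumptions for demonstration; the actual variable set, selection procedure, and values come from the paper's recognition pipeline and are not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_percentage_error

# Hypothetical chewing-microstructure features per bite (illustrative only):
# chews per bite, chewing-sequence duration [s], chew rate [Hz], mean acoustic energy.
rng = np.random.default_rng(0)
n_bites = 60
X = np.column_stack([
    rng.integers(10, 40, n_bites),     # number of chews in the bite
    rng.uniform(5.0, 25.0, n_bites),   # chewing-sequence duration [s]
    rng.uniform(1.0, 2.0, n_bites),    # mean chew rate [Hz]
    rng.uniform(0.1, 1.0, n_bites),    # mean chewing-sound energy (a.u.)
])
# Synthetic bite weights [g], loosely tied to the features for demonstration.
y = 0.3 * X[:, 0] + 0.2 * X[:, 1] + rng.normal(0.0, 1.0, n_bites)

# One linear model per food type; here a single food is shown.
model = LinearRegression().fit(X, y)
pred = model.predict(X)
print(f"Mean relative weight prediction error: "
      f"{mean_absolute_percentage_error(y, pred):.1%}")
```

In practice, a separate model would be fitted for each recognized food class (e.g., apple, lettuce), and the error would be reported on held-out bites rather than the training data.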
