Abstract
Accurate detection of individual intake gestures is a key step towards automatic dietary monitoring. Both inertial sensor data of wrist movements and video data depicting the upper body have been used for this purpose. The most advanced methods to date take a two-stage approach, in which (i) frame-level intake probabilities are learned from the sensor data using a deep neural network, and then (ii) sparse intake events are detected by finding the maxima of the frame-level probabilities. In this study, we propose a single-stage approach which directly decodes the probabilities learned from sensor data into sparse intake detections. This is achieved by weakly supervised training using a Connectionist Temporal Classification (CTC) loss, and decoding using a novel extended prefix beam search algorithm. Benefits of this approach include (i) end-to-end training for detections, (ii) simplified timing requirements for intake gesture labels, and (iii) improved detection performance compared to existing approaches. Across two separate datasets, we achieve relative F1 score improvements between 1.9% and 6.2% over the two-stage approach for intake detection and eating/drinking detection tasks, for both video and inertial sensors.
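To make the contrast concrete, the following minimal Python sketch (not the paper's implementation) applies both strategies to pre-computed frame-level probabilities. The array probs, the 0.5 peak threshold, the 16-frame minimum gap, and the use of class index 0 as the CTC blank are illustrative assumptions; for the single-stage variant, simple best-path (greedy) CTC decoding is shown as a stand-in for the paper's extended prefix beam search.

import numpy as np
from scipy.signal import find_peaks

def detect_two_stage(probs, threshold=0.5, min_gap=16):
    # Stage (ii) of the two-stage approach: sparse intake events are the
    # local maxima of the frame-level intake probability (column 1),
    # subject to a minimum height and a minimum distance between peaks.
    peaks, _ = find_peaks(probs[:, 1], height=threshold, distance=min_gap)
    return peaks  # frame indices of detected intake gestures

def decode_single_stage_greedy(probs, blank=0):
    # Single-stage flavour: collapse a CTC-style frame sequence directly
    # into sparse detections via best-path decoding (the paper instead
    # uses an extended prefix beam search over the same probabilities).
    best_path = probs.argmax(axis=1)
    events, prev = [], blank
    for t, label in enumerate(best_path):
        if label != blank and label != prev:
            events.append(t)  # onset of a new non-blank label run
        prev = label
    return events

Either function turns dense per-frame probabilities into a sparse list of event times; the difference is that the CTC-style decoding operates end to end on the trained outputs, while the two-stage variant needs a separately tuned threshold and peak distance.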
Highlights
Accurate information on dietary intake forms the basis of assessing a person’s diet and delivering dietary interventions.
We can see that the single-stage approach generally yields higher performance than the thresholding and two-stage approaches: relative improvements range between 2.0% (0.858→0.875) and 3.5% (0.781→0.808) over two-stage versions of our own architectures, and between 3.3% (0.783→0.808) and 10.4% (0.793→0.875) over our implementations of the state of the art.
Thresholding relies exclusively on a single gyroscope channel, whereas the deep learning models draw on a larger number of learned parameters (a minimal sketch of such a threshold rule follows).
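For illustration, here is a hypothetical single-channel threshold detector in Python, loosely in the spirit of the thresholding baseline; the choice of roll channel, the ±10 deg/s thresholds, the 16-frame minimum gap, and the arming logic are all assumptions rather than the baseline's actual rule.

import numpy as np

def detect_by_threshold(gyro_roll, pos_thresh=10.0, neg_thresh=-10.0,
                        min_gap=16):
    # Hypothetical rule: flag an intake gesture when the roll velocity
    # first exceeds pos_thresh (wrist rotating towards the mouth) and
    # subsequently drops below neg_thresh (rotating back), with at
    # least min_gap frames between consecutive detections.
    events, armed, last_event = [], False, -min_gap
    for t, v in enumerate(np.asarray(gyro_roll)):
        if v > pos_thresh:
            armed = True
        elif armed and v < neg_thresh and t - last_event >= min_gap:
            events.append(t)
            armed, last_event = False, t
    return events

With one channel and two hand-set thresholds, the entire "model" has only a couple of tunable parameters, which is the asymmetry the highlight points to.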
Summary
Accurate information on dietary intake forms the basis of assessing a person’s diet and delivering dietary interventions. To date, such information is typically sourced through memory recall or manual input, for example via dietitians [1] or smartphone apps used to log meals. Recent research has investigated how dietary monitoring can be partially automated using sensor data and machine learning [3]. Detection of individual intake gestures in particular is a key step towards automatic dietary monitoring. Wrist-worn inertial sensors provide an unobtrusive way to detect these gestures.