Abstract

Food intake analysis is a crucial step in developing an automated dietary monitoring system, and the processing of eating sounds delivers important cues for food intake monitoring. Recent studies on eating-activity detection generally utilize multimodal data from multiple sensors combined with conventional feature engineering techniques. In this study, we aim to develop a methodology for detecting ingestion sounds, namely swallowing and chewing, from food intake sounds recorded during a meal. Our methodology relies on feature learning in the frequency domain using a convolutional neural network (CNN). Spectrograms extracted from food intake sounds recorded through a laryngeal throat microphone are fed into the CNN architecture. Experimental evaluations are performed on our in-house food intake dataset, which includes 8 subjects and 10 different food types, covering 276 minutes of recordings. The proposed system attains high detection rates for swallow and chew events with high sensitivity and specificity, and shows potential for food intake monitoring under daily life conditions in future studies.
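The front end described above, converting recorded audio segments into spectrograms for a CNN classifier, can be sketched as follows. The sample rate, window length, and overlap here are illustrative assumptions, not values reported in the paper.

```python
import numpy as np
from scipy.signal import spectrogram

# Illustrative parameters (assumptions; not specified in the abstract)
SAMPLE_RATE = 16000   # Hz, a common rate for throat-microphone audio
SEGMENT_SEC = 1.0     # length of each analysis segment in seconds

def audio_to_spectrogram(audio, fs=SAMPLE_RATE):
    """Convert a 1-D audio segment to a log-magnitude spectrogram.

    Returns a 2-D array of shape (frequency bins, time frames),
    suitable as a single-channel input image for a CNN.
    """
    freqs, times, sxx = spectrogram(audio, fs=fs, nperseg=256, noverlap=128)
    # Log compression stabilizes the dynamic range of the magnitudes
    return np.log(sxx + 1e-10)

# Example: a synthetic 1-second segment stands in for a recorded
# chew/swallow sound from the throat microphone
segment = np.random.randn(int(SAMPLE_RATE * SEGMENT_SEC))
spec = audio_to_spectrogram(segment)
print(spec.shape)  # (frequency bins, time frames)
```

Each such spectrogram would then be classified by the CNN as a swallow, chew, or other event; the network architecture itself is not detailed in the abstract.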
