Abstract

Infant cries provide useful clinical insights that help caregivers make appropriate medical decisions, for example in obstetrics. However, robust infant cry detection in real clinical settings (e.g., obstetrics) remains challenging due to the limited training data available for such scenarios. In this paper, we propose a scene adaption framework (SAF) comprising two learning stages that quickly adapt a cry detection model to a new environment. The first stage uses the acoustic principle that mixed sources in an audio signal are approximately additive to imitate the sounds of clinical settings using public datasets. The second stage uses mutual learning to mine the characteristics of infant cries shared between the clinical setting and the public dataset, adapting to the scene in an unsupervised manner. A clinical trial was conducted in an obstetrics department, where cry recordings from 200 infants were collected. With SAF, the four classifiers evaluated for infant cry detection improve by nearly 30% in F1-score, reaching performance similar to that of supervised learning on the target setting. SAF is thus demonstrated to be an effective plug-and-play tool for improving infant cry detection in new clinical settings. Our code is available at https://github.com/contactless-healthcare/Scene-Adaption-for-Infant-Cry-Detection.
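
For implementers, the first stage amounts to additive mixing: a cry clip from a public dataset is summed with clinical background noise at a target signal-to-noise ratio, relying on the principle that concurrent sources are approximately additive in the time domain. The following is a minimal Python/NumPy sketch; the function name mix_at_snr, the SNR range, and the placeholder signals are illustrative assumptions, not the authors' implementation.

    import numpy as np

    def mix_at_snr(cry: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
        """Additively mix a cry clip with background noise at a target SNR (dB)."""
        # Loop or trim the noise so it matches the cry length.
        if len(noise) < len(cry):
            noise = np.tile(noise, int(np.ceil(len(cry) / len(noise))))
        noise = noise[:len(cry)]

        # Scale the noise so the mixture hits the requested SNR.
        cry_power = np.mean(cry ** 2)
        noise_power = np.mean(noise ** 2) + 1e-12  # guard against silent noise
        scale = np.sqrt(cry_power / (noise_power * 10 ** (snr_db / 10)))
        return cry + scale * noise

    # Example: imitate an obstetric ward by mixing a public-dataset cry
    # with hospital ambience at a random SNR between 0 and 15 dB.
    rng = np.random.default_rng(0)
    cry = rng.standard_normal(16000)      # placeholder for a 1 s cry clip at 16 kHz
    ambience = rng.standard_normal(8000)  # placeholder for recorded ward noise
    mixture = mix_at_snr(cry, ambience, snr_db=rng.uniform(0, 15))

The second stage, mutual learning, can likewise be sketched as two classifiers pulling each other toward agreement on unlabeled clinical audio via a symmetric KL-divergence term. The PyTorch step below is again an assumption-laden illustration (model_a, model_b, and the pure-KL objective are hypothetical); in practice such a term would presumably be combined with supervised losses on the labeled public data to prevent collapse.

    import torch
    import torch.nn.functional as F

    def mutual_learning_step(model_a, model_b, clinical_batch, opt_a, opt_b):
        """One unsupervised mutual-learning step on unlabeled clinical audio."""
        logits_a = model_a(clinical_batch)
        logits_b = model_b(clinical_batch)

        # Pull model A toward model B's (detached) predictive distribution...
        loss_a = F.kl_div(F.log_softmax(logits_a, dim=-1),
                          F.softmax(logits_b.detach(), dim=-1),
                          reduction="batchmean")
        # ...and model B toward model A's.
        loss_b = F.kl_div(F.log_softmax(logits_b, dim=-1),
                          F.softmax(logits_a.detach(), dim=-1),
                          reduction="batchmean")

        opt_a.zero_grad(); loss_a.backward(); opt_a.step()
        opt_b.zero_grad(); loss_b.backward(); opt_b.step()
        return loss_a.item(), loss_b.item()

Detaching each peer's logits makes the other's prediction a fixed soft target, so the two updates stay independent and no gradient flows between the models.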
