Abstract

Typically developing infants produce fidgety movements between 9 and 20 weeks of corrected age. These movements can be identified with the General Movement Assessment, but their identification requires trained professionals to conduct the assessment from video recordings. Since trained professionals are expensive and demand for them may exceed their availability, computer vision-based solutions have been developed to assist practitioners. However, most solutions to date treat the problem as a direct mapping from video to infant status, without modeling fidgety movements throughout the video. To address this, we propose to directly model infants' short movements and classify them as fidgety or non-fidgety. In this way, we model the explanatory factor behind the infant's status and improve model interpretability. The difficulty with our proposal is that labels for an infant's short movements are not available, which precludes us from training such a model. We overcome this issue with active learning, a framework that minimizes the amount of labeled data required to train a model by labeling only those examples that are considered "informative" to the model. The assumption is that a model trained on informative examples reaches a higher performance level than a model trained on randomly selected examples. We validate our framework by modeling the movements of infants' hips on two representative cohorts: typically developing and at-risk infants. Our results show that active learning is well suited to our problem and that it works adequately even when the models are trained with labels provided by a novice annotator.
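The active-learning loop the abstract describes (train on a small labeled seed set, query an oracle only for "informative" examples) can be sketched with uncertainty sampling, one common query strategy. The synthetic two-cluster data, logistic-regression model, and query budget below are illustrative assumptions, not the paper's actual pipeline or features:

```python
# Minimal uncertainty-sampling active-learning sketch (illustrative only;
# the paper's movement features, model, and query strategy are not given here).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-in for short-movement features: two Gaussian clusters
# playing the role of "non-fidgety" (0) vs "fidgety" (1) movements.
X = np.vstack([rng.normal(-1.0, 1.0, size=(200, 2)),
               rng.normal(+1.0, 1.0, size=(200, 2))])
y = np.array([0] * 200 + [1] * 200)

# Small labeled seed set; everything else is an unlabeled pool.
labeled = [int(i) for i in rng.choice(len(X), size=10, replace=False)]
unlabeled = [i for i in range(len(X)) if i not in labeled]

model = LogisticRegression()
for _ in range(20):  # 20 query rounds, one label per round
    model.fit(X[labeled], y[labeled])
    proba = model.predict_proba(X[unlabeled])[:, 1]
    # "Informative" = the example the model is least certain about,
    # i.e. predicted probability closest to 0.5.
    query = unlabeled[int(np.argmin(np.abs(proba - 0.5)))]
    labeled.append(query)    # the annotator (oracle) supplies this label
    unlabeled.remove(query)

accuracy = model.fit(X[labeled], y[labeled]).score(X, y)
```

After 20 queries the model has seen only 30 labels yet separates the two clusters well; comparing this curve against one built from randomly selected labels is exactly the validation the abstract's assumption calls for.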
