Typically developing infants produce fidgety movements between 9 and 20 weeks of corrected age. These movements can be identified with the General Movement Assessment, but the assessment must be conducted by trained professionals from video recordings. Since trained professionals are expensive and demand for them may exceed their availability, computer vision-based solutions have been developed to assist practitioners. However, most solutions to date treat the problem as a direct mapping from video to infant status, without modeling fidgety movements throughout the video. To address this, we propose to directly model infants' short movements and classify them as fidgety or non-fidgety. In this way, we model the explanatory factor behind the infant's status and improve model interpretability. The difficulty with our proposal is that labels for an infant's short movements are not available, which precludes training such a model directly. We overcome this issue with active learning, a framework that minimizes the amount of labeled data required to train a model by labeling only those examples that are considered "informative" to the model. The assumption is that a model trained on informative examples reaches a higher performance level than a model trained on randomly selected examples. We validate our framework by modeling the movements of infants' hips on two representative cohorts: typically developing and at-risk infants. Our results show that active learning is well suited to our problem and that it works adequately even when the models are trained with labels provided by a novice annotator.
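To make the active-learning idea concrete, the following is a minimal, self-contained sketch of pool-based uncertainty sampling on a toy one-dimensional problem. It is purely illustrative: the scalar feature, the threshold "model", and the simulated `oracle` annotator are hypothetical stand-ins, not the paper's actual movement features, classifier, or labeling protocol.

```python
import random

def oracle(x):
    """Simulated annotator: true label is 1 if x > 0.5 (unknown to the learner)."""
    return 1 if x > 0.5 else 0

def fit_threshold(labeled):
    """Tiny 'model': midpoint between the largest 0-labeled and smallest 1-labeled x."""
    zeros = [x for x, y in labeled if y == 0]
    ones = [x for x, y in labeled if y == 1]
    if not zeros or not ones:
        return 0.5  # fallback until both classes have been observed
    return (max(zeros) + min(ones)) / 2

def active_learn(pool, budget):
    """Pool-based uncertainty sampling: each round, request a label for the
    point closest to the current decision threshold (the most uncertain one)."""
    labeled = []
    pool = list(pool)
    for _ in range(budget):
        t = fit_threshold(labeled)
        x = min(pool, key=lambda p: abs(p - t))  # most informative example
        pool.remove(x)
        labeled.append((x, oracle(x)))
    return fit_threshold(labeled)

random.seed(0)
pool = [random.random() for _ in range(200)]
t = active_learn(pool, budget=10)
```

With only 10 label requests out of a pool of 200, the queries concentrate near the decision boundary, so the learned threshold lands close to the true value of 0.5; random selection would spend most of its budget on uninformative points far from the boundary.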