Computational approaches hold significant promise for enhancing diagnosis and therapy in child and adolescent clinical practice. Clinical procedures depend heavily on vocal exchanges and interpersonal dynamics conveyed through speech, and research highlights the importance of investigating acoustic features and dyadic interactions during child development. However, observational methods are labor-intensive, time-consuming, and offer limited objectivity and quantification, hindering translation to everyday care. We propose a novel AI-based system for fully automatic acoustic segmentation of clinical sessions with autistic preschool children. We focus on naturalistic, unconstrained clinical contexts, which are characterized by background noise and data scarcity. Our approach addresses key challenges in the field while remaining non-invasive. We carefully evaluated model performance and flexibility under diverse, challenging conditions through domain alignment. Results demonstrated promising performance in voice activity detection and speaker diarization. Notably, a minimal annotation effort (just 30 seconds of target data) significantly improved model performance across all tested conditions. Our models exhibit satisfactory predictive performance and the flexibility needed for deployment in everyday settings. Automating data annotation in real-world clinical scenarios can enable widespread exploitation of advanced computational methods for downstream modeling, fostering precision approaches that bridge research and clinical practice.
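The abstract does not specify the implementation; as a rough illustration of the kind of pipeline involved, the sketch below runs voice activity detection and speaker diarization over a session recording with the open-source pyannote.audio toolkit. The model name, the access token placeholder, and the file `session.wav` are illustrative assumptions, not the paper's own models or adaptation procedure.

```python
# Minimal sketch: segment one clinical session recording into speaker
# turns using a pretrained pyannote.audio diarization pipeline.
# NOTE: this is an assumed, generic pipeline for illustration only;
# the paper's models and 30-second domain-adaptation step are not shown.
from pyannote.audio import Pipeline

# Load a pretrained diarization pipeline (requires a Hugging Face token).
diarization_pipeline = Pipeline.from_pretrained(
    "pyannote/speaker-diarization-3.1",
    use_auth_token="HF_TOKEN",  # hypothetical placeholder token
)

# Run diarization on a (hypothetical) session recording.
diarization = diarization_pipeline("session.wav")

# Each turn is a (start, end) speech segment with a speaker label,
# which downstream modeling could map to child vs. clinician speech.
for turn, _, speaker in diarization.itertracks(yield_label=True):
    print(f"{speaker}: {turn.start:.2f}s - {turn.end:.2f}s")
```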