Abstract
Human activity recognition (HAR) has become a key technology for improving the quality of life of individuals with special needs, such as older adults and children with Down Syndrome (CwDS). This study presents a novel application of HAR systems to detect directive behaviors in parent-child interactions, focusing on parents of CwDS during educational activities. Using video data, we developed two models, a 3D Convolutional Neural Network (CNN3D) and a hybrid CNN-LSTM, to recognize subtle cues such as physical proximity and verbal interactions. The CNN3D model achieved over 90% accuracy in detecting approach behaviors and around 65% for verbal expressions. The CNN-LSTM model outperformed CNN3D in classifying verbal expressions, achieving over 68% accuracy. These results highlight the potential of deep learning classifiers for analyzing subtle parent-child interactions, offering valuable insights into parent-child dynamics and contributing to the development of assistive tools for studying educational settings.