In recent years, advancements in the interaction and collaboration between humans and Artificial Intelligence (AI) have garnered significant attention. Social intelligence plays a crucial role in facilitating natural interactions and seamless communication between humans and AI. To assess AI's ability to understand human interactions and the components necessary for such comprehension, datasets like Social-IQ have been developed. However, these datasets often rely on a simplistic question-and-answer format and lack justifications for the provided answers. Furthermore, existing methods typically produce direct answers by selecting from predefined choices without generating intermediate outputs, which hampers interpretability and reliability. To address these limitations, we conducted a comprehensive evaluation of AI methods on a video-based Question Answering (QA) benchmark focused on human interactions, leveraging additional annotations related to human responses. Our analysis highlights significant differences between human and AI response patterns and underscores critical shortcomings in current benchmarks. We anticipate that these findings will guide the creation of more advanced datasets and represent an important step toward achieving natural communication between humans and AI.