This paper introduces a deep learning methodology for analyzing audience engagement in online video events. The proposed framework consists of six layers and begins with keyframe extraction from the video stream and face detection for each participant. Subsequently, the head pose and emotion of each participant are estimated using the HopeNet and JAA-Net deep architectures, respectively. Complementary to the video analysis, the audio signal is processed by a neural network following the DenseNet-121 architecture, whose purpose is to detect events related to audience engagement, including speech, pauses, and applause. By combining the analysis of the video and audio streams, the interest and attention of each participant are inferred more accurately. An experimental evaluation is performed on a newly generated dataset of recordings from online video events, where the proposed framework achieves promising results. Specifically, the framework achieved F1 scores of 79.21% for pose-based interest estimation, 65.38% for emotion estimation, and 80% for sound event detection. The proposed framework has applications in online educational events, where it can help tutors assess audience engagement and comprehension and highlight points in their lectures that may require further clarification. It is also applicable to video streaming platforms that wish to provide video recommendations to online users based on audience engagement.
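To illustrate the audio branch of the pipeline, the sketch below shows one plausible way to adapt a DenseNet-121 backbone for three-class sound event detection (speech, pause, applause) on mel-spectrogram inputs. This is a minimal example built on torchvision, assuming single-channel spectrogram patches as input; the class name, input shape, and training details are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn
from torchvision.models import densenet121


class AudioEventClassifier(nn.Module):
    """Illustrative DenseNet-121 classifier for speech / pause / applause."""

    def __init__(self, num_classes: int = 3):
        super().__init__()
        # torchvision's DenseNet-121 backbone; the first convolution is
        # replaced so the network accepts single-channel mel spectrograms.
        self.backbone = densenet121(weights=None)
        self.backbone.features.conv0 = nn.Conv2d(
            1, 64, kernel_size=7, stride=2, padding=3, bias=False
        )
        # Replace the ImageNet classification head with a head for the
        # three engagement-related sound event classes.
        self.backbone.classifier = nn.Linear(
            self.backbone.classifier.in_features, num_classes
        )

    def forward(self, spectrogram: torch.Tensor) -> torch.Tensor:
        # spectrogram: (batch, 1, n_mels, time_frames)
        return self.backbone(spectrogram)


if __name__ == "__main__":
    model = AudioEventClassifier()
    dummy = torch.randn(2, 1, 128, 256)  # two hypothetical spectrogram patches
    logits = model(dummy)
    print(logits.shape)  # torch.Size([2, 3])
```

Under these assumptions, the per-frame class probabilities from this audio branch would be fused with the pose- and emotion-based cues from the video branch to infer each participant's engagement.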