Abstract
This research proposes an automatic mechanism for refining lecture videos by composing meaningful video clips from multiple cameras. To maximize the captured video information and produce a lecture video suitable for learners, the video content is first analysed using both visual and audio information. Meaningful events are then detected by extracting the lecturer's and learners' behaviours according to in-class teaching and learning principles. An event-driven camera switching strategy, based on a finite state machine, is derived to switch the camera view to a meaningful one. The final lecture video is produced by composing all of the meaningful video clips. Experimental results show that learners felt interested and comfortable while watching the lecture video and agreed that the selected video clips were meaningful.
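To illustrate the event-driven switching idea, the sketch below models camera selection as a finite state machine whose transitions are triggered by detected events. The state names, event names, and transition table here are hypothetical placeholders; the abstract does not specify the paper's actual states or rules.

```python
from typing import Dict, Tuple

# Hypothetical camera views (states) and detected classroom events;
# the paper's actual state set and transitions are not given in the abstract.
TRANSITIONS: Dict[Tuple[str, str], str] = {
    ("LECTURER_VIEW", "slide_changed"): "SLIDE_VIEW",
    ("SLIDE_VIEW", "lecturer_gestures"): "LECTURER_VIEW",
    ("LECTURER_VIEW", "learner_asks_question"): "AUDIENCE_VIEW",
    ("AUDIENCE_VIEW", "lecturer_speaks"): "LECTURER_VIEW",
}

def next_camera(state: str, event: str) -> str:
    """Return the camera view after an event; keep the current view if no rule matches."""
    return TRANSITIONS.get((state, event), state)

# Walk a detected event stream through the FSM and record the sequence
# of camera views, which would then drive clip composition.
events = ["slide_changed", "lecturer_gestures", "learner_asks_question"]
state = "LECTURER_VIEW"
views = [state]
for ev in events:
    state = next_camera(state, ev)
    views.append(state)
print(views)  # ['LECTURER_VIEW', 'SLIDE_VIEW', 'LECTURER_VIEW', 'AUDIENCE_VIEW']
```

One advantage of the FSM formulation is that the default transition (staying on the current camera) avoids distracting view changes when no meaningful event is detected.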