Abstract

This paper considers how we feel about the content we see or hear. As opposed to cognitive content information, which is composed of facts about the genre, temporal content structures, and spatiotemporal content elements, we are interested in obtaining information about the feelings, emotions, and moods evoked by a speech, audio, or video clip. We refer to the latter as the affective content, and to terms such as happy or exciting as the affective labels of an audiovisual signal. In the first part of the paper, we explore the possibilities for representing and modeling the affective content of an audiovisual signal to effectively bridge the affective gap. Without losing generality, we refer to this signal simply as video, which we see as an image sequence with an accompanying soundtrack. We then show the high potential of affective video content analysis for enhancing the content recommendation functionalities of future PVRs and VOD systems. We conclude the paper by outlining some interesting research challenges in the field.
