Abstract

The increasing amount of media becoming available in converged digital broadcast and mobile broadband networks will require intelligent interfaces capable of personalizing the selection of content. Aiming to capture the mood of the content, we construct a semantic space based on tags frequently used to describe emotions associated with music in the last.fm social network. Using latent semantic analysis (LSA), we model the affective context of songs based on their lyrics, and apply a similar approach to extract moods from BBC synopsis descriptions of TV episodes using TV-Anytime atmosphere terms. Based on our early results, we propose that LSA could be implemented as a machine learning method to extract emotional context and model affective user preferences.
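
As a minimal sketch of the general technique described above (not the paper's actual implementation or data), a latent semantic space can be built from a TF-IDF weighted term-document matrix via truncated SVD, with emotion tags folded into the same space as short query documents. The toy corpus, mood tags, and variable names below are invented placeholders:

```python
# Minimal LSA sketch with scikit-learn; corpus and tags are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

# Toy documents standing in for song lyrics / synopsis texts.
corpus = [
    "a happy upbeat summer song full of sunshine",
    "slow sad melancholic ballad about loss and rain",
    "aggressive angry fast guitars and shouting",
    "soft mellow relaxing evening tune, calm and warm",
]

# Emotion tags (last.fm style) treated as short query documents.
mood_tags = ["happy", "sad", "aggressive", "mellow"]

# Term-document matrix weighted by TF-IDF.
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(corpus)

# Truncated SVD yields the latent semantic space; k is tiny for a toy corpus.
lsa = TruncatedSVD(n_components=2, random_state=0)
doc_vecs = lsa.fit_transform(X)

# Fold the tags into the same space and compare by cosine similarity.
tag_vecs = lsa.transform(vectorizer.transform(mood_tags))
similarity = cosine_similarity(doc_vecs, tag_vecs)

for text, sims in zip(corpus, similarity):
    best = mood_tags[sims.argmax()]
    print(f"{best:>10}  <-  {text}")
```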

Highlights

  • When both digital broadcast streams and the content itself are adapted to the small screen size of handheld devices, it will literally translate into hundreds of channels featuring rapidly changing mobisodes and location-aware media, where it may no longer be feasible to select programs by scrolling through an electronic program guide.

  • The synopsis triggers a concentration of passive pleasant valence elements related to the words "soft" and "mellow" combined with "happy." In this context the tag "cool" stands out because of its strong association with the word "air" contained in the synopsis, while the activation of the tag "aggressive" appears less explainable. This cluster of pleasant elements is lacking in the latent semantic analysis (LSA) of the program "Super Vets," which instead evokes a strong emotional contrast based on the text "At the Royal Vet College Louis the dog needs emergency surgery after a life threatening bleed in his chest and the vets need to find out what is causing the cat fits," where both pleasant and unpleasant active terms like "happy" and "sad" stand out in combination with strong emotions reflected by the tag "romantic." As can be seen from programs like "The Flying Gardener" and "Super Vets" (Figure 10), the correlation between the synopsis and the chosen tags may often trigger complementary as well as contrasting emotional components.

  • Projecting BBC synopsis descriptions into an LSA space, using both last.fm tags and TV-Anytime atmosphere terms as emotional buoys (Figures 11–13), we have demonstrated an ability to extract patterns reflecting combinations of emotional components; a toy sketch of this projection step follows after this list.
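
As a rough illustration of the projection step mentioned in the last highlight, the following continues the LSA sketch from the abstract section (reusing its `vectorizer`, `lsa`, `tag_vecs`, and `mood_tags`): it folds a new, invented synopsis into the fitted space and ranks the mood tags by cosine similarity. The synopsis text is made up and only loosely echoes the programs discussed above.

```python
# Continuing the earlier sketch: fold an invented synopsis into the fitted
# LSA space and rank the mood tags ("emotional buoys") by cosine similarity.
synopsis = ("The gardener travels the country in search of calm, "
            "soft and mellow planting schemes for a happy summer")

syn_vec = lsa.transform(vectorizer.transform([synopsis]))
scores = cosine_similarity(syn_vec, tag_vecs)[0]

# Highest-scoring tags suggest the dominant emotional components.
for tag, score in sorted(zip(mood_tags, scores), key=lambda p: -p[1]):
    print(f"{tag:>10}: {score:+.2f}")
```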

Introduction

When both digital broadcast streams and the content itself are adapted to the small screen size of handheld devices, it will literally translate into hundreds of channels featuring rapidly changing mobisodes and location-aware media, where it may no longer be feasible to select programs by scrolling through an electronic program guide. In a related paper [2] we have previously analyzed how atmosphere metadata describing emotions, in particular, may facilitate identifying programs that are perceived as similar even though they belong to different genre categories. In music it appears that, despite the often idiosyncratic character of tags defined by hundreds of thousands of users in social networks like last.fm, people tend to agree on the affective terms they attach to describe music [3, 4]. An emerging question is: could we apply machine learning techniques to extract the emotional aspects associated with media, in order to model our perception and facilitate an affective categorization that goes beyond traditional genre divides?
