Abstract

The human face is a prominent part of the human body, and a person's emotional state can often be inferred from visual cues such as facial expressions. A camera can capture the face and extract the required features, which can be analyzed from different perspectives and used to roughly estimate the person's emotional state. That estimate can then be matched against sound patterns associated with each mood. Manually searching playlists for songs that fit a mood is difficult and time-consuming; the proposed system eliminates this tedious and repetitive task of manually sorting recordings into separate playlists by automatically generating the right playlist for the user's current emotion. Because the human face plays such an important role in detecting and expressing emotion, the system generates playlists directly from the emotions and expressions read from the face. This paper surveys the strategies available for recognizing human emotions in order to develop an emotion-based rhythmic sound patterns player, and reviews the methodologies used by existing players to distinguish emotions. It additionally gives a brief account of our system's working, its emotion classification, and its song playlist generation. The methodology proposed in this paper requires little computation and achieves an accuracy between 90% and 95%.
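The pipeline the abstract describes (extract facial features, classify the emotion, select a matching playlist) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the emotion labels, the playlist names, and the `classify_emotion` stub (a placeholder threshold rule standing in for a trained facial-expression classifier) are all assumptions.

```python
# Hypothetical mapping from a detected emotion label to a playlist.
EMOTION_PLAYLISTS = {
    "happy": ["Upbeat Pop Mix", "Feel-Good Classics"],
    "sad": ["Mellow Acoustic", "Rainy Day Ballads"],
    "neutral": ["Everyday Chill", "Background Focus"],
}

def classify_emotion(face_features):
    """Stub for the face-based emotion classifier. A real system would
    run a trained model on features extracted from camera frames; here
    a simple threshold on the mean feature value stands in for it."""
    score = sum(face_features) / len(face_features)
    if score > 0.6:
        return "happy"
    if score < 0.3:
        return "sad"
    return "neutral"

def generate_playlist(face_features):
    """Detect the emotion and return it with its associated playlist,
    replacing the manual search through recordings described above."""
    emotion = classify_emotion(face_features)
    return emotion, EMOTION_PLAYLISTS[emotion]

emotion, playlist = generate_playlist([0.8, 0.7, 0.9])
print(emotion, playlist)  # → happy ['Upbeat Pop Mix', 'Feel-Good Classics']
```

In a complete system the stub would be replaced by the feature-extraction and classification stages the paper evaluates; only the playlist-selection step is made concrete here.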
