The neural mechanisms involved in processing vocalizations and music were compared in order to identify possible similarities in the encoding of their emotional content. Stimuli were positive and negative emotional vocalizations (e.g. laughing, crying) and violin musical pieces digitally derived from them, which shared the melodic profile and main pitch/frequency characteristics of the vocalizations. Participants listened to the vocalizations or music while detecting rare auditory targets (birdsong or piano arpeggios). EEG was recorded from 128 scalp sites. The P2, N400, and late positivity (LP) components of the ERPs were analysed. The P2 peaked earlier in response to vocalizations, while its amplitude was larger to positive than to negative stimuli. The N400 was larger to negative than to positive stimuli. The LP was larger to vocalizations than to music and to positive than to negative stimuli. Source modelling with swLORETA suggested that, among the N400 generators, the left middle temporal gyrus and the right uncus responded to both music and vocalizations, and more strongly to negative than to positive stimuli. The right parahippocampal region of the limbic lobe and the right cingulate cortex were active during music listening, whereas the left superior temporal cortex responded only to human vocalizations. Negative stimuli consistently activated the right middle temporal gyrus, whereas positively valenced stimuli consistently activated the inferior frontal cortex. The processing of emotional vocalizations and music thus appeared to involve common neural mechanisms. Musical notation derived from the acoustic signals showed that emotionally negative stimuli tended to be in a minor key and positive stimuli in a major key, thus shedding some light on the brain's ability to understand music.
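To make the ERP measures concrete, the following is a minimal sketch of how peak and mean-amplitude measures of the kind reported here (P2 peak latency/amplitude, N400 and LP mean amplitude) are commonly computed from a grand-average waveform. The sampling rate, latency windows, and synthetic waveform are illustrative assumptions, not the study's actual parameters or pipeline.

```python
# Illustrative sketch only: the P2 (150-250 ms), N400 (350-450 ms), and LP
# (550-750 ms) windows and the 512 Hz sampling rate are assumed values,
# not taken from the study.
import numpy as np

def p2_peak(erp_uv, times_s, tmin=0.150, tmax=0.250):
    """Return (latency_s, amplitude_uV) of the most positive point in the window."""
    mask = (times_s >= tmin) & (times_s <= tmax)
    idx = np.argmax(erp_uv[mask])
    return times_s[mask][idx], erp_uv[mask][idx]

def mean_amplitude(erp_uv, times_s, tmin, tmax):
    """Mean amplitude (uV) in [tmin, tmax], a common measure for N400 and LP."""
    mask = (times_s >= tmin) & (times_s <= tmax)
    return erp_uv[mask].mean()

# Toy grand-average ERP: 1 s at 512 Hz with a synthetic positive deflection near 200 ms.
fs = 512
times = np.arange(0, 1.0, 1.0 / fs)
erp = 3.0 * np.exp(-((times - 0.2) ** 2) / (2 * 0.03 ** 2))

lat, amp = p2_peak(erp, times)
print(f"P2 latency: {lat * 1000:.0f} ms, amplitude: {amp:.2f} uV")
print(f"N400 mean: {mean_amplitude(erp, times, 0.350, 0.450):.2f} uV")
print(f"LP mean:   {mean_amplitude(erp, times, 0.550, 0.750):.2f} uV")
```

In practice such measures would be computed per participant and condition (e.g. vocalization vs. music, positive vs. negative) before statistical comparison; the sketch shows only the single-waveform measurement step.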