The aim of this paper is to analyze different pitch databases of the Tamil language, using feature fusion and SVM classification on audio signals to identify human emotional states. A major bottleneck of common speech emotion recognition techniques is tonal variation in speech. To address this challenge, this paper analyzes tonal accuracy across three databases. The proposed model applies a feature fusion technique to achieve high accuracy with a minimal dataset across all pitch variations, and SVM-based feature selection is used for classification. A Support Vector Machine is adopted to identify six emotional states: anger, disgust, fear, happiness, sadness, and neutral. Among the Tamil emotional speech databases analyzed, the normal-pitch dataset yields the highest recognition rate. The results show accuracies of 94%, 89%, and 87% on the normal-, low-, and high-pitch databases, respectively. Overall, feature fusion combined with the SVM technique yields 92% accuracy on the combined database.
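The fusion-then-classify pipeline described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes feature fusion means concatenating per-utterance acoustic descriptors (here hypothetical MFCC, pitch, and energy statistics) into one vector, uses scikit-learn's `SVC` for the six-class decision, and substitutes synthetic data for the Tamil pitch databases.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# The six emotional states targeted in the paper.
EMOTIONS = ["anger", "disgust", "fear", "happiness", "sadness", "neutral"]

def fuse_features(mfcc, pitch, energy):
    """Feature fusion by simple concatenation of per-utterance vectors
    (an assumption; the paper does not specify the fusion operator here)."""
    return np.concatenate([mfcc, pitch, energy])

# Synthetic stand-in data: 60 utterances, each described by 13 MFCC means,
# 2 pitch statistics (mean, std), and 1 energy value. Class-dependent means
# make the toy problem learnable; real features would come from the audio.
rng = np.random.default_rng(0)
label_ids = rng.integers(0, len(EMOTIONS), size=60)
X = np.stack([
    fuse_features(
        rng.normal(loc=i, size=13),  # hypothetical MFCC means
        rng.normal(loc=i, size=2),   # hypothetical pitch statistics
        rng.normal(loc=i, size=1),   # hypothetical energy value
    )
    for i in label_ids
])
y = np.array([EMOTIONS[i] for i in label_ids])

# Standardize the fused vector, then classify with an RBF-kernel SVM.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
clf.fit(X_train, y_train)
preds = clf.predict(X_test)
print("held-out accuracy:", clf.score(X_test, y_test))
```

In practice the per-pitch databases (normal, low, high) would each supply their own fused feature matrix, and the combined-database result would come from pooling them before the train/test split.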