Human emotions play a vital role in overall well-being. With the advent of advanced technologies, growing interest has been observed in developing multimodal emotion classification systems that can accurately interpret human emotions. This article presents a comprehensive overview of a multimodal emotion classification system (Emo Fu-Sense) designed to capture the rich and nuanced nature of human emotions. The objective of Emo Fu-Sense is to integrate information from the Electrocardiogram (ECG), Galvanic Skin Response (GSR), Electroencephalogram (EEG), respiration amplitude, and body temperature to achieve a holistic understanding of emotional states. To extract information effectively from multimodal data, the designed system combines conventional methods with sophisticated machine learning algorithms, including Long Short-Term Memory (LSTM), a variant of the Recurrent Neural Network (RNN). The proposed solution extracts column-wise features independently from the different modalities using a windowing operation. Finally, feature fusion and modality biasing are used to combine the information from the different modalities. The system was evaluated on the MAHNOB-HCI dataset, which encompasses a wide range of emotional expressions across various contexts and individuals. The proposed method not only highlights the limitations of unimodal systems but also achieves a classification accuracy of 92.62%, an average F1-score of 93%, and a Mean Absolute Error (MAE) of 9.2%, surpassing existing state-of-the-art approaches. The integration of multiple modalities and advanced machine learning techniques enables a more comprehensive understanding of emotional states and highlights the significance of research and development in the field of affective computing.
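To make the pipeline summarized above concrete, the following is a minimal sketch in PyTorch of windowed per-modality LSTM encoders followed by weighted late fusion. The class names, signal dimensions, window length, and the learnable per-modality weights (standing in for "modality biasing") are illustrative assumptions, not the authors' exact architecture.

```python
# Minimal sketch of a late-fusion multimodal LSTM classifier in the spirit of
# Emo Fu-Sense. All names and dimensions below are illustrative assumptions.
import torch
import torch.nn as nn

class MultimodalLSTMFusion(nn.Module):
    def __init__(self, modality_dims, hidden_size=64, num_classes=4):
        super().__init__()
        # One LSTM encoder per physiological modality (e.g. ECG, GSR, EEG,
        # respiration amplitude, body temperature).
        self.encoders = nn.ModuleDict({
            name: nn.LSTM(dim, hidden_size, batch_first=True)
            for name, dim in modality_dims.items()
        })
        # Learnable per-modality weights play the role of modality biasing:
        # the softmax lets more informative modalities dominate the fusion.
        self.modality_logits = nn.Parameter(torch.zeros(len(modality_dims)))
        self.classifier = nn.Linear(hidden_size, num_classes)

    def forward(self, windows):
        # windows: dict mapping modality name -> (batch, window_len, dim),
        # i.e. one fixed-length window per modality, features per column.
        feats = []
        for name, enc in self.encoders.items():
            _, (h, _) = enc(windows[name])  # final hidden state per modality
            feats.append(h[-1])             # (batch, hidden_size)
        weights = torch.softmax(self.modality_logits, dim=0)
        fused = sum(w * f for w, f in zip(weights, feats))  # feature fusion
        return self.classifier(fused)

# Example: windows of five signals with notional channel counts.
dims = {"ecg": 1, "gsr": 1, "eeg": 32, "resp": 1, "temp": 1}
model = MultimodalLSTMFusion(dims)
batch = {name: torch.randn(8, 256, d) for name, d in dims.items()}
logits = model(batch)  # (8, num_classes)
```

A softmax-weighted sum is only one way to bias the fusion toward stronger modalities; concatenation followed by a dense layer is a common alternative under the same windowed, per-modality encoding scheme.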