<p><span>Deep multi-task learning is one of the most challenging research topics in the field of facial expression recognition. Most deep learning models rely solely on class-label information and discard the local information in the sample data, which degrades the performance of the recognition system. This paper proposes a multi-feature-based deep convolutional neural network (D-CNN) that identifies the facial expression of the human face. To enhance the accuracy of the recognition system, a multi-feature learning model is employed in this study. The input images are preprocessed and enhanced using three filtering methods, i.e., Gaussian, Wiener, and adaptive mean filtering. The preprocessed image is then segmented using a face detection algorithm. The local binary pattern (LBP) operator is subsequently applied to the detected face to extract the facial points of each facial expression. These features are fed into the D-CNN, which recognizes the facial expression from the facial-point features. The proposed D-CNN is implemented, and its results are compared with those of the existing support vector machine (SVM) approach. The analysis of deep features helps extract local information from the data without incurring higher computational cost.</span></p>
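<p><span>As a concrete illustration of this pipeline, the minimal Python sketch below chains the three preprocessing filters, a face detector, LBP encoding, and a small CNN classifier. The kernel sizes, the Haar-cascade detector, the LBP parameters, and the CNN architecture are assumptions made for the example only; the paper's exact configuration is not specified in this abstract.</span></p>
<pre><code># Illustrative sketch of the described pipeline: filtering -> face detection -> LBP -> CNN.
# Kernel sizes, the Haar-cascade detector, LBP parameters, and the CNN layout are
# assumptions for demonstration, not the paper's exact settings.
import cv2
import numpy as np
from scipy.signal import wiener
from skimage.feature import local_binary_pattern
from tensorflow.keras import layers, models


def preprocess(gray):
    """Enhance a grayscale image with Gaussian, Wiener, and mean filtering."""
    g = cv2.GaussianBlur(gray, (5, 5), 0)           # Gaussian smoothing
    w = wiener(g.astype(np.float64), (5, 5))        # Wiener denoising
    m = cv2.blur(w.astype(np.float32), (3, 3))      # mean filtering (stand-in for adaptive mean)
    return cv2.normalize(m, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)


def detect_face(gray, size=64):
    """Segment the face region with OpenCV's Haar cascade (an assumed detector choice)."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    return cv2.resize(gray[y:y + h, x:x + w], (size, size))


def lbp_map(face, p=8, r=1):
    """Encode facial texture points as a uniform-LBP image scaled to [0, 1]."""
    codes = local_binary_pattern(face, P=p, R=r, method="uniform")
    return (codes / (p + 1)).astype(np.float32)[..., None]   # H x W x 1


def build_dcnn(num_classes=7, size=64):
    """A small CNN over the LBP-coded face; the paper's D-CNN architecture is not given here."""
    model = models.Sequential([
        layers.Input(shape=(size, size, 1)),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model


def recognize(image_bgr, model):
    """Run the full pipeline on one BGR image; returns a class index or None."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    face = detect_face(preprocess(gray))
    if face is None:
        return None
    x = lbp_map(face)[None, ...]                    # batch of one
    return int(np.argmax(model.predict(x), axis=1)[0])
</code></pre>
<p><span>Training such a network and an SVM baseline on the same LBP-derived features would mirror the comparison described above, with the CNN consuming the spatial LBP map rather than a flattened histogram.</span></p>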