Understanding others' intentions through nonverbal cues such as facial expressions is crucial in human communication. This paper describes in detail how a Convolutional Neural Network (CNN) model is designed and trained using tf.keras. The aim is to sort facial photographs into one of seven emotion classes; the model is built so that it learns the hidden non-linearities in the input facial images, which is vital for discriminating which emotion a person is expressing. The proposed model is based on Yann LeCun's LeNet-5 architecture: it uses subsampling, feature maps, and the ReLU activation function between the convolutional and fully connected layers, with a softmax activation function at the output. The FER-2013 dataset, which consists of 35,887 structured 48x48-pixel grayscale images, was used to train the CNN model: 28,709 images for training, 3,589 for testing, and 3,589 for validation. The dataset is organized into two folders, train and test, each subdivided into separate folders, one per emotion class. Dropout and batch normalization are employed to mitigate overfitting. Since this is a multiclass classification problem, we use the softmax activation function at the output and the Rectified Linear Unit (ReLU) for the non-linear operations. The model is trained with the categorical cross-entropy loss and evaluated with the accuracy metric by examining the training-epoch history, using the Adam (Adaptive Moment Estimation) optimizer with a learning rate of 0.0001. The LeNet-5 model achieved an accuracy of 95.49% on training and 49.47% on testing [13].
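A minimal sketch of the LeNet-5-style architecture described above, written in tf.keras. The paper does not list exact filter counts or dense-layer widths, so the values below (6 and 16 filters, 120- and 84-unit dense layers) are borrowed from the original LeNet-5 and are illustrative assumptions, as are the dropout rates; only the 48x48x1 input, the ReLU/softmax activations, the pooling (subsampling), and the seven output classes come from the text.

```python
import tensorflow as tf
from tensorflow.keras import layers, models


def build_lenet5_fer(num_classes=7, input_shape=(48, 48, 1)):
    """LeNet-5-inspired CNN for FER-2013 (layer widths are assumptions)."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        # Feature maps + ReLU non-linearity, then subsampling (pooling)
        layers.Conv2D(6, (5, 5), padding="same", activation="relu"),
        layers.BatchNormalization(),          # helps mitigate overfitting
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(16, (5, 5), activation="relu"),
        layers.BatchNormalization(),
        layers.MaxPooling2D((2, 2)),
        # Fully connected head with dropout for regularization
        layers.Flatten(),
        layers.Dense(120, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(84, activation="relu"),
        layers.Dropout(0.5),
        # Softmax output over the seven emotion classes
        layers.Dense(num_classes, activation="softmax"),
    ])
    return model


model = build_lenet5_fer()
```

The softmax head sizes itself to `num_classes=7`, matching the seven FER-2013 emotion categories.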
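The compilation and training setup stated in the text (Adam with learning rate 0.0001, categorical cross-entropy loss, accuracy metric, training-epoch history) can be sketched as follows. The random tensors here are a hypothetical stand-in for FER-2013 batches, and the tiny one-layer model is a placeholder so the snippet runs on its own; in the paper's pipeline the LeNet-5-style model and the real images would be used instead.

```python
import numpy as np
import tensorflow as tf

# Hypothetical stand-in data shaped like FER-2013: 48x48 grayscale,
# one-hot labels over the seven emotion classes.
x = np.random.rand(32, 48, 48, 1).astype("float32")
y = tf.keras.utils.to_categorical(
    np.random.randint(0, 7, size=32), num_classes=7
)

# Placeholder model (the actual paper uses the LeNet-5-style CNN).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(48, 48, 1)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(7, activation="softmax"),
])

# Adam with the paper's learning rate, categorical cross-entropy loss,
# and accuracy as the monitored metric.
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.0001),
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)

# fit() returns a History object whose per-epoch loss/accuracy records
# are what the paper examines to assess the model.
history = model.fit(x, y, epochs=1, verbose=0)
```

The `history.history` dictionary holds the per-epoch `loss` and `accuracy` values used to judge training progress.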