Sign language recognition has recently drawn the attention of many researchers, as there is no universal sign language and each one comprises many patterns and postures. Numerous methods for extracting features and classifying sign language have been proposed in the literature, most of them based on machine learning techniques. In this article, a deep learning approach is adopted: a Convolutional Neural Network (CNN) model is designed to extract sign language features, with a softmax layer used for classification. All alphabets are considered in both simple and complex backgrounds, with data collected from 100 subjects under different lighting conditions. The effect of various optimization techniques (Adam, SGDM, RMSProp) and activation functions (ReLU and Leaky ReLU) on generalization ability is also examined. The proposed approach attains testing accuracies of 99.10%, 92.69%, and 95.95% on the Indian sign language dataset with simple, complex, and mixed backgrounds, respectively. The model is also tested on NUS dataset-I, NUS dataset-II, and their combination, achieving accuracies of 100%, 95.95%, and 97.22%, respectively.
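For illustration, the activation functions and the softmax classification step mentioned above can be sketched in a few lines of NumPy. This is a minimal sketch of the generic operations only, not the authors' actual CNN implementation; the feature vector is a made-up placeholder for the output of a convolutional layer.

```python
import numpy as np

def leaky_relu(x, alpha=0.01):
    """Leaky ReLU: passes positive values through, scales negatives by alpha."""
    return np.where(x > 0, x, alpha * x)

def softmax(logits):
    """Numerically stable softmax over the last axis (class probabilities)."""
    shifted = logits - np.max(logits, axis=-1, keepdims=True)
    exp = np.exp(shifted)
    return exp / np.sum(exp, axis=-1, keepdims=True)

# Hypothetical feature vector from a convolutional layer (placeholder values)
features = np.array([1.5, -2.0, 0.3])
activated = leaky_relu(features)   # negatives are scaled, not zeroed as in ReLU
probs = softmax(activated)         # probabilities over the sign classes, summing to 1
```

Unlike plain ReLU, Leaky ReLU keeps a small gradient for negative inputs, which is one reason such papers compare the two for generalization.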