A large portion of the hearing and speech impaired population is deprived of any education, and only around 1-2% of hearing and speech impaired individuals receive schooling in Indian Sign Language. Most of them are unable to use textual languages and face serious difficulties in expressing themselves or in understanding printed text. There is therefore a need to build an automated translation system to help hearing and speech impaired people communicate. Sign language recognition is an evolving research area in computer vision and machine learning. In this paper, recognition of Indian Sign Language alphabets will be carried out using deep learning and machine learning. A large database of images of sign language alphabets will be captured through a web camera. The network will be trained by providing inputs and their expected outputs. Deep learning architectures such as AlexNet and GoogLeNet will be used for training on the images. The models trained with these architectures will then be used to test other Indian Sign Language alphabet images, and a comparative study of the accuracy of the various architectures will be carried out.