Communication is a major barrier between the deaf-mute community and the rest of society. Sign language serves as the means of communication for people who cannot speak or hear, and the automation of sign language recognition has gained researchers' attention in recent years. Many complex and costly hardware systems have been developed for this purpose. We instead propose a deep learning approach to automated sign language recognition, devising a novel two-level, ResNet50-based deep neural network architecture to classify fingerspelled words. We use the standard American Sign Language hand gesture dataset by [1], first augmented using various augmentation techniques. In our two-level approach, the level-1 model classifies the input image into one of four sets; the image is then passed as input to the corresponding level-2 model, which predicts its actual class. Our approach yields an accuracy of 99.03% on 12,048 test images.
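As a rough illustration of the two-level routing described above, the following is a minimal PyTorch sketch (not the paper's actual code): the number of classes per set, the preprocessing pipeline, and all function and variable names here are assumptions made for the example.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

NUM_SETS = 4                      # level 1 routes an image to one of 4 sets
CLASSES_PER_SET = [6, 6, 6, 6]    # hypothetical split of the letter classes

def make_resnet50(num_classes):
    """ResNet50 with its final fully connected layer resized for our classes."""
    net = models.resnet50(weights=None)
    net.fc = torch.nn.Linear(net.fc.in_features, num_classes)
    return net

# Level-1 model: picks one of the 4 sets.
level1 = make_resnet50(NUM_SETS)
# One level-2 model per set: predicts the actual class within that set.
level2 = [make_resnet50(n) for n in CLASSES_PER_SET]

# Standard ImageNet-style preprocessing (an assumption, not from the paper).
preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def predict(image: Image.Image):
    x = preprocess(image).unsqueeze(0)              # shape (1, 3, 224, 224)
    set_id = level1(x).argmax(dim=1).item()         # level 1: choose a set
    cls = level2[set_id](x).argmax(dim=1).item()    # level 2: class in set
    return set_id, cls
```

The design choice this sketch reflects is that each level-2 model only has to discriminate among the handful of gestures in its own set, which is an easier problem than distinguishing all classes at once.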