There has been wide interest in applying Deep Learning (DL) algorithms for automated binary and multi-class classification of colour fundus images affected by Diabetic Retinopathy (DR). These algorithms have shown high sensitivity and specificity for detecting DR in non-clinical settings. Transfer learning has been successfully tested in many medical imaging applications, such as skin cancer detection, pulmonary nodule detection and Alzheimer's disease diagnosis. This paper experiments with different DL architectures, namely VGG19, InceptionV3, ResNet50, MobileNet and NASNet, for automated DR classification (binary and multi-class) on the Messidor dataset. The dataset is publicly available and comprises 1200 retinal fundus images. The images are graded into four classes based on DR severity: normal (class 0), mild (class 1), moderate (class 2) and severe (class 3). In our experiment, we enhanced the quality of the input images by applying CLAHE (Contrast Limited Adaptive Histogram Equalisation) and power-law transformation as pre-processing techniques; these operate on small image patches to improve local contrast (with contrast limiting) and sharpen the images. Hyperparameter tuning of the pretrained InceptionV3 architecture further improved the accuracy of the model. Both binary and multi-class results were analysed in terms of inter-class (one class versus another) accuracies. We achieved an accuracy of 78% between class 0 and class 1; the accuracy between class 0 and class 2 was 69%, while class 1 and class 2 showed an accuracy of 61%. The inter-class accuracy between class 1 and class 3 was 62%, and between class 2 and class 3 it dropped to 49%. The accuracy between class 0 and class 3 diminished further to 32%. These experiments suggest that the pretrained models performed better at classifying normal versus mild cases, but were less effective for the moderate-severe and normal-severe binary classifications.
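As an illustrative sketch only (not the exact pipeline used in this work), the CLAHE and power-law pre-processing described above could be implemented with OpenCV and NumPy roughly as follows; the clip limit, tile size and gamma values here are assumed placeholders rather than the settings tuned in our experiments.

```python
import cv2
import numpy as np


def preprocess_fundus(image_bgr, clip_limit=2.0, tile_size=8, gamma=1.2):
    """Enhance a colour fundus image with CLAHE and a power-law (gamma) transform.

    clip_limit, tile_size and gamma are illustrative defaults, not the
    values used in the paper.
    """
    # Apply CLAHE to the luminance channel only, so colour information is preserved.
    lab = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=clip_limit,
                            tileGridSize=(tile_size, tile_size))
    l_eq = clahe.apply(l)
    enhanced = cv2.cvtColor(cv2.merge((l_eq, a, b)), cv2.COLOR_LAB2BGR)

    # Power-law (gamma) transformation on normalised intensities: s = r ** gamma.
    normalised = enhanced.astype(np.float32) / 255.0
    transformed = np.power(normalised, gamma)
    return (transformed * 255.0).astype(np.uint8)


# Example usage (file path is hypothetical):
# img = cv2.imread("fundus.png")
# out = preprocess_fundus(img)
```

Applying CLAHE to the luminance channel of a Lab-converted image is one common way to limit contrast amplification per tile without distorting colour, and the subsequent gamma step sharpens intensity differences before the images are fed to the pretrained networks.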