Image classification is the process of assigning a category/class to an image. It has gained much importance in recent years because of its real-time applications in object tracking, medical imaging, image organization for large datasets, and image and video retrieval. For instance, in image retrieval, once a query image is classified into the correct category, the search for similar images can be restricted to that category instead of the complete dataset. In state-of-the-art approaches, classification techniques are generally discussed for a single dataset of similar images, such as textures (rocks, trees, and other texture-based images), the Describable Textures Dataset (clothing patterns), or the Oxford Buildings Dataset (building patterns). Thus, a common approach for classifying various types of images is lacking. This paper presents a common approach for a variety of datasets containing different types of images. Four datasets of different types, Caltech-101 (101 image categories, e.g., airplane, sunflower, bike), ORL Face, Bangla Signature, and Hindi Signature, are used to test the proposed classification approach. The proposed approach has three phases. In the first phase, a Region of Interest (ROI) is obtained using SURF (Speeded-Up Robust Features) points. In the second phase, LBP (Local Binary Pattern) features are extracted from the ROI. In the third phase, the LBP features are clustered with a newly proposed approach, Clustering with Fixed Centers (CFC), to construct a bag of LBP features. Through the proposed CFC approach, each image is annotated/tagged with a fixed bag of features, avoiding repeated retraining of the classifier. SVM is used here for classification, as it was experimentally found to give the best performance compared with Decision Tree, Random Forest, K-Nearest Neighbor, and linear methods. The accuracies obtained for Caltech-101, ORL Face, Bangla Signature, and Hindi Signature are 79.0%, 75.0%, 81.6%, and 87.0%, respectively.
Thus, the average accuracy obtained by the proposed approach is 81.7%, in contrast to average accuracies of 64.15%, 76.47%, and 77.65% for other state-of-the-art approaches.
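The second and third phases described above can be sketched in code. The following is a minimal illustration only, assuming a basic 8-neighbour LBP and interpreting CFC as nearest-fixed-center assignment of LBP codes into a normalized histogram (the bag of features); the SURF-based ROI step is omitted, and all function names and the example centers are hypothetical, not the paper's implementation.

```python
def lbp_code(img, r, c):
    # 8-neighbour Local Binary Pattern code for pixel (r, c):
    # each neighbour whose value is >= the centre contributes one bit.
    centre = img[r][c]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        if img[r + dr][c + dc] >= centre:
            code |= 1 << bit
    return code

def lbp_features(img):
    # LBP codes for all interior pixels of a grayscale image
    # (given as a list of rows); borders are skipped.
    h, w = len(img), len(img[0])
    return [lbp_code(img, r, c)
            for r in range(1, h - 1) for c in range(1, w - 1)]

def cfc_bag(codes, centers):
    # Clustering with Fixed Centers (CFC), as interpreted here:
    # assign each LBP code to its nearest fixed center and return the
    # normalized histogram, i.e. the fixed-length bag of features.
    hist = [0] * len(centers)
    for code in codes:
        i = min(range(len(centers)), key=lambda k: abs(code - centers[k]))
        hist[i] += 1
    n = len(codes) or 1
    return [count / n for count in hist]

# Usage on a tiny 3x3 grayscale patch with three assumed centers:
patch = [[10, 20, 30],
         [40, 50, 60],
         [70, 80, 90]]
codes = lbp_features(patch)          # one interior pixel -> one code
bag = cfc_bag(codes, [0, 128, 255])  # fixed-length descriptor for SVM
```

Because the centers are fixed rather than re-estimated per dataset, every image maps to a histogram of the same length over the same bins, which is what lets a classifier such as SVM be trained once on these descriptors without re-clustering.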