Adapting deep convolutional neural network models to large-scale image classification typically yields architectures with a very large number of learnable parameters, and tuning those parameters considerably increases model complexity. To address this problem, a convolutional Deep-Net model based on the extraction of random patches and depth-wise convolutions is proposed for training and classification of the widely known benchmark breast cancer histopathology images. The patch-level classification results are aggregated by majority voting to decide the final image-level class. It has been observed that the proposed Deep-Net model outperforms classification based on VGG Net (16 layers) learned features in terms of accuracy when applied to breast tumor histopathology images. The objective of this work is to examine and comprehensively analyze the sub-class classification performance of the proposed model across all optical magnification levels.
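
The following is a minimal sketch, not the authors' implementation, of the pipeline outlined above, assuming PyTorch: random patch extraction, a small network built from depth-wise separable convolution blocks, and majority voting over patch predictions to obtain the image-level label. The patch size, channel widths, block count, and the eight-class output (the BreakHis sub-classes) are illustrative assumptions.

```python
# Sketch of a patch-based, depth-wise convolutional classifier with
# majority-vote aggregation. All hyperparameters are assumptions for
# illustration, not values taken from the paper.
import torch
import torch.nn as nn


def extract_random_patches(image, patch_size=64, num_patches=16):
    """Sample random square patches from a (C, H, W) image tensor."""
    _, h, w = image.shape
    patches = []
    for _ in range(num_patches):
        top = torch.randint(0, h - patch_size + 1, (1,)).item()
        left = torch.randint(0, w - patch_size + 1, (1,)).item()
        patches.append(image[:, top:top + patch_size, left:left + patch_size])
    return torch.stack(patches)  # (num_patches, C, patch_size, patch_size)


class DepthwiseSeparableBlock(nn.Module):
    """Depth-wise 3x3 convolution followed by a 1x1 point-wise convolution."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, padding=1, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)
        self.pool = nn.MaxPool2d(2)

    def forward(self, x):
        return self.pool(self.act(self.bn(self.pointwise(self.depthwise(x)))))


class PatchDeepNet(nn.Module):
    """Small patch-level classifier stacked from depth-wise separable blocks."""
    def __init__(self, num_classes=8):  # eight sub-classes assumed (BreakHis)
        super().__init__()
        self.features = nn.Sequential(
            DepthwiseSeparableBlock(3, 32),
            DepthwiseSeparableBlock(32, 64),
            DepthwiseSeparableBlock(64, 128),
        )
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, x):
        x = self.features(x)
        x = x.mean(dim=(2, 3))  # global average pooling
        return self.classifier(x)


def classify_image(model, image):
    """Aggregate patch predictions by majority vote to label the whole image."""
    model.eval()
    with torch.no_grad():
        patches = extract_random_patches(image)
        patch_labels = model(patches).argmax(dim=1)
        return torch.mode(patch_labels).values.item()


# Usage example with a dummy RGB image of BreakHis dimensions (700x460 pixels).
if __name__ == "__main__":
    model = PatchDeepNet()
    dummy_image = torch.rand(3, 460, 700)
    print("Predicted sub-class index:", classify_image(model, dummy_image))
```

The design choice illustrated here is that depth-wise separable blocks factor a standard convolution into a per-channel spatial filter and a 1x1 channel mixer, which sharply reduces the parameter count relative to a full convolutional layer of the same width, while patch-level voting lets a compact network cover a large histopathology image.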