Truncated loss functions are robust to class noise and outliers. This paper proposes a robust twin bounded support vector machine that truncates the growth of its loss functions at a pre-specified point, flattening the loss beyond that score. Moreover, to enable the proposed method to handle datasets with different imbalance ratios, cost-sensitive learning is implemented by scaling the total error of each class according to its number of samples. However, the truncated loss functions are non-convex, so a global optimum is not always assured. To handle this issue, we adopt the concave-convex procedure (CCCP), which ensures global convergence by decomposing each loss function into the sum of a convex and a concave part. The behaviour of the proposed method under varied imbalance ratios is analysed experimentally and depicted graphically. Classification performance of the proposed method in terms of AUC, F-Measure and G-Mean is compared with related methods, viz. hinge loss support vector machine (SVM), ramp loss SVM (RSVM), twin SVM (TWSVM), twin bounded SVM (TBSVM), pinball loss SVM (pin-SVM), entropy-based fuzzy SVM (EFSVM), non-parallel hyperplane Universum SVM (U-NHSVM), stochastic gradient twin support vector machine (SGTSVM), k-nearest neighbour (KNN)-based maximum margin and minimum volume hyper-sphere machine (KNN-M3VHM) and affinity and class probability-based fuzzy SVM (ACFSVM), on several real-world datasets with imbalance ratios ranging from low to high. Further, to establish the significance of the proposed method in pattern classification, pair-wise statistical comparison of the methods is performed based on their average ranks on AUC. Experimental results confirm the effectiveness of the proposed method.
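
As a minimal sketch of the truncation and the convex–concave split that CCCP exploits (the symbols z for the margin score, s < 1 for the truncation point, and H_t for the shifted hinge loss are illustrative assumptions, not necessarily the paper's notation), the truncated (ramp) loss can be written as a difference of two convex hinge losses:
\[
R_s(z) \;=\; \min\bigl(1-s,\ \max(0,\,1-z)\bigr) \;=\; H_1(z) - H_s(z), \qquad H_t(z) = \max(0,\,t-z),
\]
so that the objective is the sum of a convex term (from \(H_1\)) and a concave term (from \(-H_s\)). At each CCCP iteration the concave term is replaced by its linearization at the current iterate and the resulting convex subproblem is solved, yielding a monotonically non-increasing objective value.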