The support vector machine (SVM) is one of the state-of-the-art tools for linear and nonlinear pattern classification. A key design issue in SVM classifiers is reducing the number of support vectors without compromising classification accuracy. A technique known as diminishing learning (DL) has already been proposed in the literature for an SVM-based multi-class isolated digit recognition system using the speaker-dependent TI46 database of isolated digits. In this paper, the computational complexity of SVM- and SVM-DL-based isolated digit recognition systems is studied, and the computation time of both classifiers is evaluated through a system-on-programmable-chip (SOPC) implementation of the recognition system on an Altera Cyclone II series FPGA using the Nios II soft-core processor. For the isolated digit recognition problem, SVM-DL reduces the number of support vectors by 38.28–90.25 %, which in turn reduces the classification time by 31.45–91.78 % relative to SVM. Recognition accuracies of 97 and 98 % are achieved for the SVM classifier with and without the DL technique, respectively. The study confirms that the order in which the classes are classified affects recognition accuracy: for the TI46 database, about a 2–4 % increase in recognition accuracy is obtained by choosing the optimum order for the SVM-DL classifier. The proposed SOPC implementation of the SVM-DL-based recognition system can also be employed for various other pattern recognition applications, such as face recognition, character recognition, and target recognition.
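To make the cost argument concrete, the following is a minimal sketch in C (a natural choice for a Nios II soft-core target) of a binary SVM decision function. The struct fields, function names, and the RBF kernel are illustrative assumptions, not taken from the paper; the point is only that the decision loop executes once per support vector, so classification time on the soft-core processor scales roughly linearly with the support-vector count, which is why the 38.28–90.25 % reduction reported for SVM-DL translates into a comparable reduction in classification time.

```c
#include <math.h>
#include <stddef.h>

/* Hypothetical SVM model layout: n_sv support vectors of dimension d,
 * with signed weights (alpha_i * y_i) and a bias term b. */
typedef struct {
    size_t n_sv;         /* number of support vectors              */
    size_t d;            /* feature dimension                      */
    const float *sv;     /* support vectors, row-major, n_sv x d   */
    const float *coef;   /* alpha_i * y_i for each support vector  */
    float bias;          /* decision-function bias b               */
    float gamma;         /* RBF kernel width (assumed kernel)      */
} svm_model;

/* RBF kernel K(u, v) = exp(-gamma * ||u - v||^2). */
static float rbf_kernel(const float *u, const float *v,
                        size_t d, float gamma)
{
    float sq = 0.0f;
    for (size_t k = 0; k < d; ++k) {
        float diff = u[k] - v[k];
        sq += diff * diff;
    }
    return expf(-gamma * sq);
}

/* Decision value f(x) = sum_i coef_i * K(sv_i, x) + b.
 * The loop body runs once per support vector, so per-classification
 * cost is O(n_sv * d): pruning support vectors (as SVM-DL does)
 * cuts classification time almost proportionally. */
float svm_decision(const svm_model *m, const float *x)
{
    float f = m->bias;
    for (size_t i = 0; i < m->n_sv; ++i)
        f += m->coef[i] * rbf_kernel(&m->sv[i * m->d], x,
                                     m->d, m->gamma);
    return f;  /* sign(f) gives the binary class label */
}
```

In a multi-class setting such as ten-digit recognition, one such decision value is evaluated per binary sub-classifier, so the per-vector saving compounds across classes.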