Abstract

The support vector machine (SVM) is an efficient classification tool that provides good accuracy and reliability. The primal–dual method is an interior-point method for SVM training with considerable scalability and accuracy. In this paper, an improved primal–dual method for SVM learning is proposed. The improved method speeds up convergence of the SVM learning core by up to 25%, achieved by reducing the number of iterations required to reach the optimal solution while maintaining accuracy. We also propose a low-complexity pipelined very-large-scale integration (VLSI) architecture implementing the improved primal–dual method on both field-programmable gate array (FPGA) and 65 nm application-specific integrated circuit (ASIC) platforms. The computational complexity of the proposed VLSI architecture is independent of the size of the training data and of the feature vector.
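As background for the SVM training problem the abstract refers to, the sketch below trains a minimal linear SVM on toy 2-D data. It uses plain hinge-loss subgradient descent, not the paper's primal–dual interior-point method, and all names and hyperparameter values (`lam`, `lr`, the toy dataset) are illustrative assumptions.

```python
# Illustrative sketch only: a minimal linear SVM trained with hinge-loss
# subgradient descent. This is NOT the paper's primal-dual interior-point
# method; it just shows the optimization problem an SVM trainer solves.
# Toy 2-D data: class +1 clustered near (2, 2), class -1 near (-2, -2).
X = [(2.0, 2.0), (2.5, 1.5), (1.5, 2.5),
     (-2.0, -2.0), (-2.5, -1.5), (-1.5, -2.5)]
y = [1, 1, 1, -1, -1, -1]

w = [0.0, 0.0]   # weight vector of the separating hyperplane
b = 0.0          # bias term
lam = 0.01       # regularization strength (assumed value)
lr = 0.1         # learning-rate (assumed value)

for epoch in range(200):
    for (x1, x2), yi in zip(X, y):
        margin = yi * (w[0] * x1 + w[1] * x2 + b)
        if margin < 1:
            # Point violates the margin: hinge loss is active,
            # so step toward classifying it correctly.
            w[0] += lr * (yi * x1 - lam * w[0])
            w[1] += lr * (yi * x2 - lam * w[1])
            b += lr * yi
        else:
            # Margin satisfied: only the regularizer contributes.
            w[0] -= lr * lam * w[0]
            w[1] -= lr * lam * w[1]

def predict(x1, x2):
    """Sign of the decision function w.x + b."""
    return 1 if w[0] * x1 + w[1] * x2 + b >= 0 else -1

print([predict(x1, x2) for x1, x2 in X])  # should match the labels y
```

The paper's interior-point approach instead solves the same optimization problem by iterating on the primal and dual variables jointly; its advantage, per the abstract, is fewer iterations to reach the optimum.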
