Abstract

Supervised learning is a data-mining task that aims to classify or predict data. One powerful supervised learning algorithm is the Support Vector Machine (SVM), a linear classifier. In prediction tasks, accuracy can be improved by optimizing the parameters of the classification algorithm. This study proposes Fractional Gradient Descent as an unconstrained optimization algorithm for the objective function of the SVM classifier, used as the optimizer during training to improve the accuracy of the prediction model. Fractional Gradient Descent optimizes the SVM classification model using fractional-order values, so that it takes small steps with a small learning rate toward the global minimum and achieves convergence in fewer iterations. At a learning rate of 0.0001, the SVM classifier with fractional gradient descent has an error rate of 0.273083; at a learning rate of 0.001, an error rate of 0.273070; and at a learning rate of 0.01, an error rate of 0.273134. The SVM classifier with stochastic gradient descent optimization reaches its convergence point at iteration 350; with fractional gradient descent optimization, convergence is reached 50 iterations earlier than with stochastic gradient descent.
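The idea described in the abstract can be sketched in code. The following is a minimal illustration, not the paper's exact algorithm: a linear soft-margin SVM trained by subgradient descent on the hinge loss, where the integer-order gradient step is rescaled by a Caputo-style fractional factor. The fractional order `alpha`, the offset `eps`, the regularization constant, and the synthetic data are all assumptions made for the sketch; setting `alpha = 1` recovers ordinary gradient descent.

```python
import numpy as np
from math import gamma

def hinge_grad(w, b, X, y, C=1.0):
    """Subgradient of the soft-margin SVM objective 0.5*||w||^2 + C*hinge."""
    margins = y * (X @ w + b)
    mask = margins < 1                      # points violating the margin
    gw = w - C * (y[mask, None] * X[mask]).sum(axis=0)
    gb = -C * y[mask].sum()
    return gw, gb

def train_frac_svm(X, y, alpha=0.9, lr=0.001, iters=500, eps=1e-8):
    """Train a linear SVM with a fractional-style gradient step (sketch)."""
    w = np.zeros(X.shape[1]); b = 0.0
    w_prev = np.zeros_like(w); b_prev = 0.0
    for _ in range(iters):
        gw, gb = hinge_grad(w, b, X, y)
        # Caputo-style scaling of the step: |w - w_prev|^(1-alpha) / Gamma(2-alpha).
        # This is one common first-order approximation; alpha = 1 gives plain GD.
        scale_w = np.abs(w - w_prev + eps) ** (1 - alpha) / gamma(2 - alpha)
        scale_b = abs(b - b_prev + eps) ** (1 - alpha) / gamma(2 - alpha)
        w_prev, b_prev = w.copy(), b
        w = w - lr * scale_w * gw
        b = b - lr * scale_b * gb
    return w, b

# Toy linearly separable data (assumption, for illustration only)
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.hstack([-np.ones(50), np.ones(50)])
w, b = train_frac_svm(X, y)
acc = np.mean(np.sign(X @ w + b) == y)
```

On this separable toy problem the sketch reaches high training accuracy; the paper's reported error rates and convergence counts come from its own data set and full method, which this illustration does not reproduce.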
