Abstract
This experiment integrates the particle filter concept with a gradient descent optimizer to reduce the loss during iteration, yielding a particle filter-based gradient descent (PF-GD) optimizer that can locate the global minimum with excellent performance. Four test functions are used to verify the PF-GD method. Additionally, the Modified National Institute of Standards and Technology (MNIST) database is used to test the PF-GD method by implementing a logistic regression learning algorithm. The experimental results on the four functions show that the PF-GD method performs much better than the conventional gradient descent optimizer, although it has several parameters that must be set before modeling. The results on the MNIST dataset demonstrate that the cross-entropy of the PF-GD method decreases to a smaller value than that of the conventional gradient descent optimizer, resulting in higher accuracy. The PF-GD method achieves the best training accuracy, 97.00%, and a test-set accuracy of 90.37%, which is higher than the 90.08% obtained with the conventional gradient descent optimizer.
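The abstract does not spell out the algorithm, but the core idea of combining particle filtering with gradient descent can be sketched as follows. This is a hypothetical minimal reading, not the authors' implementation: after each gradient step, particles are scattered around the current iterate, and the iterate jumps to the best-scoring particle only when that particle lowers the loss, which gives the optimizer a chance to escape poor regions while never undoing progress. The function names (`pf_gd`), hyperparameters (`n_particles`, `spread`), and the simple best-particle resampling rule are all illustrative assumptions.

```python
import numpy as np

def pf_gd(loss, grad, x0, lr=0.1, n_particles=20, spread=0.5,
          steps=100, seed=0):
    """Hypothetical sketch of a particle filter-assisted gradient descent.

    Each iteration: (1) take a standard gradient step, then
    (2) scatter Gaussian particles around the iterate and jump to the
    best particle only if it strictly improves the loss.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        # Standard gradient descent step.
        x = x - lr * grad(x)
        # Propose particles around the current iterate (assumed scheme).
        particles = x + rng.normal(0.0, spread, size=(n_particles, x.size))
        best = min(particles, key=loss)
        # Accept the best particle only if it lowers the loss.
        if loss(best) < loss(x):
            x = best
    return x

# Usage on a simple quadratic with its minimum at (1, -2).
f = lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2
g = lambda x: np.array([2.0 * (x[0] - 1.0), 2.0 * (x[1] + 2.0)])
x_min = pf_gd(f, g, [5.0, 5.0])
```

Because the particle jump is only accepted when it improves the loss, this sketch can never perform worse per iteration than plain gradient descent on the same trajectory, which matches the abstract's claim that PF-GD outperforms the conventional optimizer.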