Abstract
Gradient descent is simple in structure and easy to implement, and the classical method has many well-known advantages, particularly for convex optimization problems. In recent years, researchers have observed that the gradient descent algorithm can also help with underdetermined linear regression optimization. To clarify the specific relationship between gradient descent and underdetermined linear regression optimization, this paper focuses on the case of a loss function with a unique finite root, discusses in detail how natural gradient descent, mirror descent, and plain gradient descent work in this setting, and proves theoretically that gradient descent aids underdetermined linear regression optimization. Designing models that exploit the characteristics of the gradient descent algorithm, and applying its ideas to solve underdetermined linear regression optimization problems in practical applications, remain directions for future research.
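The paper's analysis is theoretical; as a purely illustrative aside, the minimal sketch below (with hypothetical data sizes and step size, not taken from the paper) shows the kind of underdetermined least-squares setting the abstract refers to, where plain gradient descent started from the origin drives the residual to zero and lands near the minimum-norm interpolating solution.

```python
import numpy as np

# Hypothetical underdetermined least-squares problem: fewer equations (m) than
# unknowns (n), so A w = y has infinitely many solutions and the squared loss
# 0.5 * ||A w - y||^2 has non-unique minimizers.
rng = np.random.default_rng(0)
m, n = 5, 20
A = rng.standard_normal((m, n))
y = rng.standard_normal(m)

# Plain gradient descent on the squared loss, started from the origin.
w = np.zeros(n)
lr = 1e-2                          # assumed step size, small enough for this A
for _ in range(20000):
    grad = A.T @ (A @ w - y)       # gradient of 0.5 * ||A w - y||^2
    w -= lr * grad

# Reference point: the minimum-Euclidean-norm solution via the pseudoinverse.
w_min_norm = np.linalg.pinv(A) @ y

print("residual norm:", np.linalg.norm(A @ w - y))
print("distance to min-norm solution:", np.linalg.norm(w - w_min_norm))
```

Both printed quantities should be close to zero, reflecting the implicit bias of gradient descent toward the minimum-norm interpolant when initialized at zero; mirror descent and natural gradient descent, which the paper also analyzes, would generally select different minimizers.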