Abstract

In relaxation subgradient minimization methods, the descent direction, built from the subgradients obtained during the iterations, forms an obtuse angle with all subgradients in a neighborhood of the current minimum. Minimization along this direction moves the iterate out of this neighborhood and prevents the method from looping. To find the descent direction, we formulate the problem as a system of inequalities and propose an algorithm with space extension, close to the iterative least-squares method, for solving it. The convergence rate of the method is proportional to the admissible value of the space-extension parameter, which is bounded by the characteristics of the subgradient sets. A theoretical analysis of the learning algorithm with space extension allows us to identify the components of the algorithm and modify them so that larger values of the extension parameter can be used whenever possible. On this basis, we propose and substantiate a new learning method with space extension and a corresponding subgradient method for nonsmooth minimization. A computational experiment confirms their efficiency. Our approach can be used to develop new space-extension algorithms for relaxation subgradient minimization.
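To make the construction concrete, below is a minimal sketch of the idea described in the abstract: the descent-direction subproblem is posed as a system of inequalities <s, g> >= 1 over the collected subgradients g, and solved by an iterative, least-squares-like correction combined with a Shor-style rank-one space extension. This is an illustration under stated assumptions, not the authors' exact algorithm; the function name, tolerance, and the particular dilation formula are hypothetical choices.

```python
import numpy as np

def find_descent_direction(subgrads, alpha=2.0, tol=1e-12, max_iter=100):
    """Illustrative sketch: solve the system <s, g> >= 1 for all collected
    subgradients g by a least-squares-like update in a metric H that is
    dilated along each violated subgradient. `alpha` plays the role of the
    space-extension parameter (assumed alpha > 1)."""
    n = subgrads[0].shape[0]
    H = np.eye(n)   # space-extension (variable-metric) operator
    s = np.zeros(n) # candidate solution of the inequality system
    for _ in range(max_iter):
        # pick any inequality <s, g> >= 1 that is still violated
        g = next((g for g in subgrads if s @ g < 1.0), None)
        if g is None:
            return s  # all inequalities hold: -s is a descent direction
        Hg = H @ g
        denom = g @ Hg
        if denom <= tol:
            break  # degenerate metric along g; give up on this sketch
        # correction that makes the violated inequality hold with equality
        s = s + (1.0 - s @ g) / denom * Hg
        # rank-one space extension along g with parameter alpha
        H = H - (1.0 - 1.0 / alpha**2) * np.outer(Hg, Hg) / denom
    return s

# Hypothetical usage: d = -s forms an obtuse angle with every subgradient
g1, g2 = np.array([1.0, 0.0]), np.array([0.6, 0.8])
s = find_descent_direction([g1, g2])
d = -s  # <d, g1> < 0 and <d, g2> < 0; a line search along d follows
```

In this reading, a solution s of the whole system yields d = -s with <d, g> < 0 for every collected subgradient, i.e., an obtuse angle with all of them, and a line search along d carries the iterate beyond the current neighborhood of the minimum, as the abstract describes.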
