Abstract

A smooth approximation p(x, α) to the plus function max{x, 0} is obtained by integrating the sigmoid function 1/(1 + e^{−αx}), commonly used in neural networks. By means of this approximation, linear and convex inequalities are converted into smooth, convex unconstrained minimization problems, whose solution approximates the solution of the original problem to a high degree of accuracy for α sufficiently large. In the special case when a Slater constraint qualification is satisfied, an exact solution can be obtained for finite α. Speedup over MINOS 5.4 was as high as 1142 times for linear inequalities of size 2000 × 1000, and 580 times for convex inequalities with 400 variables. Linear complementarity problems (LCPs) are converted into a system of smooth nonlinear equations and solved by a quadratically convergent Newton method. For monotone LCPs with as many as 10 000 variables, the proposed approach was as much as 63 times faster than Lemke's method.
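Integrating the sigmoid 1/(1 + e^{−αx}) gives the closed form p(x, α) = x + (1/α) log(1 + e^{−αx}), which lies above max{x, 0} by at most (log 2)/α. The following is a minimal NumPy sketch of this smoothing; the function name is illustrative, not from the paper:

```python
import numpy as np

def smooth_plus(x, alpha):
    """p(x, alpha) = x + (1/alpha) * log(1 + exp(-alpha*x)),
    the integral of the sigmoid 1/(1 + exp(-alpha*x)).
    np.logaddexp(0, t) computes log(1 + exp(t)) without overflow."""
    return x + np.logaddexp(0.0, -alpha * x) / alpha

x = np.linspace(-2.0, 2.0, 401)
for alpha in (1.0, 10.0, 100.0):
    gap = np.max(smooth_plus(x, alpha) - np.maximum(x, 0.0))
    print(f"alpha = {alpha:6.1f}, max gap = {gap:.6f}")  # gap <= log(2)/alpha
```

The printed gap shrinks like 1/α, which is why the unconstrained smooth problems approximate the original inequalities to high accuracy for large α.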
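The abstract does not spell out the smooth system of equations for the LCP 0 ≤ z ⟂ Mz + q ≥ 0. One standard reformulation consistent with it rewrites the LCP as the fixed-point equation z = (z − Mz − q)_+ and smooths the plus function, giving F(z) = z − p(z − Mz − q, α) = 0, which Newton's method can solve. The sketch below makes exactly that assumption and is an illustration, not the paper's implementation:

```python
import numpy as np

def smooth_plus(t, alpha):
    # p(t, alpha) = t + (1/alpha) * log(1 + exp(-alpha*t)), stable form
    return t + np.logaddexp(0.0, -alpha * t) / alpha

def solve_lcp_newton(M, q, alpha=1e3, tol=1e-10, max_iter=50):
    """Newton's method on F(z) = z - p(z - Mz - q, alpha) = 0,
    a smoothed form of the LCP  0 <= z  perp  Mz + q >= 0."""
    n = q.size
    z = np.zeros(n)
    I = np.eye(n)
    for _ in range(max_iter):
        r = z - M @ z - q
        F = z - smooth_plus(r, alpha)
        if np.linalg.norm(F) < tol:
            break
        # dp/dt is the sigmoid 1/(1 + exp(-alpha*t)); tanh form avoids overflow
        s = 0.5 * (1.0 + np.tanh(0.5 * alpha * r))
        J = I - s[:, None] * (I - M)   # Jacobian of F at z
        z = z - np.linalg.solve(J, F)
    return z

# Monotone example: M positive definite, known solution z = (1, 2)
M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-4.0, -5.0])
z = solve_lcp_newton(M, q)
print(z, z * (M @ z + q))   # z ~ [1, 2]; complementarity products ~ 0
```

Because F is smooth, each iteration is a single linear solve, which is what makes the quadratically convergent Newton approach competitive with pivoting methods such as Lemke's on large monotone problems.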
