Nonsmooth nonconvex optimization problems are central to engineering practice because many real-world complex systems and models are inherently nonsmooth and nonconvex. The nonsmoothness and nonconvexity of the objective and constraint functions pose significant challenges to both the design and the convergence analysis of optimization algorithms. This paper presents a smooth gradient approximation neural network for such problems, in which a smooth approximation technique with a time-varying control parameter handles nonsmooth, nonregular objective functions. In addition, a hard comparator function ensures that the state solution of the proposed neural network remains within the nonconvex inequality constraint sets. Every accumulation point of the state solution is proved to be a stationary point of the nonconvex optimization problem under consideration. Furthermore, the neural network can find optimal solutions of certain generalized convex optimization problems. Compared with related neural networks, the proposed network requires weaker convergence conditions and has a simpler algorithm structure. Simulation results and an application to condition-number optimization verify the practical applicability of the presented algorithm.
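To make the core idea concrete, the following is a minimal, hypothetical sketch rather than the paper's actual algorithm: it smooths the nonsmooth function |x| by sqrt(x^2 + mu^2), drives the state along the negative smoothed gradient while the control parameter mu(t) decays over time, and uses simple clipping as a crude stand-in for the hard comparator that keeps the state inside the constraint set. All function names and the decay schedule mu(t) = mu0/(1 + t) are illustrative assumptions.

```python
import numpy as np

def smooth_abs(x, mu):
    """Smooth approximation of |x|: sqrt(x^2 + mu^2) -> |x| as mu -> 0."""
    return np.sqrt(x**2 + mu**2)

def grad_smooth_abs(x, mu):
    """Gradient of the smoothed absolute value with respect to x."""
    return x / np.sqrt(x**2 + mu**2)

def gradient_flow(x0, T=20.0, dt=1e-3, mu0=1.0, lo=-2.0, hi=2.0):
    """Euler-discretized gradient flow with a time-varying smoothing
    parameter mu(t); clipping stands in for constraint handling."""
    x, t = float(x0), 0.0
    while t < T:
        mu = mu0 / (1.0 + t)            # smoothing parameter decays over time
        x -= dt * grad_smooth_abs(x, mu)  # follow the smoothed gradient flow
        x = min(max(x, lo), hi)           # keep the state inside [lo, hi]
        t += dt
    return x

x_final = gradient_flow(1.5)
print(x_final, smooth_abs(x_final, 1e-8))  # approaches the minimizer x* = 0 of |x| on [-2, 2]
```

As mu(t) shrinks, the smoothed gradient approaches a subgradient of the original nonsmooth objective, which is why the trajectory of such a flow can accumulate at stationary points of the original problem.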