Abstract

The problem of minimizing the sum, or composition, of two objective functions arises frequently in optimization. In this article, we study the relations between the discrete-time gradient descent algorithms used to optimize such functions and their corresponding gradient flow dynamics, in particular when one of the functions is time-dependent. The subgradient of the underlying convex function gives rise to differential inclusions governed by a time-varying maximal monotone operator. We describe a discretization algorithm for such systems that is suitable for numerical implementation. Using appropriate tools from convex and functional analysis, we study the convergence of the discretization with respect to the size of the sampling interval. As an application, we study how the discretization algorithm relates to gradient descent algorithms used for constrained optimization.
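
The abstract does not spell out the paper's discretization scheme, so the following is only a rough, hypothetical sketch of the kind of algorithm the last sentence alludes to: a forward-backward (projected gradient) discretization of the time-varying subgradient inclusion x'(t) ∈ -∂f(t, x(t)), where f(t, ·) is a smooth time-dependent cost plus the indicator of a convex constraint set. The constraint set, function names, and step size below are illustrative assumptions, not taken from the paper.

```python
# Hypothetical illustration (not the paper's algorithm): sampled-data discretization of
#   x'(t) in -grad g(t, x(t)) - N_C(x(t)),
# i.e. a time-varying gradient flow restricted to a convex set C, implemented as a
# projected gradient step with sampling interval h.
import numpy as np

def project_C(x, lo=-1.0, hi=1.0):
    """Euclidean projection onto an example box constraint C = [lo, hi]^n (assumption)."""
    return np.clip(x, lo, hi)

def grad_g(t, x):
    """Gradient of an example time-dependent smooth cost g(t, x) = 0.5*||x - c(t)||^2,
    whose minimizer c(t) moves with time (assumption)."""
    target = np.array([np.cos(t), np.sin(t)])
    return x - target

def sampled_projected_gradient(x0, h=0.05, t_end=10.0):
    """Iterates x_{k+1} = P_C(x_k - h * grad g(t_k, x_k)): an explicit step on the
    smooth part and an implicit (proximal) step on the indicator of C."""
    x = np.asarray(x0, dtype=float)
    trajectory = [x.copy()]
    t = 0.0
    while t < t_end:
        x = project_C(x - h * grad_g(t, x))
        trajectory.append(x.copy())
        t += h
    return np.array(trajectory)

if __name__ == "__main__":
    path = sampled_projected_gradient(np.array([2.0, -2.0]), h=0.05)
    print("final iterate:", path[-1])
```

Shrinking the sampling interval h in such a scheme is what a convergence analysis of the kind described in the abstract would be concerned with: as h tends to zero, the piecewise-constant interpolation of the iterates should approach a solution of the underlying differential inclusion.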
