Abstract

Any artificial neural network trained on a given set of observations represents a real- or vector-valued function of several variables. In general, however, this function is unknown except in very simple cases, and beyond the general guarantees derived from the learning data we can say little about its properties. In applications such as medicine, it is not enough to report the error attained on the training data; we need to know that the error does not exceed some ɛ for all data on which the system will be used. It is well known that a trained neural network defines a function, yet we verify its correctness only on a finite number of observed data. We develop an optimal control approach that finds an approximation of the unknown function realizing the given observable data, parametrized by a set of controls and defined by ordinary differential equations. Moreover, to measure the discrepancy of the network's output, we define a functional that incorporates a probability distribution function estimating the distribution of the data. We use dual dynamic programming ideas to formulate a new optimization problem, and we apply it to derive and prove sufficient approximate optimality conditions for an approximate neural network that should work correctly, for a given ɛ with respect to the constructed functional, on data different from the set of observations.
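The two ingredients of the abstract — an approximator defined by an ODE with a set of controls, and a discrepancy functional weighted by an estimated data density — can be illustrated with a minimal sketch. This is not the authors' construction: the right-hand side `tanh(u * x)`, the piecewise-constant control parametrization `theta`, and the density `rho` are all illustrative assumptions.

```python
# Hedged sketch, not the paper's method: a scalar function approximator
# given by integrating an ODE whose right-hand side depends on controls,
# scored by a discrepancy functional weighted by an estimated density rho.
import numpy as np

T = 1.0      # time horizon of the ODE (assumed)
STEPS = 50   # Euler discretization steps (assumed)

def forward(x0, theta):
    """Euler-integrate dx/dt = tanh(u(t) * x) with piecewise-constant
    controls theta; the terminal state is the network's output."""
    x, dt = float(x0), T / STEPS
    for i in range(STEPS):
        u = theta[i * len(theta) // STEPS]  # control active at step i
        x += dt * np.tanh(u * x)
    return x

def loss(theta, xs, ys, rho):
    """Discrepancy functional: squared errors on the observations,
    weighted by an estimated probability density rho of the data."""
    return sum(rho(x) * (forward(x, theta) - y) ** 2
               for x, y in zip(xs, ys))
```

Minimizing `loss` over the controls `theta` (by any optimizer) then plays the role of training; the density weight `rho` is what lets the functional speak about data beyond the observed set.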
