Theory and algorithms are presented for the following smoothing problem. We are given $n$ measurements of a real-valued function that have been corrupted by random errors introduced by the measuring process. For a given integer $k$, efficient algorithms are developed that approximate the data by minimizing a sum of strictly convex functions of the errors, subject to the condition that the approximated values consist of at most $k$ monotonic sections. If $k = 1$, then the problem can be solved by a special strictly convex programming calculation. If $k > 1$, then there are $O(n^k)$ possible choices of the monotonic sections, so testing each one separately is prohibitively expensive. A characterization theorem is derived that allows dynamic programming to be used to divide the data into optimal disjoint sections of adjacent data, where each section requires a single monotonic calculation. Remarkably, the theorem reduces the work for a global minimum to $O(n)$ monotonic calculations on subranges of the data and $O(ks^2)$ computer operations, where $s - 2$ is the number of sign changes in the sequence of first divided differences of the data. Further, certain monotonicity properties of the extrema of best approximations with $k$ and $k - 1$, and with $k$ and $k - 2$, monotonic sections make the calculation quite efficient. A Fortran 77 program has been written, and numerical results illustrate the performance of the smoothing technique on a variety of data sets.
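To make the approach concrete, the following is a minimal Python sketch of the general technique, not the Fortran 77 program described above. It assumes least squares as the strictly convex distance function, solves each monotonic subproblem with the pool-adjacent-violators algorithm, and uses a straightforward dynamic program over all possible section breakpoints; it therefore costs roughly $O(kn^2)$ dynamic-programming steps with $O(n^3)$ precomputation, rather than attaining the $O(n)$ monotonic calculations and $O(ks^2)$ operations of the paper's characterization theorem. All function names are illustrative.

```python
def pava_error(x):
    """Least-squares error of the best nondecreasing fit to x,
    computed by the pool-adjacent-violators algorithm (PAVA)."""
    means, weights, errs = [], [], []  # one entry per merged block
    for v in x:
        m, w, e = float(v), 1.0, 0.0
        # Merge blocks while the nondecreasing order is violated.
        while means and means[-1] > m:
            m0, w0, e0 = means.pop(), weights.pop(), errs.pop()
            merged = (w0 * m0 + w * m) / (w0 + w)
            # Within-block squared error grows by the shift of each
            # old block mean to the merged mean.
            e = e0 + e + w0 * (m0 - merged) ** 2 + w * (m - merged) ** 2
            m, w = merged, w0 + w
        means.append(m); weights.append(w); errs.append(e)
    return sum(errs)


def best_piecewise_monotonic(x, k):
    """Minimal least-squares error of an approximation to x with at
    most k monotonic sections, by dynamic programming over breakpoints.
    Sections alternate in direction, the first nondecreasing; a
    one-point section is trivially monotonic, so this loses no
    generality for data that should start with a decrease."""
    n = len(x)
    INF = float("inf")
    # Cost of a monotonic fit on x[i..j], increasing or decreasing
    # (O(n^3) precomputation -- a sketch, not the paper's O(n) scheme).
    up = [[pava_error(x[i:j + 1]) if j >= i else INF
           for j in range(n)] for i in range(n)]
    down = [[pava_error(x[i:j + 1][::-1]) if j >= i else INF
             for j in range(n)] for i in range(n)]
    # E[m][j]: best error for x[0..j] using exactly m sections.
    E = [[INF] * n for _ in range(k + 1)]
    for m in range(1, k + 1):
        cost = up if m % 2 == 1 else down
        for j in range(n):
            for t in range(j + 1):  # section m covers x[t..j]
                prev = 0.0 if (m == 1 and t == 0) else \
                       (E[m - 1][t - 1] if t > 0 else INF)
                E[m][j] = min(E[m][j], prev + cost[t][j])
    return min(E[m][n - 1] for m in range(1, k + 1))


if __name__ == "__main__":
    # A noisy single hump: one increasing and one decreasing section,
    # so k = 2 recovers the best increasing-then-decreasing fit.
    data = [1.0, 2.2, 3.1, 2.9, 4.0, 3.2, 2.1, 1.3]
    print(best_piecewise_monotonic(data, 2))
```

Note a design simplification in this sketch: the sections are disjoint blocks of adjacent data, whereas the paper treats the turning points more carefully; since each block alternates direction, the concatenated fit still has at most $k$ monotonic sections.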