Abstract

Certain analytical properties of Collaborative Optimization (CO) have been shown to cause convergence difficulties because the Jacobian of the discrepancy constraints vanishes at the solution. This paper continues the analysis of the approach, aiming both to improve it and to provide further insight into it. Vanishing discrepancy-constraint gradients can also cause the analytical derivation of CO solutions to fail, and post-optimality sensitivity analysis formulated specifically for CO may be inadequate for supplying the system-level optimization with analytical gradients of the discrepancy constraints. From the penalty-function viewpoint, CO with equality constraints on the discrepancies amounts to imposing an excessively large penalty from the very first iteration. Five possible improvements, each of a different character, are presented. They can greatly improve both the worst and the best of the approximate solutions obtained from different starting points; one of them reduces the worst error, relative to the exact solution, from 302.56% to 1.51%.
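For context, the vanishing-Jacobian issue can be sketched as follows. This is a minimal illustration in generic notation (not necessarily the paper's): z denotes the system-level target variables, x_i^* the optimum returned by subsystem i, and J_i the quadratic discrepancy measure used as a system-level equality constraint in the standard CO formulation. Because J_i is quadratic in the discrepancy, its gradient is proportional to the discrepancy itself and vanishes at every consistent point:

\[
\begin{aligned}
&\min_{z}\; F(z)\\
&\text{s.t.}\;\; J_i(z) \;=\; \bigl\lVert\, z - x_i^{*}(z) \,\bigr\rVert^{2} \;=\; 0, \qquad i = 1,\dots,N,\\[4pt]
&\nabla_{z} J_i \;=\; 2\bigl(z - x_i^{*}(z)\bigr) \;\longrightarrow\; 0 \quad \text{as } J_i \to 0,
\end{aligned}
\]

where the gradient expression follows from the usual post-optimality sensitivity result for the subsystem problem. At any feasible (consistent) point all discrepancy-constraint gradients are therefore zero, constraint qualifications fail, and the Lagrange multipliers of the system-level problem are not well defined, which is the source of the convergence and derivation difficulties summarized above. The penalty-function remark in the abstract reflects the same structure: enforcing J_i = 0 exactly corresponds to the infinite-penalty limit of a quadratic penalty on the discrepancies.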
