Abstract
Many optimization procedures presume the availability of an initial approximation in the neighborhood of a local or global optimum. Unfortunately, finding a set of good starting conditions is itself a nontrivial proposition. We describe a procedure for identifying approximate solutions to constrained optimization problems. Recurrent neural network structures are interpreted in the context of linear associative memory matrices. A recurrent associative memory (RAM) is trained to map the inputs of closely related transportation linear programs to optimal solution vectors. The procedure performs well when training cases are selected according to a simple rule, identifying good heuristic solutions for representative test cases. Modest infeasibilities remain in some of the estimated solutions, but the basic variables associated with the true optima are usually apparent; in the great majority of cases, rounding identifies the true optimum.
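The mapping step is easiest to see in its simplest, non-recurrent form. The sketch below is a least-squares linear associative memory (W = Y X⁺) built with NumPy/SciPy, not the paper's recurrent architecture: it trains W on perturbed instances of a balanced transportation LP and uses it to estimate the solution of a new instance. The problem dimensions, perturbation ranges, and helper names are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n = 3, 4                         # 3 supply nodes, 4 demand nodes (assumed sizes)
cost = rng.uniform(1, 10, (m, n))   # one cost matrix shared by all related instances

def solve_transport(supply, demand):
    """Solve one balanced transportation LP; return the flattened flow vector."""
    A_eq = np.zeros((m + n, m * n))
    for i in range(m):
        A_eq[i, i * n:(i + 1) * n] = 1      # row sums = supplies
    for j in range(n):
        A_eq[m + j, j::n] = 1               # column sums = demands
    b_eq = np.concatenate([supply, demand])
    res = linprog(cost.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    return res.x

# Training set: closely related instances (perturbed supplies and demands).
base_supply = np.array([20., 30., 25.])
base_demand = np.array([15., 20., 25., 15.])
X, Y = [], []
for _ in range(50):
    s = base_supply + rng.uniform(-3, 3, m)
    d = base_demand + rng.uniform(-3, 3, n)
    d *= s.sum() / d.sum()                  # rescale so the instance stays balanced
    X.append(np.concatenate([s, d]))        # LP inputs (supply/demand vector)
    Y.append(solve_transport(s, d))         # corresponding optimal solution vector
X, Y = np.array(X).T, np.array(Y).T         # one training case per column

# Linear associative memory: least-squares map from LP inputs to solutions.
W = Y @ np.linalg.pinv(X)

# Estimate the solution of a new, unseen instance, then round.
s_new = base_supply + rng.uniform(-3, 3, m)
d_new = base_demand + rng.uniform(-3, 3, n)
d_new *= s_new.sum() / d_new.sum()
x_hat = W @ np.concatenate([s_new, d_new])
print(np.round(x_hat, 1).reshape(m, n))     # compare with solve_transport(s_new, d_new)
```

Consistent with the abstract, an estimate like x_hat may be mildly infeasible (row and column sums slightly off), but rounding it is intended to expose which cells would be basic in the true optimal solution.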