Abstract
Many optimization procedures presume the availability of an initial approximation in the neighborhood of a local or global optimum. Unfortunately, finding a set of good starting conditions is itself a nontrivial problem. Our previous papers [1,2] describe procedures that use simple and recurrent associative memories to identify approximate solutions to closely related linear programs. In this paper, we compare the performance of a recurrent associative memory to that of a feed-forward neural network trained on the same data. The neural network's performance is much less promising than that of the associative memory. Modest infeasibilities exist in the estimated solutions provided by the associative memory, but the basic variables defining the optimal solutions to the linear programs are readily apparent.