Abstract

Tabu search is a global search algorithm that has become popular in recent years [F. Glover, 1989, 1990, 1997]. Its main principle is to keep a memory of the states that have already been investigated and to avoid revisiting them. At each step it considers the set of all neighboring states and moves to the best one, accepting this move even when it is worse than the current state. Tabu search thus emphasizes broadening the global search area and avoiding repeated exploration of the same region, which often allows it to reach better global solutions than purely local methods. It maintains a tabu list to memorize visited states and prevent recurrent search, and an aspiration criterion is used to reactivate a "tabued" state in the list when good global states may be found near it [D. Cvijovic, 1995]. In the past decade there has been growing interest in applying neural networks to many areas of science and engineering, such as pattern recognition and image processing [Jianming Lu, 2007], control [Jian-Xin Xu, 2007], optimization [Z. S. H. Chan, 2005], and communication [S. Y. Kung, et al., 1998]. Fundamentally, a neural network is a computing system characterized by the ability to learn from examples rather than having to be programmed in a conventional way, as is done in control engineering [K. J. Astrom, 1989]. The broad use of neural networks in many areas derives from their ability to approximate nonlinear functions; in theory, a three-layered neural network has been proved able to approximate unknown functions to any desired degree of accuracy [K. Funahashi, 1989; K. Hornik, 1989]. This chapter focuses on a tabu learning algorithm for neural networks in which an unknown function is approximated. The input of the network is given by the values of the function's variables, and the output is the estimate of the function; in mathematical terms, the objective is to find values for the weights of the network that best approximate the function. Gradient-based algorithms, especially the back-propagation (BP) algorithm [L. M. Salchenberger, 1992; P. Werbos, 1993] and its revised versions [R. Parisi, 1996; G. Zhou, 1998], are well known as supervised learning methods for multilayered neural networks. Gradient descent assumes that a maximal downhill movement, obtained by repeatedly moving in the direction of the negative gradient, will eventually reach the minimum of the function surface over its parameter space.
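To make the search loop described above concrete, the sketch below implements a generic tabu search on a toy one-dimensional objective. It is a minimal illustration of the general scheme (tabu list, best admissible neighbor even when worse, aspiration criterion), not the chapter's tabu learning rule for neural network weights; the function names and parameters (`tabu_size`, `max_iters`, the grid neighborhood) are illustrative assumptions.

```python
def tabu_search(f, start, neighbors, max_iters=200, tabu_size=20):
    """Minimal tabu search sketch (illustrative, not the chapter's algorithm).

    f         -- objective function to minimize
    start     -- initial state (hashable)
    neighbors -- function mapping a state to its candidate neighbor states
    """
    current = start
    best, best_cost = start, f(start)
    tabu = [start]                       # memory of recently visited states
    for _ in range(max_iters):
        # Aspiration criterion: a tabu state may still be accepted if it
        # improves on the best solution found so far.
        allowed = [s for s in neighbors(current)
                   if s not in tabu or f(s) < best_cost]
        if not allowed:
            break
        # Take the best admissible neighbor, even if it is worse than the
        # current state; this is how tabu search escapes local minima.
        current = min(allowed, key=f)
        tabu.append(current)
        if len(tabu) > tabu_size:        # bounded memory: forget oldest state
            tabu.pop(0)
        if f(current) < best_cost:
            best, best_cost = current, f(current)
    return best, best_cost

# Usage: minimize a 1-D quadratic on a grid of step 0.1.
f = lambda x: (x - 3.0) ** 2
nbrs = lambda x: [round(x - 0.1, 1), round(x + 0.1, 1)]
print(tabu_search(f, 0.0, nbrs))         # converges near x = 3.0
```

By contrast, the gradient-based methods cited above update the weights deterministically by moving against the gradient of the training error, $w \leftarrow w - \eta \nabla E(w)$ with learning rate $\eta$, which is why they can become trapped in local minima that the tabu mechanism is designed to escape.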
