Biological systems are self-organizing, tolerant of manufacturing defects, and they adapt to their environments rather than being programmed for them. The problems they solve involve the interaction of an organism or system with the real world. Bio-inspired computation is the use of computers to model nature and, at the same time, the study of natural biological systems to improve how computers are used. It is a major subset of natural computation. Bio-inspired computation, including evolutionary computation, swarm intelligence, bacterial foraging, cultural algorithms, neural networks, fuzzy systems, rough sets, molecular computing and other bionic methods, has become increasingly important in the face of the complexity of today's demanding applications. This special issue on bio-inspired computation is dedicated to the latest work on the theory and applications of this exciting area. Our aim is to provide a useful reference for understanding new trends in bio-inspired computation. After a detailed review process, eight papers were selected to reflect the thematic vision. The contents of these studies are briefly described as follows.

The group search optimizer (GSO) was inspired by animal social searching behaviour, and its global search performance has been shown to be competitive with other biologically inspired optimization algorithms. In the paper 'Analysis of premalignant pancreatic cancer mass spectrometry data for biomarker selection using a group search optimizer', S. He et al. apply a GSO as a feature selection method in mass spectrometry (MS) data analysis for premalignant pancreatic cancer biomarker discovery. After a smoothed nonlinear energy operator (SNEO) is applied to detect peaks, a GSO combined with linear discriminant analysis (LDA) is used to select a parsimonious set of peak windows (biomarkers) that can distinguish cancer cases. Once the biomarkers have been selected, a support vector machine (SVM) is applied to build a classifier for diagnosing premalignant cancer cases. The GSO algorithm was compared with a genetic algorithm (GA), evolution strategies (ES), evolutionary programming (EP) and a particle swarm optimizer (PSO); the results showed that the GSO-based feature selection algorithm selects a parsimonious set of biomarkers that achieves better classification performance than the other algorithms. A schematic sketch of this kind of wrapper-style selection pipeline is given below.

In the paper 'Viral system algorithm: foundations and comparison between selective and massive infections', Pablo Cortes et al. present a guided and thorough introduction to viral systems (VS), a novel bio-inspired methodology based on the natural biological process that takes place when an organism has to respond to an external infection. VS has proven to be very efficient when dealing with problems of high complexity. The paper discusses the foundations of viral systems, presents the main pseudocodes that need to be implemented, and illustrates the application of the methodology. A comparison between VS and other metaheuristics, as well as between different VS approaches, is presented. Finally, trends and new research opportunities for this bio-inspired methodology are outlined.
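As a rough illustration of the wrapper-style pipeline described for the first paper (SNEO peak detection, a population-based search scored by LDA, and a final SVM diagnostic classifier), the following Python sketch substitutes a simple stochastic bit-flip search for the full GSO. All data shapes, parameter values and helper names are assumptions made for the example, not the authors' implementation.

```python
"""Illustrative wrapper-style biomarker selection sketch (NOT the GSO of
He et al.): a simple stochastic bit-flip search stands in for the group
search optimizer; shapes, names and parameters are assumptions."""
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def sneo(x, window=11):
    """Smoothed nonlinear energy operator psi[n] = x[n]^2 - x[n-1]*x[n+1],
    smoothed with a moving-average kernel; in the real pipeline this would be
    applied to each spectrum to emphasise peaks before windows are extracted."""
    psi = x[1:-1] ** 2 - x[:-2] * x[2:]
    return np.convolve(psi, np.ones(window) / window, mode="same")

def lda_fitness(X, y, mask, penalty=0.01):
    """Cross-validated LDA accuracy minus a parsimony penalty on mask size."""
    if not mask.any():
        return -np.inf
    acc = cross_val_score(LinearDiscriminantAnalysis(), X[:, mask], y, cv=5).mean()
    return acc - penalty * mask.sum()

def select_biomarkers(X, y, n_iter=200):
    """Stand-in stochastic search over binary feature masks (GSO placeholder)."""
    best = rng.random(X.shape[1]) < 0.1          # sparse random starting mask
    best[rng.integers(X.shape[1])] = True        # ensure at least one window
    best_fit = lda_fitness(X, y, best)
    for _ in range(n_iter):
        cand = best.copy()
        j = rng.integers(X.shape[1])             # flip one randomly chosen window
        cand[j] = ~cand[j]
        fit = lda_fitness(X, y, cand)
        if fit > best_fit:
            best, best_fit = cand, fit
    return np.flatnonzero(best)

# Toy data standing in for peak-window intensities and labels (cancer vs. control);
# a real run would build X from MS spectra via sneo() and peak-window extraction.
X = rng.normal(size=(80, 60))
y = np.repeat([0, 1], 40)
selected = select_biomarkers(X, y)
clf = SVC(kernel="rbf").fit(X[:, selected], y)   # final diagnostic classifier
print("selected peak windows:", selected)
```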
Population-based heuristic optimization methods such as differential evolution (DE) depend largely on the generation of the initial population. The initial population not only affects the search for several iterations but often also has an influence on the final solution. The conventional method for generating an initial population is to use computer-generated pseudorandom numbers, which may not be very effective. In the paper 'A simplex differential evolution algorithm: development and applications', Musrrat Ali et al. investigate the potential of generating the initial population by integrating the non-linear simplex method (NSM) of Nelder and Mead with pseudorandom numbers in a DE algorithm. The resulting algorithm, named non-linear simplex differential evolution (NSDE), is tested on a set of 20 benchmark problems with box constraints and on two real-life problems. Numerical results show that the proposed initialization scheme significantly improves the performance of DE in terms of fitness function value, convergence rate and average CPU time. A rough sketch of this simplex-assisted initialization idea is given at the end of this section.

In the paper 'How to design a powerful family of particle swarm optimizers in inverse modelling', Juan Luis Fernandez Martinez et al. show how to design a powerful set of particle swarm optimizers for application in inverse modelling.
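To make the simplex-assisted initialization idea concrete, the following sketch refines part of a conventionally generated pseudorandom DE population with a short, budget-limited Nelder-Mead run before any DE iterations start. It is only an illustration under assumed settings (the population size, refinement fraction, simplex budget and Rastrigin test function are choices made for the example), not the NSDE algorithm of Ali et al.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(42)

def rastrigin(z):
    """Standard multimodal test function, used here only as a stand-in objective."""
    return 10 * z.size + float(np.sum(z ** 2 - 10 * np.cos(2 * np.pi * z)))

def init_population(objective, lo, hi, pop_size=30, refine_frac=0.5, nm_steps=20):
    """Generate a pseudorandom initial population and refine a fraction of the
    individuals with a short, budget-limited Nelder-Mead local search."""
    pop = rng.uniform(lo, hi, size=(pop_size, lo.size))  # conventional random init
    for i in range(int(refine_frac * pop_size)):
        res = minimize(objective, pop[i], method="Nelder-Mead",
                       options={"maxiter": nm_steps, "xatol": 1e-8, "fatol": 1e-8})
        pop[i] = np.clip(res.x, lo, hi)                   # keep refined point in bounds
    return pop

# Assumed 5-dimensional box-constrained problem; a DE run would start from `population`.
lo, hi = np.full(5, -5.12), np.full(5, 5.12)
population = init_population(rastrigin, lo, hi)
fitness = np.array([rastrigin(ind) for ind in population])
print("best initial fitness:", fitness.min())
```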