Abstract

Decomposing a large-scale problem into smaller subproblems is one approach used to overcome the performance deterioration that Evolutionary Algorithms (EAs) typically suffer as dimensionality grows. For a decomposition approach to perform well, interdependent variables need to be grouped into the same subproblem. In this paper, the Hybrid Dependency Identification with Memetic Algorithm (HDIMA) model is proposed for large-scale optimization problems. A Dependency Identification (DI) technique identifies the variables that must be grouped together to form the subproblems, which are then evolved using a Memetic Algorithm (MA). Before the end of the evolution process, the subproblems are aggregated and optimized as a complete large-scale problem. A newly designed test suite has been used to evaluate the performance of HDIMA over different dimensions. The evaluation shows that HDIMA is competitive with other models in the literature, consuming fewer computational resources while achieving better performance.

Keywords: Large Scale Problems Optimization, Evolutionary Algorithms, Memetic Algorithms, Problem Decomposition, Dependency Identification
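The workflow the abstract describes, grouping variables into subproblems, evolving each subproblem with a memetic loop (evolutionary search plus local refinement), and finally aggregating and optimizing the full variable vector, can be illustrated with a minimal sketch. This is not the authors' HDIMA implementation: the grouping rule, operators, objective function, and parameters below are assumptions chosen only to show the decompose-evolve-aggregate structure.

```python
# Illustrative decompose-evolve-aggregate sketch on a separable sphere function.
# All helper names (group_variables, evolve_subproblem, local_search) are
# hypothetical and stand in for the DI and MA components described in the paper.
import random

DIM = 20          # problem dimensionality
GROUP_SIZE = 5    # subproblem size produced by the (here: naive) grouping step
BOUNDS = (-5.0, 5.0)

def sphere(x):
    """Fully separable benchmark objective: sum of squares."""
    return sum(v * v for v in x)

def group_variables(dim, size):
    """Stand-in for Dependency Identification: fixed-size index blocks."""
    return [list(range(i, min(i + size, dim))) for i in range(0, dim, size)]

def local_search(x, idx, step=0.1):
    """Simple coordinate-wise hill climb (the 'memetic' local refinement)."""
    for i in idx:
        for delta in (-step, step):
            trial = x[:]
            trial[i] += delta
            if sphere(trial) < sphere(x):
                x = trial
    return x

def evolve_subproblem(x, idx, iters=50):
    """(1+1)-style evolution restricted to the variables of one subproblem."""
    for _ in range(iters):
        trial = x[:]
        for i in idx:
            trial[i] += random.gauss(0.0, 0.2)
        if sphere(trial) < sphere(x):
            x = trial
    return local_search(x, idx)

random.seed(0)
solution = [random.uniform(*BOUNDS) for _ in range(DIM)]

# Phase 1: optimize each subproblem with the remaining variables held fixed.
for idx in group_variables(DIM, GROUP_SIZE):
    solution = evolve_subproblem(solution, idx)

# Phase 2: aggregate and optimize the complete problem before the budget ends.
solution = evolve_subproblem(solution, list(range(DIM)), iters=200)
print("final objective:", sphere(solution))
```

In the paper's setting, the fixed-size grouping above would be replaced by the DI technique, which places interdependent variables in the same block before the memetic phase begins.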
