Abstract

To improve the global convergence speed of the social cognitive optimization (SCO) algorithm, a hybrid social cognitive optimization (HSCO) algorithm based on an elitist strategy and chaotic optimization is proposed for solving constrained nonlinear programming problems (NLPs). The proposed algorithm partitions the learning agents proportionally into three groups: elite learning agents, chaotic learning agents, and common learning agents. Common learning agents search as in traditional SCO; chaotic learning agents search via a chaotic search (CS) algorithm based on the tent map, which helps to avoid premature convergence; and elite learning agents search via elitist selection, which helps to improve global search performance. Additionally, a chaotic search process is incorporated into the local search operation to enhance local search efficiency in the neighborhoods of feasible solutions. Simulation results on a set of benchmark problems show that the proposed algorithm achieves high optimization efficiency, good global search performance, and stable optimization outcomes on constrained NLPs.
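The abstract describes the chaotic components only at a high level. The sketch below illustrates, under stated assumptions, how a tent-map-driven chaotic search of the kind mentioned here is typically realized; the function name, parameters (radius, iters), the mapping from the chaotic sequence to candidate points, and the omission of constraint handling are illustrative choices, not the paper's implementation.

```python
import numpy as np

def chaotic_search(f, center, radius, lower, upper, iters=60, rng=None):
    """Tent-map-driven chaotic search in a neighborhood of `center`.

    A minimal sketch of the general technique; the paper's exact update
    rule, step sizes, stopping criteria, and constraint handling are not
    given in the abstract and are assumed here for illustration.
    """
    rng = np.random.default_rng() if rng is None else rng
    center = np.asarray(center, dtype=float)
    dim = center.size
    z = rng.uniform(0.01, 0.99, dim)            # chaotic state per dimension
    best_x, best_f = center.copy(), f(center)
    for _ in range(iters):
        # Tent map update: z -> 2z if z < 0.5, else 2(1 - z).
        z = np.where(z < 0.5, 2.0 * z, 2.0 * (1.0 - z))
        # Tiny perturbation guards against the sequence collapsing to a
        # fixed point under finite-precision arithmetic.
        z = np.clip(z + 1e-12 * rng.random(dim), 1e-9, 1.0 - 1e-9)
        # Map the chaotic variable into the neighborhood of the current point.
        cand = np.clip(center + radius * (2.0 * z - 1.0), lower, upper)
        fc = f(cand)
        if fc < best_f:                          # keep the best point seen
            best_x, best_f = cand, fc
    return best_x, best_f

# Example usage: chaotic local search on the sphere function.
if __name__ == "__main__":
    sphere = lambda x: float(np.sum(x ** 2))
    x, fx = chaotic_search(sphere, center=[0.5, -0.3], radius=0.2,
                           lower=-1.0, upper=1.0, iters=100)
    print(x, fx)
```

In this reading, the same routine could serve both the chaotic learning agents (searching from their current positions) and the chaotic refinement of the local search step around feasible solutions; how the three agent groups are sized and coordinated is specified in the paper itself, not in the abstract.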
