Abstract

This study introduced the Inversed Bi-segmented Average Crossover (IBAX), a novel crossover operator that enhances offspring generation in the genetic algorithm (GA) for variable minimization and numerical optimization problems. The new mating scheme embodied in the IBAX operator yields a more efficient and better-optimized solution for variable minimization, particularly in addressing the GA's premature convergence problem. The dataset comprised 597 records of student respondents, described by 30 variables, from the evaluation of faculty instructional performance in four State Universities and Colleges (SUC) in the Caraga Region, Philippines. Simulation results showed that the proposed modification of the Average Crossover (AX) outperformed the genetic algorithm with the original AX operator: the GA with the IBAX operator combined with rank-based selection removed 20 (66.66%) of the variables, whereas the GA with the AX operator and roulette wheel selection removed only 13 (43.33%).
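The abstract does not reproduce the full IBAX formulation, so as a point of reference the sketch below shows only the baseline Average Crossover (AX) that IBAX modifies, assuming a real-coded chromosome whose genes are later thresholded into keep/drop decisions for the 30 variables. The function name, the 30-gene encoding, and the 0.5 threshold are illustrative assumptions, not the authors' implementation.

```python
import random

def average_crossover(parent_a, parent_b):
    """Baseline Average Crossover (AX): each offspring gene is the
    arithmetic mean of the corresponding parent genes."""
    return [(a + b) / 2.0 for a, b in zip(parent_a, parent_b)]

# Illustrative use with a 30-gene chromosome (one gene per candidate variable).
random.seed(1)
p1 = [random.random() for _ in range(30)]
p2 = [random.random() for _ in range(30)]
child = average_crossover(p1, p2)

# Hypothetical decoding step: genes above 0.5 keep the variable, others drop it.
keep_mask = [g > 0.5 for g in child]
print(sum(keep_mask), "of 30 variables retained")
```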

Highlights

  • Data preprocessing [1,2,3] is an imperative step and one of the prime methods in data mining (DM); it enhances data quality, which in turn improves the precision, accuracy, and mining efficiency of a prediction model [4, 5]. Data reduction, an important data preprocessing technique in DM, is achieved by selecting and removing unnecessary attributes or variables from the dataset [6]

  • Simulation results for the genetic algorithm (GA) with the AX operator and the Roulette Wheel Selection (RWS) function

  • The genetic algorithm simulation was run for ten generations using the existing traditional average crossover and roulette wheel selection function; a minimal sketch of roulette wheel selection follows this list
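Since the highlights refer to roulette wheel selection, the sketch below shows generic fitness-proportionate (roulette wheel) selection. The toy population, fitness values, and sample counts are placeholders, not the study's configuration.

```python
import random

def roulette_wheel_select(population, fitnesses):
    """Fitness-proportionate selection: an individual's chance of being
    picked equals its fitness divided by the population's total fitness."""
    total = sum(fitnesses)
    pick = random.uniform(0.0, total)
    running = 0.0
    for individual, fit in zip(population, fitnesses):
        running += fit
        if running >= pick:
            return individual
    return population[-1]  # guard against floating-point round-off

# Toy illustration: higher-fitness individuals are selected more often.
random.seed(1)
population = ["A", "B", "C", "D"]
fitnesses = [1.0, 2.0, 3.0, 4.0]
picks = [roulette_wheel_select(population, fitnesses) for _ in range(10_000)]
print({ind: picks.count(ind) for ind in population})
```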


Summary

Introduction

Data reduction, as an important data preprocessing technique in DM, is achieved through the selection and removal of unnecessary attributes or variables from the dataset [6]. It is well known that in some cases it is advisable to reduce the original training set or variables by selecting the most representative information, while obtaining nearly the same result or data-driven output [7,8,9]. Minimizing the size of the dataset helps increase the generalization ability of the model. Improved accuracy through a reduced number of attributes [11, 6] and better understandability and interpretability of results are among the many benefits of data reduction [12].
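To make the data-reduction idea concrete, the sketch below removes the variables (columns) flagged for removal from a small tabular dataset, as a GA-selected variable subset would. The records, variable names, and keep/drop mask are made up for illustration.

```python
# Minimal data-reduction sketch: drop columns flagged for removal.
records = [
    {"v1": 4, "v2": 5, "v3": 3},
    {"v1": 2, "v2": 4, "v3": 5},
]
keep = {"v1": True, "v2": False, "v3": True}  # e.g. decoded from a GA chromosome

reduced = [{k: row[k] for k in row if keep[k]} for row in records]
print(reduced)  # v2 has been removed from every record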


