Abstract

Dimension reduction, or feature selection, is widely regarded as the backbone of big data applications because of the performance gains it enables. In recent years, many researchers have turned their attention to data science and analytics for real-time applications built on big data integration. Interacting with big data manually is prohibitively time-consuming, so feature selection must be made elastic and scalable to handle high workloads in a distributed system. In this study, a survey of alternative optimization techniques for feature selection is presented, together with an analysis of their limitations. This study contributes to the development of a method for improving the efficiency of feature selection on large, complex data sets.

Highlights

  • Categorization is an important task in data mining and machine learning that assigns each example in a data collection to one of several groups based on the information provided by its attributes

  • Multi-objective optimization problems [3] arise in real-world applications and involve many competing goals

  • The Pareto technique is used to develop compromise solutions that can be presented as a Pareto optimal front (POF) when the intended solutions and performance measures are distinct (see the sketch after this list)
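
A minimal sketch of the Pareto idea mentioned above, for a bi-objective feature-selection setting where both objectives are minimized. The objective pair (classification error, number of selected features) and the candidate values are illustrative assumptions, not results from the paper.

```python
# Sketch: Pareto dominance and extraction of the Pareto optimal front (POF)
# for candidate feature subsets scored on two objectives, both minimized:
# (classification error, number of selected features). Values are hypothetical.

def dominates(a, b):
    """True if solution a Pareto-dominates b: a is no worse in every
    objective and strictly better in at least one (lower is better)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Keep only the non-dominated solutions, i.e. the Pareto optimal front."""
    return [s for s in solutions
            if not any(dominates(other, s) for other in solutions if other != s)]

if __name__ == "__main__":
    # (error rate, number of features) for a few hypothetical feature subsets
    candidates = [(0.12, 30), (0.15, 10), (0.12, 25), (0.20, 5), (0.18, 12)]
    print(pareto_front(candidates))   # -> [(0.15, 10), (0.12, 25), (0.20, 5)]
```

Each point on the resulting front trades classification error against subset size, which is the "compromise solution" the Pareto technique produces.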


Summary

INTRODUCTION

Categorization is an important task in data mining and machine learning that assigns each example in a data collection to one of several groups based on the information provided by its attributes. Published work on optimization suggests solving such problems with swarm intelligence and evolutionary approaches; these began as static optimization methods and have since produced environment-based techniques that can solve an integrated multi-objective problem so as to reduce or improve the resulting feature set. Multi-objective optimization problems [3] arise in real-world applications and involve many competing goals. Formulating feature selection as a multi-objective optimization process can bring benefits whether the classification approach is supervised or unsupervised. A multi-objective optimization strategy that suitably combines classifier performance and the number of selected attributes provides a reasonable formulation of this problem. Problems with multiple goals are referred to as multi-objective optimization (MOO) problems, and the multi-objective evolutionary algorithm (MOEA) is a stochastic optimization technique for solving them.
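
A minimal sketch of the wrapper-style, bi-objective formulation described above: each candidate feature subset (a boolean mask) is scored on (1) cross-validated classification error and (2) the number of features it keeps. A MOEA would evolve a population of such masks and retain the non-dominated ones; only the evaluation step is shown. The use of scikit-learn, a k-nearest-neighbour classifier, and the breast-cancer dataset are illustrative assumptions, not choices taken from the paper.

```python
# Sketch: evaluating candidate feature subsets on the two objectives used in a
# multi-objective feature-selection formulation (error rate, subset size).
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def evaluate(mask, X, y):
    """Return (error rate, number of selected features) for one subset."""
    if not mask.any():                      # an empty subset cannot classify
        return 1.0, 0
    acc = cross_val_score(KNeighborsClassifier(), X[:, mask], y, cv=5).mean()
    return 1.0 - acc, int(mask.sum())

if __name__ == "__main__":
    X, y = load_breast_cancer(return_X_y=True)
    rng = np.random.default_rng(0)
    # Score a few random masks; a MOEA would instead generate these via
    # selection, crossover, and mutation and keep the Pareto-optimal ones.
    for _ in range(3):
        mask = rng.random(X.shape[1]) < 0.5
        print(evaluate(mask, X, y))
```

The same dominance test from the earlier sketch can then be applied to these (error, size) pairs to obtain the Pareto optimal front over feature subsets.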
