Abstract

Evaluation metrics play an important role in assessing the performance of a regression method. In practice, multiple evaluation metrics can be used in two ways. The first defines a loss function by aggregating the metrics, while the second defines a multiobjective loss function that treats each metric as a separate objective. In this paper, we propose a new way to use multiple evaluation metrics that differs from both the aggregation method and the multiobjective method. Our method is based on genetic programming: the idea is to randomly choose one metric in each application of the selection operator, so that multiple metrics are used alternately over the course of a run. To validate the effectiveness of the new approach, we conduct experiments on ten benchmark datasets. The experimental results show that the new approach improves population diversity and achieves performance better than or similar to that of traditional symbolic regression algorithms.
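
The sketch below illustrates the core idea of the abstract: a selection operator that draws one evaluation metric at random for each selection event, so that different metrics guide selection at different points in the run. The specific metrics (MSE, MAE, RMSE), the tournament-selection form, and the representation of individuals are assumptions made for illustration; the paper's abstract only states that one metric is randomly chosen per selection.

```python
import random
import numpy as np

# Candidate regression metrics, all "lower is better".
# These particular metrics are illustrative assumptions.
def mse(y_true, y_pred):
    return float(np.mean((y_true - y_pred) ** 2))

def mae(y_true, y_pred):
    return float(np.mean(np.abs(y_true - y_pred)))

def rmse(y_true, y_pred):
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

METRICS = [mse, mae, rmse]

def tournament_select(population, predictions, y_true, k=3, rng=random):
    """Pick one parent by a k-way tournament.

    A single metric is drawn at random for this selection event, so
    successive calls may rank the same individuals differently --
    this is the alternating-metric idea described in the abstract.

    population  : list of candidate programs (any representation)
    predictions : list of prediction arrays, aligned with `population`
    y_true      : array of target values
    """
    metric = rng.choice(METRICS)  # one metric per selection event
    contestants = rng.sample(range(len(population)), k)
    best = min(contestants, key=lambda i: metric(y_true, predictions[i]))
    return population[best]
```

Because the metric is resampled on every call, individuals that are strong under only one metric can still survive, which is one plausible mechanism behind the diversity gains reported in the abstract.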
