Abstract

Evaluation metrics play an important role in assessing the performance of a regression method. In practice, multiple evaluation metrics can be used in two ways. The first defines a loss function by aggregating several metrics, while the second defines a multiobjective loss function that treats each metric as a separate objective. In this paper, we propose a new way to use multiple evaluation metrics that differs from both the aggregating and the multiobjective approaches. Our method is based on genetic programming: the idea is to randomly select one metric in each iteration of the selection operator, so that multiple metrics are used alternately during the run. To validate the effectiveness of the new approach, we conduct experiments on ten benchmark datasets. The experimental results show that the new approach improves population diversity and achieves performance better than or comparable to that of traditional symbolic regression algorithms.
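
The selection mechanism can be illustrated with a minimal sketch. The Python code below is a hypothetical illustration, not the paper's implementation: the metric set (MSE, MAE, RMSE), the use of tournament selection, and the toy population of constant predictors are all assumptions made for demonstration. The key point it shows is that each call to the selection operator draws one metric at random and ranks candidates by that metric, so different selection events during a run may use different metrics.

```python
import random
import numpy as np

# Illustrative metric set; the actual metrics used in the paper may differ.
def mse(y_true, y_pred):
    return float(np.mean((y_true - y_pred) ** 2))

def mae(y_true, y_pred):
    return float(np.mean(np.abs(y_true - y_pred)))

def rmse(y_true, y_pred):
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

METRICS = [mse, mae, rmse]

def tournament_select(population, y_true, k=3):
    """Tournament selection that draws ONE metric at random for this call,
    so metrics alternate across selection events during the GP run."""
    metric = random.choice(METRICS)
    contenders = random.sample(population, k)
    # Each individual is assumed to be a callable returning its predictions
    # on the training set; a real GP individual would evaluate its tree.
    return min(contenders, key=lambda ind: metric(y_true, ind()))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    y = rng.normal(size=50)
    # Toy "population": each individual predicts a single constant value.
    population = [(lambda c=c: np.full_like(y, c)) for c in np.linspace(-1, 1, 20)]
    winner = tournament_select(population, y)
    print("selected constant prediction:", winner()[0])
```

Because the metric is resampled on every selection event, individuals that are strong under different metrics can all survive, which is the intuition behind the reported gain in population diversity.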
