Abstract

With the growing integration of artificial intelligence into everyday life, the settings in which intelligent decision methods are applied are becoming increasingly complex. The development of machine learning algorithms has benefited every discipline of study, but selecting the algorithm best suited to a given problem from among the many available is a challenge every field must address. A further challenge at the level of practical application is that machine learning models trained on large amounts of data are largely black boxes and thus uninterpretable; this makes them potentially risky and hard to trust, hindering their adoption in sensitive domains such as finance and healthcare. The first challenge can be addressed with meta-learning, which combines data and prior knowledge to select machine learning models efficiently and automatically. The second remains open, owing to the lack of interpretability of traditional meta-learning techniques and their deficiencies in transparency and fairness. Making meta-learning interpretable in autonomous algorithm selection for classification is therefore crucial for balancing the demands for high accuracy and transparency that machine learning models face in practical applications. This paper proposes EFFECT, an interpretable meta-learning framework that explains the recommendations produced by meta-learning-based algorithm selection and, in combination with business scenarios, provides a more complete and accurate account of a recommended algorithm's performance on specific datasets. Extensive experiments demonstrate the validity and correctness of the framework.
