Abstract

In critical situations involving discrimination, gender inequality, economic damage, and even the possibility of casualties, machine learning models must be able to provide clear interpretations of their decisions. Otherwise, their opaque decision-making processes can lead to socioethical issues as they interfere with people's lives. Random forest algorithms excel in such domains, where the ability to explain their decisions is an essential requirement. In this paper, we present LionForests, a random forest-specific interpretation technique that builds on our preliminary work and provides rules as explanations. It applies to binary classification tasks as well as multi-class classification and regression tasks, and it is supported by a stable theoretical background. A time and scalability analysis suggests that LionForests is considerably faster than our preliminary work and is also applicable to large datasets. Experimentation, including a comparison with state-of-the-art techniques, demonstrates the efficacy of our contribution: LionForests outperformed the other techniques in terms of precision, variance, and response time, but fell short in terms of rule length and coverage. Finally, we highlight conclusiveness, a unique property of LionForests that provides interpretation validity and distinguishes it from previous techniques.
