Abstract

In an era marked by the transformative impact of machine learning algorithms across disciplines, achieving model interpretability remains a persistent challenge. Existing evaluation datasets often lack transparency, obscuring the decision-making process of machine learning models, particularly in complex deep learning architectures. This opacity raises concerns in sectors such as healthcare, underscoring the pivotal role of explainability in fostering trust and complying with regulatory norms. While progress has been made through the development of interpretable models, the absence of formalized, interpretable datasets hampers the validation and comparison of interpretability techniques. Rule-based datasets, distinct from general synthetic datasets, provide an avenue to simulate real-world challenges while preserving interpretability. This paper introduces FormulAI, a framework for generating comprehensive rule-grounded datasets encompassing categorical and continuous features, calibrated noise, and imbalanced class distributions. Emphasizing scalability and reproducibility, these datasets serve as a robust benchmark, fostering exploration in interpretability and robustness.
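
The abstract does not expose FormulAI's actual API, but the recipe it describes (an explicit labeling rule over mixed feature types, calibrated label noise, and controlled class imbalance) can be illustrated with a minimal sketch. Everything below, including the function name `generate_rule_based_dataset`, the specific rule, and all parameter names, is a hypothetical illustration of the general idea, not FormulAI's interface.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=42)  # fixed seed for reproducibility

def generate_rule_based_dataset(n_samples=10_000, noise_rate=0.05,
                                minority_fraction=0.2):
    """Toy rule-grounded dataset: the label is fully determined by an
    explicit, human-readable rule, then corrupted with calibrated noise
    and rebalanced to a target class ratio."""
    # Mixed feature types: two continuous, one categorical.
    x1 = rng.normal(loc=0.0, scale=1.0, size=n_samples)    # continuous
    x2 = rng.uniform(low=0.0, high=10.0, size=n_samples)   # continuous
    cat = rng.choice(["A", "B", "C"], size=n_samples)      # categorical

    # Ground-truth rule (the "formula" behind the labels):
    # y = 1 iff (x1 > 0.5 AND cat == "A") OR x2 > 8.0
    y = (((x1 > 0.5) & (cat == "A")) | (x2 > 8.0)).astype(int)

    # Calibrated label noise: flip exactly noise_rate of the labels.
    flip_idx = rng.choice(n_samples, size=int(noise_rate * n_samples),
                          replace=False)
    y[flip_idx] = 1 - y[flip_idx]

    df = pd.DataFrame({"x1": x1, "x2": x2, "cat": cat, "label": y})

    # Controlled imbalance: downsample positives so they make up
    # minority_fraction of the final dataset.
    pos, neg = df[df.label == 1], df[df.label == 0]
    n_pos = int(minority_fraction * len(neg) / (1 - minority_fraction))
    n_pos = min(n_pos, len(pos))
    df = pd.concat([neg, pos.sample(n=n_pos, random_state=42)])
    return df.sample(frac=1.0, random_state=42).reset_index(drop=True)

df = generate_rule_based_dataset()
print(df["label"].mean())  # ≈ minority_fraction after rebalancing
print(df.head())
```

Because the labeling rule is stated explicitly, any explanation produced by an interpretability method can be checked directly against it, which is the property the abstract attributes to rule-based datasets.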
