Abstract

Artificial intelligence (AI) and machine learning (ML) technologies are regarded as something of a Holy Grail by researchers across the world. The applications of AI and ML are proving disruptive across the global technological spectrum, and practically no area has been left untouched, from computer science to manufacturing, healthcare, insurance, credit ratings, cybersecurity, and many more. It would not be an exaggeration to say that they are the next big thing after the advent of the Internet, with a potentially similar impact on human lives. Whilst most researchers applying machine learning across diverse domains do not need to look beyond the model abstraction for their work, understanding what is happening beneath the surface is sometimes necessary. This becomes especially important when the predictions seem too good to be true and the researcher running the model is unsure of its validity because the logic behind the prediction is obscure. Feature engineering brings more accuracy to predictions, but in the absence of intuitive background information about the features, the task becomes more challenging. Scientific reasoning has been driven by logic through the ages, and the scientific community remains sceptical of results unless useful insights can be extracted from black box ML models. This paper applies five popular explainability algorithms used by the research community to demystify the abstract nature of black box ML models, and compares, from a practitioner's perspective, the relative clarity of the insights each provides on the publicly available UCI wine quality dataset.

Keywords: Machine learning, Black box models, Feature engineering, Interpretability, Explainability
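
The abstract does not name the five explainability algorithms the paper compares. As a hedged illustration only, the sketch below shows how one widely used, model-agnostic technique (SHAP, via the third-party shap library) could be applied to a black box model trained on the UCI wine quality data; the dataset URL, the random forest model, and the choice of SHAP are assumptions made for illustration, not the paper's actual pipeline.

    # Illustrative sketch (assumed workflow): explaining a black box model
    # trained on the UCI wine quality dataset with SHAP. The paper's five
    # algorithms are not named in the abstract; SHAP is used here only as
    # one example of such an explainability technique.
    import pandas as pd
    import shap
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split

    # Red wine subset of the UCI wine quality dataset (semicolon-separated).
    URL = ("https://archive.ics.uci.edu/ml/machine-learning-databases/"
           "wine-quality/winequality-red.csv")
    data = pd.read_csv(URL, sep=";")

    X = data.drop(columns="quality")   # physicochemical features
    y = data["quality"]                # sensory quality score

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42)

    # Fit an otherwise "black box" ensemble model.
    model = RandomForestRegressor(n_estimators=200, random_state=42)
    model.fit(X_train, y_train)

    # Attribute each prediction to the input features via SHAP values and
    # summarise feature contributions across the test set.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X_test)
    shap.summary_plot(shap_values, X_test)

The resulting summary plot ranks features by their mean absolute SHAP value, which is the kind of insight into a black box model's reasoning that the paper evaluates across the five algorithms.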
