Abstract

The increasing use of intelligent technologies and the growing development and deployment of machine learning systems across many spheres of life create a need to explain the machine-learning-based decisions such systems make. This need for interpretation is driving the development of new methods for interpreting machine learning models and their more intensive use in real-world systems. This paper reviews existing studies that apply interpretable machine learning (IML) methods in the social sciences and summarizes the results using bibliometric analysis. In total, seven research topics were identified across 210 papers. The paper also discusses the opportunities, limitations, and challenges of the interpretable machine learning approach in social science research.

Keywords: Explainable AI; Machine learning; Research design; Social sciences
