Abstract

Explainability has become one of the most discussed topics in AI in recent years, and with good reason. AI-powered systems have grown so complex that humans can no longer understand how they reach decisions, an opacity that stands as an obstacle to AI adoption. Explanations of an AI system's decision-making process can provide the trust we need. When AI is asked to show its work, the demanded explainability, which usually includes transparency, creates accountability for the developers of these systems and urges them to reappraise their models; improved AI-based systems are typically the result of such reappraisal. In addition, an AI system whose inference processes or results can be explained makes users more inclined to trust its recommendations, leading to increased usage and adoption. Explainability thus paints a promising future for AI, in which both developers and users benefit from explainable and trustworthy AI-based systems.

