Abstract

Deep Learning achieves impressive performance on many real-world tasks. However, as a black-box approach, its techniques are often applied without a strong critical understanding of their properties. In this paper, we review current methodologies and techniques for improving the interpretability of Deep Learning from several research directions: some works analyze the learning process, some emphasize interpretable network architectures, and others aim to design self-interpretable Deep Learning models. This article analyzes popular and advanced works in these areas and offers an outlook for Deep Learning researchers.
