Abstract

With the widespread adoption of object detection models across industries, the interpretability of these detectors has become an important research topic. Interpretability helps humans understand which image regions contribute most to a model's decision; it also enhances the credibility of detectors and helps identify their strengths and weaknesses. Because saliency maps provide intuitive explanations of model decisions, they have been widely employed for interpreting deep models. Model-agnostic interpretability methods are the more general approach, as they treat the model as a black box without considering its internal structure. However, existing model-agnostic methods often introduce "noise" into saliency maps through random masking and a fixed masking granularity, which degrades the quality and interpretability of the generated maps. To address this challenge and obtain more interpretable saliency maps for object detection models, this paper proposes MAPSM, a model-agnostic progressive saliency map generation method built on a hierarchical framework. MAPSM introduces an adaptive masking partition mechanism that adapts the masking granularity to different object sizes, and employs a saliency-driven mask generation strategy to effectively reduce the "noise". Using its hierarchical framework, MAPSM progressively discovers and refines the salient regions of objects, yielding more interpretable saliency maps. To evaluate the quality of the saliency maps generated by MAPSM, we compare it with other methods on multiple metrics. Experimental results demonstrate that our method produces saliency maps with better quality and interpretability.
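The abstract refers to model-agnostic saliency maps produced by randomly masking the input and weighting each mask by the model's score. Below is a minimal sketch of that random-masking baseline with a fixed grid granularity, i.e. the approach whose "noise" MAPSM aims to reduce, not MAPSM itself. The function name, grid size, and the toy `score_fn` are illustrative assumptions, not details from the paper.

```python
import numpy as np

def masking_saliency(image, score_fn, n_masks=200, grid=8, p=0.5, seed=0):
    """Random-masking saliency sketch for a 2D (grayscale) image.

    Averages binary masks weighted by the black-box model's score on the
    masked input. The grid size is fixed here; MAPSM instead adapts the
    masking granularity to the object's size.
    """
    rng = np.random.default_rng(seed)
    H, W = image.shape
    sal = np.zeros((H, W))
    total = 0.0
    for _ in range(n_masks):
        # Coarse binary grid, upsampled to image resolution (fixed granularity).
        coarse = (rng.random((grid, grid)) < p).astype(float)
        mask = np.kron(coarse, np.ones((H // grid, W // grid)))
        # Score of the black-box model on the masked input (here a toy stand-in
        # for a detector's confidence on the target object).
        s = score_fn(image * mask)
        sal += s * mask
        total += s
    return sal / max(total, 1e-8)
```

A usage sketch: with a toy score function that measures how visible a target region remains, pixels inside that region accumulate higher saliency than pixels elsewhere, since masks exposing the region receive higher weights.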
