Abstract
Bias detection in the computer vision field is a necessary task for achieving fair models. These biases are usually due to undesirable correlations present in the data and learned by the model. Although explainability can be a way to gain insights into model behavior, reviewing explanations is not straightforward. This work proposes a methodology to analyze model biases without using explainability. By doing so, we reduce the potential noise arising from explainability methods and minimize human noise during the analysis of explanations. The proposed methodology combines images of the original distribution with images of potential context biases and analyzes the effect produced on the model’s output. For this work, we first present and release three new datasets generated by diffusion models. Next, we use the proposed methodology to analyze the impact of context on the model’s predictions. Finally, we verify the reliability of the proposed methodology and the consistency of its results. We hope this tool will help practitioners detect and mitigate potential biases, allowing them to obtain more reliable models.
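To make the core idea concrete, the sketch below shows one plausible way such a probe could look; it is not the authors' implementation. It assumes a pretrained image classifier, a simple alpha blend between an original image and a context-bias image, and an L1 distance between the resulting class-probability vectors as the measure of output shift. The file paths, the blending strategy, and the context_shift helper are illustrative assumptions, not details taken from the paper.

```python
# Hedged sketch: probing how a "context" image shifts a classifier's output.
# The combination procedure and metric are assumptions for illustration only.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def load_tensor(path: str) -> torch.Tensor:
    # Load an image and convert it to a normalized batch tensor.
    return preprocess(Image.open(path).convert("RGB")).unsqueeze(0)

def context_shift(model: torch.nn.Module,
                  original_path: str,
                  context_path: str,
                  alpha: float = 0.5) -> float:
    """Blend an original image with a potential context-bias image and
    report how much the model's class probabilities change (L1 distance)."""
    model.eval()
    original = load_tensor(original_path)
    context = load_tensor(context_path)
    combined = (1 - alpha) * original + alpha * context  # simple alpha blend
    with torch.no_grad():
        p_original = F.softmax(model(original), dim=1)
        p_combined = F.softmax(model(combined), dim=1)
    return (p_original - p_combined).abs().sum().item()

if __name__ == "__main__":
    # Hypothetical example: a pretrained ResNet-18 and two local image files.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    shift = context_shift(model, "original.jpg", "context_bias.jpg")
    print(f"Output shift under context perturbation: {shift:.4f}")
```

A large shift for a particular context, aggregated over many original images, would suggest the model relies on that context rather than the object of interest; the specific combination and aggregation choices used in the paper may differ from this sketch.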