Abstract
One of the big challenges of many state-of-the-art AI techniques, such as deep learning, is that their results do not come with any explanations; since some of the resulting conclusions and recommendations are far from optimal, it is difficult to distinguish good advice from bad. It is therefore desirable to develop explainable AI. In this paper, we argue that fuzzy techniques are a natural route to this explainability, and we also analyze which fuzzy techniques are most appropriate for this purpose. Interestingly, it turns out that the answer depends on which problem we are solving: e.g., different "and"- and "or"-operations are preferable when we are controlling a single object than when we are controlling a group of objects.
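To illustrate what is meant by different "and"- and "or"-operations, below is a minimal Python sketch of two standard pairs from the fuzzy-logic literature: the min/max pair and the product/probabilistic-sum pair. These are textbook examples only; the paper's actual recommendations about which pair suits single-object versus group control are not reproduced here, so the choice shown is purely illustrative.

# Two standard fuzzy "and"-operations (t-norms) and their dual
# "or"-operations (t-conorms). Degrees of confidence a, b lie in [0, 1].
# These are the classical textbook operations, not necessarily the ones
# the paper recommends for any specific control setting.

def and_min(a: float, b: float) -> float:
    """Zadeh's "and": the minimum t-norm."""
    return min(a, b)

def or_max(a: float, b: float) -> float:
    """Zadeh's "or": the maximum t-conorm (dual of min)."""
    return max(a, b)

def and_product(a: float, b: float) -> float:
    """Algebraic-product t-norm."""
    return a * b

def or_prob_sum(a: float, b: float) -> float:
    """Probabilistic-sum t-conorm (dual of the product t-norm)."""
    return a + b - a * b

# The two pairs combine the same inputs into different degrees,
# which is why the choice of operations matters in practice.
a, b = 0.7, 0.8
print(and_min(a, b), and_product(a, b))   # 0.7  0.56
print(or_max(a, b), or_prob_sum(a, b))    # 0.8  0.94

As the printed values show, the product-based pair is strictly more "pessimistic" on "and" and more "optimistic" on "or" than the min/max pair, so which pair is used genuinely changes the resulting control recommendation.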