Abstract

Ethics in Artificial Intelligence (AI) is discussed everywhere, in governmental circles as well as scientific forums. It is often associated with two widely used concepts: diversity and eXplainable Artificial Intelligence (XAI). The latter was promoted by DARPA and is expected to provide methods that preserve the right of users to understand how AI systems work and why decisions are made. Computational Intelligence methods have much to contribute to the effort to ensure that AI systems are ethical. In July 2020, the European Union published an Assessment List for Trustworthy Artificial Intelligence (ALTAI) covering a wide range of ethical issues. The IEEE Computational Intelligence Society (CIS) is at the heart of all these aspects of ethics in AI. They represent fascinating challenges for researchers, and they can highlight the power of all Computational Intelligence methods: neural networks, learning methods, fuzzy systems, and evolutionary computation. Efforts are already underway through the IEEE CIS Task Force on Ethical and Social Implications of Computational Intelligence. Several other CIS Technical Committee task forces and special issues of CIS publications also focus on XAI; in particular, a special issue of the IEEE Computational Intelligence Magazine dedicated to Explainable and Trustworthy Artificial Intelligence will be published in 2022. The IEEE CIS Cognitive and Developmental Systems Technical Committee and the IEEE Transactions on Cognitive and Developmental Systems also address some of these concerns on a cognitive basis. Moreover, the IEEE CIS is the oversight committee for the IEEE Consortium On The Landscape of AI Safety (CLAIS) and, on a more specific level, it participates in the IEEE Brain Community. I am convinced that the IEEE CIS needs to pay even more attention to these crucial issues.
