Abstract
Artificial intelligence (AI) is once again a topic of huge interest for computer scientists around the world. Whilst advances in the capability of machines are being made at an incredible rate, there is also increasing focus on the need for computerised systems to be able to explain their decisions, at least to some degree. It is also clear that data and knowledge in the real world are characterised by uncertainty. Fuzzy systems can provide decision support that both handles uncertainty and offers explicit representations of uncertain knowledge and inference processes. However, it is not yet clear how any decision support system, including one featuring fuzzy methods, should be evaluated to determine whether its use is permitted. This paper presents a conceptual framework with indistinguishability as the key component of the evaluation of computerised decision support systems. Case studies are presented which clearly demonstrate that human expert performance is less than perfect, together with techniques that may enable fuzzy systems to emulate human-level performance, including its variability. In conclusion, this paper argues for the need for “fuzzy AI” in two senses: (i) the need for fuzzy methodologies (in the technical sense of Zadeh's fuzzy sets and systems) as knowledge-based systems that represent and reason with uncertainty; and (ii) the need for fuzziness (in the non-technical sense), with an acceptance of imperfect performance in evaluating AI systems.
Highlights
Following several peaks and troughs, Artificial Intelligence (AI) is once again at the forefront of Computer Science research around the world
Whilst sub-symbolic approaches such as deep learning are currently in vogue, this paper argues for the need, in specific contexts, for knowledge-based approaches to AI, with explicit representation of and reasoning with uncertainty
This paper makes the case both for fuzzy expert systems, as a useful component of the suite of tools necessary for explainable AI, and for the incorporation of variation within such systems
Summary
Following several peaks and troughs, Artificial Intelligence (AI) is once again at the forefront of Computer Science research around the world. Whilst IBM have never published full details of the algorithms employed, Deep Blue featured a high-speed parallel implementation of an alpha-beta search with a board evaluation algorithm [1]. In this regard, the term ‘AI’, or even ‘machine intelligence’, can be disputed as an accurate description of what is essentially a brute-force search algorithm containing no real intelligence; nevertheless, Deep Blue beat a human (arguably, the best in the world) at chess, a task that requires intelligence when performed by humans. Whilst sub-symbolic approaches such as deep learning are currently in vogue, this paper argues for the need, in specific contexts, for knowledge-based approaches to AI, with explicit representation of and reasoning with uncertainty. As part of this argument, the use of fuzzy techniques is advocated as one suitable approach that can deliver the necessary capabilities. Some possible future directions of research are outlined and the main conclusions are summarised.