Abstract

Artificial intelligence (AI) receives attention in the media as well as in academe and business. In media coverage and reporting, AI is predominantly described in contrasting terms, either as the ultimate solution to all human problems or as the ultimate threat to all human existence. In academe, computer scientists focus on developing systems that function, whereas philosophy scholars theorize about the implications of this functionality for human life. At the interface between technology and philosophy there is, however, one imperative aspect of AI yet to be articulated: how do intelligent systems make inferences? We use the overarching concept “Artificial Intelligent Behaviour”, which includes both cognition/processing and judgment/behaviour. We argue that, due to the complexity and opacity of artificial inference, one needs to initiate systematic empirical studies of artificial intelligent behaviour similar to what has previously been done to study human cognition, judgment and decision making. This will provide valid knowledge, beyond what current computer science methods can offer, about the judgments and decisions made by intelligent systems. Moreover, outside academe—in the public as well as the private sector—expertise in epistemology, critical thinking and reasoning is crucial to ensure human oversight of the artificial intelligent judgments and decisions that are made, because only competent human insight into AI inference processes will ensure accountability. Such insights require systematic studies of AI behaviour founded on the natural sciences and philosophy, as well as the employment of methodologies from the cognitive and behavioural sciences.
