Abstract

Tomorrow’s “professional workshop” is more likely than not to rely heavily on automated systems, whether these are conceived as mere decision-aids or actually replace professionals in some or most of their tasks. Can such systems be designed so as to improve the situational awareness on which a professional’s ability to meet her ethical responsibility depends? This paper’s endeavour to answer that question leads to two distinct theses.

First, this paper demarcates the professions from other types of expert service providers. In doing so, it makes a key claim: of those expert services whose safe delivery is in the public interest, healthcare, legal/financial services and education stand out because they give rise to a very particular type of vulnerability, one that potentially threatens the moral equality of those seeking those services (the particular case of those shaping the architecture conditioning our virtual interactions is also discussed). The first section of this paper delineates the ethical demands stemming from this vulnerability, which is contrasted with the kind of vulnerability that characterises all lay-expert relationships given their inherent knowledge asymmetry. This paper argues that the specific type of responsibility entailed by the former can and should ground an understanding of the “professions” that is (re)defined around it (and is hence restricted in scope).

This paper’s second thesis is that the success criterion for emerging uses of artificial intelligence in the professions should not just be whether they improve the affordability, quality and accountability of the professions’ services (as the Susskinds’ recent The Future of the Professions suggests). On those three counts, many computer systems are likely to succeed. Yet it will be nothing short of a catastrophe if such systems fail to assist the professionals they work alongside in meeting their particular ethical responsibility. Aside from reducing a professional’s cognitive load, computer systems can be designed to challenge routine perceptions and modes of thought, thereby improving situational (and ethical) awareness. Yet the instrumental rationality that overwhelmingly presides over computer scientists’ lightning-fast progress makes such design choices unlikely. This paper aims to foster much-needed public engagement with the design and shape of automated systems within the professions by introducing key technical concepts and outlining both the challenges and potential uses of such systems.
