Abstract

Trust is generally understood as a relationship in which an agent (the trustor) decides to depend on the foreseeable behaviour of another agent (the trustee) in order to have its expectations fulfilled. It is a fundamental aspect of social interactions, with economic, social, psychological and ethical implications, and as such it is a crucial topic in several research areas (Gambetta 1998; Taddeo 2009). In the past decade, research interest in trust-related concepts and phenomena has escalated following the advent of the information revolution (Floridi 2008). The use of Information and Communication Technologies and of Computer Mediated Communications (CMCs), and the development of artificial agents, such as SatNav systems, drones and robotic companions, have provided unprecedented opportunities for social interactions in informational environments, involving human as well as artificial and hybrid agents (Ess 2010). In this scenario, one of the most problematic issues is the emergence of e-trust, that is, trust specifically developed in digital contexts and/or involving artificial agents. Like trust, e-trust has a variety of heterogeneous implications, ranging from its effects on social interactions in digital environments to the behaviour of the agents involved, whether human or artificial (Taddeo 2009; Ess 2010). When e-trust is considered from a philosophical perspective, four problems seem to be most salient: (i) the identification of the fundamental and distinctive aspects of e-trust; (ii) the relation between trust and e-trust, that is, whether e-trust should be considered an occurrence of trust online or an independent phenomenon in itself; (iii) whether the environment of occurrence has any influence on the emergence of e-trust; and, finally, (iv) the extent to which artificial agents can be involved in an e-trust relationship. Problems (i)–(iv) are usually addressed together (Johnson 1997; Nissenbaum 2001; Weckert 2005). The literature focuses on whether the characteristics of the online environment and of online social interactions satisfy the minimal requirements for the emergence of trust. It is often argued that two conditions are necessary for this purpose: (a) the presence of a shared cultural and institutional background, and (b) certainty about the trustee's identity. The debate on the possibility of e-trust thus leads to two opposing views: some argue that conditions (a)–(b) cannot be met in the online environment (Pettit 1995; Seligman 2000; Nissenbaum 2001), and therefore that there cannot be e-trust; others argue that they can (Weckert 2005; Vries 2006; Papadopoulou 2007; Turilli et al. 2010), and hence that e-trust is possible. The analysis of (iv) depends on the role attributed to the feelings and the psychological status of the agents involved in a trust relation. Those who deem feelings and emotions necessary for the occurrence of trust deny the possibility that trust (including the online variety) may ever be present when one of the two peers is an artificial agent (Jones 1996). The opposite thesis is defended by those who consider (e-)trust a phenomenon occurring independently of the emotional and psychological status of the agents involved (Taddeo 2010).
