Abstract
Trust is an expected certainty required to transact confidently. However, how accurate is our decision-making in human-machine interaction? In this chapter, we present evidence from experimental conditions in which human interrogators relied on their judgement of what constitutes a satisfactory response and trusted that a hidden interlocutor was human when it was actually a machine. A simultaneous-comparison Turing test is presented, featuring conversation between a human judge and two hidden entities, conducted during Turing100 at Bletchley Park, UK. Results of post-test conversational analysis by the audience at Turing Education Day show that more than 30% made the same identification errors as the Turing test judge. Trust is found to be misplaced in subjective certainty, which could lead to susceptibility to deception in cyberspace.