Abstract

We study the effects of replacing an algorithmic (automated) or human advisor with a different advisor over the course of 20 forecasting trials. Participants first completed 10 trials with one type of advisor (human or automated). For the following 10 trials, participants were provided with a new advisor who was either the same type as or a different type from the previous one. Results show that automated advisors are trusted less after issuing bad advice when they have replaced a human advisor. Additionally, automated advisors that replaced humans were rated as issuing lower-quality advice, whereas human advisors that replaced automated advisors were rated as providing higher-quality advice. Results are discussed in the context of contrast effects, human-machine communication, and human-automation trust.
