Abstract

A goal of interactive machine learning (IML) is to create robots or intelligent agents that can be easily taught how to perform tasks by individuals with no specialized training. To achieve that goal, researchers and designers must understand how design decisions affect the human's experience of teaching the agent, for example by influencing the agent's perceived intelligence. We posit that the type of feedback a robot can learn from affects its perceived intelligence, much as its physical appearance does. This study investigated two methods of natural language instruction: critique and action advice. We conducted a human-in-the-loop experiment in which people trained two agents with different teaching methods but, unknown to each participant, the same underlying machine learning algorithm. The results show that an agent that learns from binary good/bad critique is perceived as less intelligent than an agent that can learn from action advice, even when the underlying machine learning algorithm is identical. In addition to the complexity of the input, the other design characteristics we found to influence the agent's perceived intelligence are compliance, responsiveness, effort, transparency, and robustness.
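
To make the distinction between the two teaching methods concrete, the sketch below shows one hypothetical way a single learner could expose both a critique interface and an action-advice interface over the same update rule. The abstract does not describe the paper's actual algorithm; the class `HumanTrainableAgent` and its methods `give_critique` and `give_action_advice` are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: the abstract does not specify the learning
# algorithm, so this hypothetical tabular learner shows how two feedback
# channels (binary critique vs. action advice) can drive the same update rule.
from collections import defaultdict
import random


class HumanTrainableAgent:
    """Toy agent whose action preferences are shaped by human feedback."""

    def __init__(self, actions, learning_rate=0.1):
        self.actions = actions
        self.lr = learning_rate
        # Preference value for each (state, action) pair, initialised to 0.
        self.values = defaultdict(float)

    def act(self, state):
        # Greedy choice over learned preferences, with random tie-breaking.
        best = max(self.values[(state, a)] for a in self.actions)
        candidates = [a for a in self.actions if self.values[(state, a)] == best]
        return random.choice(candidates)

    def _update(self, state, action, signal):
        # Single shared update rule used by both feedback interfaces.
        self.values[(state, action)] += self.lr * signal

    def give_critique(self, state, action, good):
        # Critique interface: binary good/bad maps to a +1 / -1 signal.
        self._update(state, action, 1.0 if good else -1.0)

    def give_action_advice(self, state, advised_action):
        # Advice interface: reinforce the suggested action, penalise the rest.
        for a in self.actions:
            self._update(state, a, 1.0 if a == advised_action else -1.0)


# Example usage: two teaching interfaces, one underlying learner.
agent = HumanTrainableAgent(actions=["left", "right", "forward"])
agent.give_critique(state="s0", action="left", good=False)
agent.give_action_advice(state="s0", advised_action="forward")
print(agent.act("s0"))  # "forward" after the advice above
```

Under this framing, the participant-facing difference is only the richness of the feedback channel (a yes/no judgment versus a suggested action), which is the manipulation the study uses to probe perceived intelligence.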
