Abstract

Autonomous behaviors created by the research and development community are not being extensively utilized within energy, defense, security, or industrial contexts. This paper provides evidence that the interaction methods used alongside these behaviors may not provide a mental model that operators can easily adopt or use. Although autonomy has the potential to reduce overall workload, the use of robot behaviors often increases the complexity of the underlying interaction metaphor. This paper reports our development of new metaphors that support increased robot complexity without passing the complexity of the interaction onto the operator. Furthermore, we illustrate how recognizing problems in human-robot interaction can drive the creation of new design metaphors, and how human factors lessons in usability, human performance, and our social contract with technology offer enormous payoff in establishing effective, user-friendly robot systems when appropriate metaphors are used.
