Abstract

Recent human-automation interaction research has conflated the concepts of automation and autonomy and has critiqued theories of automation in human systems in terms of aspects of autonomy. This situation has led to inappropriate expectations for design and misdirected criticism of design methods. The situation is not new and has origins in historical human factors research. I differentiate the concepts of automation and autonomy with a new framework of agents. The framework is complemented by observations on characteristics of automated versus autonomous systems, identification of error and failure modes, and formulation of a matrix of design constraints dictating possible applications of each type of agent. Levels of automation, which have also been criticized in the literature, are discussed, along with coverage of types of autonomy. A definition of autonomy is evolved throughout the paper into a form with utility for engineering. In general, demands that automated agents place on the human-task-environment system should be absent from the design of autonomous agents, and the design of automated systems is always automation-centric despite best efforts at human-centred approaches. Key requirements of design for autonomy include: agent viability in a target context, agent self-governance in goal formulation and fulfilment of roles, and independence in the performance of defined tasks.
