Abstract

In human supervisory control, machines control a process according to directives given by human operators, and the humans retain final authority over the machines or automation. Many human–machine systems in our society are modeled well by the human supervisory control framework; a glass-cockpit passenger aircraft is a typical example. However, even within transportation, the human–automation relation in a technologically advanced automobile may be quite different from that in a glass-cockpit aircraft. In an automobile, only the automation can help the driver by backing up or replacing him or her to assure safety: neither an air traffic controller-like function nor a copilot-like colleague is available there (Inagaki 2010). An advanced driver assistance system (ADAS) is a machine that assists a human in driving a car in a dynamic environment. Functions of an ADAS may include: (a) perception enhancement, which helps the driver perceive the traffic environment around his or her vehicle; (b) attention arousal, which encourages the driver to pay attention to potential risks around the vehicle; (c) warnings that urge the driver to take a specific action; and (d) automatic safety control, which is activated when the driver takes no action even after being warned, or when the driver's control action seems insufficient. The first two functions, (a) and (b), help the driver recognize or understand the situation. Understanding of the current situation determines what action needs to be taken (Hollnagel and Bye 2000). Once a situation-diagnostic decision has been made, action selection is usually straightforward (Klein 1993). However, the driver may not succeed in the action-selection decision; function (c) helps the driver in such circumstances.
Any ADAS that uses only the first three functions, (a)–(c), is completely compatible with the human-centered automation principle (Billings 1997), in which the human is assumed to have final authority over the automation. Suppose, instead, that an ADAS contains the fourth function, (d). Then the ADAS may not be fully compatible with human-centered automation, because the automation can implement a safety control action without any human intervention. Should we ban such an ADAS simply because it can implement an action that is not ordered by the driver? It is well known that highly automated machines sometimes bring negative effects, such as the out-of-the-loop performance problem, loss of situation awareness, complacency or overtrust, and automation surprises. However, humans have limited capabilities, and they might fail to understand the situation, select the right action, and implement it appropriately, especially when available time and information are quite limited. Today's machines can sense and analyze a situation, decide what must be done, and implement control actions. Can such a smart machine not help humans in a more positive manner, as a teammate? Systems in which humans and technology collaborate to achieve a common goal are called joint cognitive systems (Hollnagel and Woods 2005), in which human–automation coagency is central to realizing a sensible human–automation partnership. This special issue of Cognition, Technology and Work intends to discuss human–automation coagency for collaborative control of the automobile. The first topic is authority and responsibility. The first three papers argue the need for situation- and context-dependent sharing of authority between the human and the automation, without assuming

T. Inagaki
Department of Risk Engineering, University of Tsukuba, Tsukuba 305-8573, Japan
e-mail: inagaki@risk.tsukuba.ac.jp
