Abstract

As humanity grapples with the concept of autonomy for human–machine teams (A-HMTs), it remains unresolved how to control autonomy in a way that instills trust. For non-autonomous systems in states with a high degree of certainty, rational approaches exist to solve, model, or control stable interactions, e.g., game theory, scale-free network theory, and multi-agent systems, including drone swarms. For example, guided by artificial intelligence (AI, including machine learning, ML) or by human operators, swarms of drones have made spectacular gains in applications too numerous to list fully (e.g., crop management; mapping, surveillance, and fire-fighting; weapon systems). But under uncertainty or conflict, rational models fail, exactly where interdependence theory thrives. Large, coupled physical or information systems can likewise experience synergism or dysergism from interdependence. Synergistically, the best human teams are not only highly interdependent but also exploit interdependence to reduce uncertainty, the focus of this work-in-progress and roadmap. We have long argued that interdependence is fundamental to human autonomy in teams. For A-HMTs, however, neither rational theory nor social science provides a mathematics for their design or for their safe and effective operation, a severe weakness. Compared with rational and traditional social theories, we hope to advance interdependence theory, first, by mapping similarities between quantum theory and our prior findings; e.g., we previously established that boundaries reduce dysergic effects and so maintain interdependence, allowing teams to function (akin to blocking interference to prevent quantum decoherence). Second, we extend our prior findings with case studies to predict, using interdependence theory, that as uncertainty increases in non-factorable situations for humans, the duality in two-sided beliefs serves debaters who explore alternatives and their tradeoffs in the search for the best path forward. Third, applied to autonomous teams, we conclude that a machine in an A-HMT must be able to express itself to its human teammates in causal language, however imperfectly.
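The notion of a non-factorable state borrowed here from quantum theory can be made concrete with a standard two-party example; this is an illustrative sketch only, and the mapping of team members onto quantum subsystems is the authors' analogy rather than an established formalism. A joint state of two subsystems A and B is factorable (separable) when it can be written as a product of individual states,

\[ |\psi\rangle_{AB} = |a\rangle_A \otimes |b\rangle_B, \]

in which case each subsystem can be modeled independently. By contrast, a maximally entangled Bell state,

\[ |\Phi^+\rangle = \frac{1}{\sqrt{2}}\left(|0\rangle_A |0\rangle_B + |1\rangle_A |1\rangle_B\right), \]

admits no such factorization: no choice of \(|a\rangle_A\) and \(|b\rangle_B\) reproduces it, so neither subsystem possesses a well-defined state of its own. On the interdependence analogy, this is the sense in which the state of a highly interdependent team cannot be reduced to independent models of its individual members.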
