Abstract

Rational models of human behavior aim to predict, and possibly control, humans. There are two primary models: the cognitive model, which treats behavior as implicit, and the behavioral model, which treats beliefs as implicit. The cognitive model reigned supreme until reproducibility issues arose, including with Axelrod’s prediction that cooperation produces the best outcomes for societies. In contrast, by dismissing the value of beliefs, the behavioral model improved predictions of behavior dramatically, but only where beliefs were suppressed or unimportant, or in low-risk, highly certain environments, e.g., enforced cooperation. Rational models also lack supporting evidence for their mathematical predictions, impeding generalization to artificial intelligence (AI); they cannot scale to teams or systems; and, their fatal flaw, they fail in the presence of uncertainty or conflict. These shortcomings leave rational models ill-prepared for the technical revolution posed by autonomous human–machine teams (A-HMTs) and autonomous systems. For A-HMTs, we have developed the interdependence theory of complementarity, largely overlooked because of the bewilderment interdependence causes in the laboratory. Where the rational model fails in the face of uncertainty or conflict, interdependence theory thrives. The best human science teams are fully interdependent; intelligence has been located in the interdependent interactions of teammates; and interdependence is quantum-like. We have previously reported that, facing uncertainty, human debate exploits interdependent, bistable views of reality in tradeoffs that seek the best path forward. Explaining uncertain contexts, which no single agent can determine alone, requires that members of A-HMTs express their actions in causal terms, however imperfectly. Our purpose in this paper is to review our two newest discoveries, both of which generalize and scale: first, new theory that separates the entropy production arising from a team’s structure and from its performance; and second, the finding that the informatics of vulnerability generated during competition propels evolution, an effect invisible to the theories and practices of cooperation.
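
A minimal sketch of the bistability claimed above (our illustrative notation, assuming a quantum-like two-state model; not a formalism taken from this paper): the two incompatible views of an uncertain reality held by debating agents can be written as the superposition

$|\psi\rangle = \alpha\,|\mathrm{view}_A\rangle + \beta\,|\mathrm{view}_B\rangle, \qquad |\alpha|^{2} + |\beta|^{2} = 1,$

where only one view can be attended to at a time; debate and the tradeoffs it generates act like a measurement, settling a neutral audience on view A with probability $|\alpha|^{2}$ or on view B with probability $|\beta|^{2}$.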

Highlights

  • As part of the background, we revisit issues previously identified regarding teams, organizations, and social systems in preparation for the dramatic arrival of autonomous human–machine teams (A-HMTs) in the military, science, and society

  • As Feynman [5] warned about classical computers attempting to model quantum systems, artificial intelligence (AI) should not be applied ad hoc; to operate A-HMTs, we must first have a viable theory of interdependence and a sufficient mathematical model of an autonomous human–machine system that can account for predictions and observations under uncertainty and conflict, as one of the first steps toward achieving effectiveness and efficiency

  • Adding support for the formulation of our interdependence theory of complementarity, we have established interdependent effects fundamental to a theory of autonomy: optimum teams operate at maximum interdependence [26]; employee redundancy impedes teams and increases corruption [23]; team intelligence is critical to producing maximum entropy production (MEP) [32] (e.g., we have found that the search to develop patents in Middle East and North African countries, including Israel, depends on a nation's average level of education [23]); and interdependent effects shape the public debates offered to persuade a majority to support an action [21] (a minimal entropy sketch follows this list)
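
A minimal sketch of the entropy separation named in the abstract (an illustrative decomposition under our assumptions, not this paper's derivation): if $p_i$ is the probability that a team is found in structural configuration $i$ (i.e., which member fills which role), its structural entropy is the Shannon measure

$H_{\mathrm{structure}} = -\sum_i p_i \log p_i,$

and the entropy a team produces can be bookkept as

$S_{\mathrm{total}} = S_{\mathrm{structure}} + S_{\mathrm{performance}},$

so that a well-fitted, maximally interdependent team spends little of its entropy production on sorting out its own structure and can direct the remainder, as maximum entropy production (MEP), into its performance.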


Introduction

As part of the background, we revisit issues previously identified regarding teams, organizations, and social systems in preparation for the dramatic arrival of autonomous human–machine teams (A-HMTs) in the military (e.g., hypersonic missiles), science (e.g., transportation systems; medical systems), and society (for a review, see reference [1]). The 2018 US National Defense Strategy addressed the challenges posed by competition with the nuclear powers China and Russia, the rogue states Iran and North Korea, a weakening international order, and technological changes offering faster-than-human decisions ([6,7]; see the Rand report in [8]). To confront these challenges, among its options, the US Department of Defense (DoD) wants to develop artificial intelligence (AI), autonomy, and robots. Despite its flaws (e.g., reproducibility; in [17]), the rational cognitive model, based on individual consistency, reigns in social science, AI, and military research (e.g., a combat pilot’s Observe-Orient-Decide-Act, or OODA, loop; in [18]). It continues to promote cooperation; e.g., in their review of social interdependence theory, Hare and Woods [19] posit anti-Darwinian and anti-competition views in their fulsome support of cooperation, but they are unable to generalize, to scale, or to predict outcomes in the face of conflict or uncertainty.
