Abstract

This paper presents a novel framework for human-agent teaming grounded in the principles of Reinforcement Learning (RL). Recognizing the need for a unified language across disciplines, we use RL concepts as a common standard for understanding and evaluating diverse teaming strategies. Our framework extends beyond traditional RL constructs, integrating belief states, prior knowledge, social considerations, situational awareness, and mental models. A particular focus is placed on the role of ethics and trust in effective teaming. Additionally, we discuss how sensor data, perception models, and actuator modules can be incorporated, emphasizing the framework's adaptability to a broad range of tasks and environments. We believe this work makes a substantial contribution to the field of human-agent teaming and establishes a solid foundation for future research and application.
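To make the abstract's central idea concrete, the sketch below shows one way such an extended RL state could be represented. The abstract does not give the paper's actual formalism, so every name here (`TeamingState`, the `trust` field, the `update_trust` rule) is an illustrative assumption, not the authors' definition:

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class TeamingState:
    """Hypothetical state that augments a classic RL state with
    teaming-specific components mentioned in the abstract."""
    env_state: str                                          # observable environment state
    belief: Dict[str, float] = field(default_factory=dict)  # belief over hidden states
    trust: float = 0.5                                      # agent's trust in its partner, in [0, 1]

def update_trust(state: TeamingState, outcome_success: bool, lr: float = 0.1) -> float:
    """Toy trust update (an assumption, not the paper's rule):
    nudge trust toward 1 after a successful joint outcome,
    toward 0 after a failure, via an exponential moving average."""
    target = 1.0 if outcome_success else 0.0
    state.trust += lr * (target - state.trust)
    return state.trust

s = TeamingState(env_state="handover", belief={"human_ready": 0.8})
update_trust(s, outcome_success=True)   # trust rises from 0.5 toward 1.0
```

The point of the sketch is only that belief and trust can live alongside the usual RL state, so standard RL machinery can still operate over the combined object.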
