Abstract

Autonomous AI agents raise the issue of semantic interoperability between independently architected and differently embodied intelligences. This article offers an approach to the problem that is, in certain respects, close in spirit to the way humans work out meanings. Using a mathematical model of cognition, it shows how agents with autonomously developed conceptualizations can bootstrap and unravel each other's meanings ad hoc. The domain-general methodology rests on the agents' ability to perform Boolean operations and on the shared external environment; no prior provisions are required. The formalized cognitive process consists of constructing, and solving, Boolean equations that are grounded in the shared environment. The process yields a testable conjecture about the other agent's grounded conceptual representation, along with a testable conjectured translation that maps from that representation to one's own.
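The core idea of solving grounded Boolean equations can be illustrated with a minimal sketch. The following is an assumption-laden toy model, not the paper's formalism: the environment, the feature names, and the concept vocabularies (`crimson`, `disc`, `ball`) are all hypothetical, and the "equation solving" is reduced to an enumerative search for a Boolean combination of one agent's concepts whose extension over the shared objects matches another agent's concept.

```python
from itertools import product

# Hypothetical shared environment: objects described by ground-truth features.
objects = [
    {"red": True,  "round": True},
    {"red": True,  "round": False},
    {"red": False, "round": True},
    {"red": False, "round": False},
]

# Each agent's autonomously developed concepts are Boolean predicates
# over the shared objects (names are illustrative, not from the paper).
agent_a = {
    "crimson": lambda o: o["red"],
    "disc":    lambda o: o["round"],
}
agent_b = {
    "ball": lambda o: o["red"] and o["round"],  # B's private concept
}

def conjecture_translation(target, source_concepts, env):
    """Search Boolean combinations of the source agent's concepts whose
    extension over the shared environment equals the target concept's
    extension: an enumerative stand-in for solving the grounded equation."""
    names = list(source_concepts)
    ops = {
        "AND":     lambda p, q: p and q,
        "OR":      lambda p, q: p or q,
        "AND NOT": lambda p, q: p and not q,
    }
    goal = [target(o) for o in env]
    for n1, n2 in product(names, names):
        for op_name, op in ops.items():
            cand = [op(source_concepts[n1](o), source_concepts[n2](o))
                    for o in env]
            if cand == goal:
                return f"{n1} {op_name} {n2}"
    return None  # no translation consistent with the shared evidence

print(conjecture_translation(agent_b["ball"], agent_a, objects))
# crimson AND disc
```

The returned expression is only a conjecture: it is consistent with the shared objects observed so far and remains testable against further joint observations, in the spirit of the abstract's "testable conjectured translation".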
