Abstract

Today, AI is primarily narrow, meaning that each model or agent can perform only one task or a narrow range of tasks. However, systems with broad capabilities can be built by connecting multiple narrow AIs. Connecting various AI agents in an open, multi-organizational environment requires a new communication model. Here, we develop a multi-layered, ontology-based communication framework. Ontology concepts provide semantic definitions for the agents’ inputs and outputs, enabling the agents to dynamically identify communication requirements and build processing pipelines. Critically, the ontology concepts are stored on a decentralized storage medium that allows fast reading and writing. The multi-layered design offers flexibility by dividing a monolithic ontology model into semantic layers, allowing read and write latencies to be optimized. We investigate the impact of this optimization through benchmarking experiments on three decentralized storage media (IPFS, Tendermint Cosmos, and Hyperledger Fabric) across a wide range of configurations. The increased read-write speeds allow AI agents to communicate efficiently in a decentralized environment using ontology principles, making it easier to apply AI widely across applications.
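As a rough illustration of the pipeline-building idea summarized above (a minimal sketch, not the paper's implementation), the snippet below shows how agents annotated with ontology concept identifiers could be chained automatically by matching each agent's declared output concepts to another agent's input concepts. All names here (AgentSpec, build_pipeline, the "onto:" concept IDs) are hypothetical; in the framework described in this work, the concept definitions themselves would be resolved from a decentralized storage layer such as IPFS.

```python
from dataclasses import dataclass

# Hypothetical sketch: each agent declares its inputs and outputs as ontology
# concept identifiers. A registry can then chain agents into a processing
# pipeline by matching output concepts to input concepts. Concept IDs such as
# "onto:Text" are placeholders; in the paper's setting they would reference
# concept records stored on a decentralized medium (e.g. IPFS content hashes).

@dataclass
class AgentSpec:
    name: str
    inputs: set[str]    # ontology concepts this agent consumes
    outputs: set[str]   # ontology concepts this agent produces


def build_pipeline(agents: list[AgentSpec],
                   start: set[str],
                   goal: set[str]) -> list[AgentSpec]:
    """Greedy pipeline construction: repeatedly pick an agent whose input
    concepts are already available until the goal concepts are produced."""
    available, pipeline = set(start), []
    while not goal <= available:
        step = next((a for a in agents
                     if a not in pipeline and a.inputs <= available), None)
        if step is None:
            raise ValueError("no agent can extend the pipeline")
        pipeline.append(step)
        available |= step.outputs
    return pipeline


if __name__ == "__main__":
    registry = [
        AgentSpec("speech-to-text", {"onto:Audio"}, {"onto:Text"}),
        AgentSpec("translator", {"onto:Text"}, {"onto:TranslatedText"}),
    ]
    for step in build_pipeline(registry,
                               start={"onto:Audio"},
                               goal={"onto:TranslatedText"}):
        print(step.name)   # speech-to-text, then translator
```

The greedy matcher is only a stand-in; the point it illustrates is that machine-readable concept definitions let agents discover compatible peers and assemble processing pipelines without hand-written integration code.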

