Abstract
Contemporary systems situated in real-world open environments frequently have to cope with incomplete and inconsistent information, which typically increases the complexity of reasoning and decision processes. Realistic modeling of such informationally complex environments calls for nuanced tools. In particular, incomplete and inconsistent information should neither trivialize nor block reasoning and planning. This paper introduces ACTLOG, a rule-based four-valued language designed to specify actions in a paraconsistent and paracomplete manner. ACTLOG is an extension of 4QLBel, a language for reasoning with paraconsistent belief bases, where each belief base stores multiple world representations. In this context, an ACTLOG action may be seen as a belief base transformer. In contrast to other approaches, ACTLOG actions can be executed even when the underlying belief base contents are inconsistent and/or partial. ACTLOG provides nuanced action specification tools, allowing for a subtle interplay among various forms of nonmonotonic, paraconsistent, paracomplete and doxastic reasoning methods applicable in informationally complex environments. Despite its rich modeling possibilities, it remains tractable. ACTLOG supports composite actions built by sequential and parallel composition as well as conditional specifications. The framework is illustrated on a decontamination case study known from the literature.
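The view of actions as belief base transformers composed sequentially, in parallel, or conditionally can be sketched in a few lines. The sketch below is an illustrative assumption, not ACTLOG's actual syntax or semantics: belief bases are modeled as frozensets of (literal, truth value) pairs over the four truth values, and `seq`, `par`, `cond`, and `assert_fact` are hypothetical Python stand-ins for the language's combinators.

```python
# Illustrative sketch only: ACTLOG is a rule-based language; the names below
# (BeliefBase, seq, par, cond, assert_fact) are hypothetical stand-ins.
from typing import Callable, FrozenSet, Tuple

# A belief base modeled as a frozenset of (literal, truth_value) pairs,
# with the four truth values of a paraconsistent/paracomplete setting:
# "true", "false", "unknown", "inconsistent".
BeliefBase = FrozenSet[Tuple[str, str]]
Action = Callable[[BeliefBase], BeliefBase]

def seq(a1: Action, a2: Action) -> Action:
    """Sequential composition: apply a1, then a2 to its result."""
    return lambda bb: a2(a1(bb))

def par(a1: Action, a2: Action) -> Action:
    """Parallel composition, sketched here as the union of both results."""
    return lambda bb: a1(bb) | a2(bb)

def cond(test: Callable[[BeliefBase], bool], a1: Action, a2: Action) -> Action:
    """Conditional specification: choose a branch by inspecting the belief base."""
    return lambda bb: a1(bb) if test(bb) else a2(bb)

def assert_fact(literal: str) -> Action:
    """A toy action adding a fact; note it runs even over an inconsistent base."""
    return lambda bb: bb | {(literal, "true")}

# The action executes although the initial base contains an inconsistency.
bb0: BeliefBase = frozenset({("contaminated(area1)", "inconsistent")})
act = seq(assert_fact("checked(area1)"), assert_fact("safe(area2)"))
bb1 = act(bb0)
```

The key point the sketch illustrates is that inconsistency in `bb0` does not block execution: the transformer simply maps one (possibly inconsistent) belief base to another.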
Highlights
Supported by the Polish National Science Centre grant 2015/19/B/ST6/02589, the ELLIIT Network Organization for Information and Communication Technology, and the Swedish Foundation for Strategic Research FSR (SymbiKBot Project)
A belief base can consist of three 3i-worlds: the first containing facts based on measurements received from a ground robot's sensor platform, the second containing facts extracted from a drone's camera video stream, and the third representing views provided by ground operators.
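Such a multi-world belief base can yield a four-valued verdict for a query by aggregating what the individual worlds say. The combination rule below (evidence for and against a literal from different worlds yields "inconsistent") is an assumption made for demonstration, not ACTLOG's actual 4QLBel semantics; the three example worlds mirror the sensor, camera, and operator sources above.

```python
# Hypothetical illustration: worlds are sets of literals, where "-p" denotes
# the negation of literal "p".
def world_value(world, literal):
    """Four-valued truth of a literal in a single 3i-world."""
    pos = literal in world
    neg = ("-" + literal) in world
    if pos and neg:
        return "inconsistent"
    if pos:
        return "true"
    if neg:
        return "false"
    return "unknown"

def belief_value(worlds, literal):
    """Aggregate over all worlds: conflicting evidence yields 'inconsistent'."""
    vals = {world_value(w, literal) for w in worlds}
    has_t = "true" in vals or "inconsistent" in vals
    has_f = "false" in vals or "inconsistent" in vals
    if has_t and has_f:
        return "inconsistent"
    if has_t:
        return "true"
    if has_f:
        return "false"
    return "unknown"

sensors = {"obstacle(a)"}     # ground robot's sensor platform
camera = {"-obstacle(a)"}     # drone's camera video stream
operators = set()             # ground operators' view (silent on this literal)

verdict = belief_value([sensors, camera, operators], "obstacle(a)")
print(verdict)  # -> inconsistent
```

The sensor world supports the literal, the camera world supports its negation, and the operators' world is silent, so the aggregated verdict is "inconsistent" rather than an error, which is exactly the kind of situation a paraconsistent framework must tolerate.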
For any ACTLOG specification of an action act(x), #D denotes the sum of the sizes of all domains in the specification, #L denotes the sum of the lengths of the composite actions' specifications, and #M denotes the number of 4QLBel modules occurring in the specification.
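These three size measures are straightforward to compute once a specification is represented as data. The dictionary layout below is an assumption invented for this example; only the definitions of #D, #L, and #M follow the text.

```python
# Hypothetical specification layout; only the three measures follow the text.
spec = {
    "domains": {"areas": ["a1", "a2", "a3"], "robots": ["r1", "r2"]},
    "composite_actions": ["seq(check, clean)", "par(scan, report)"],
    "modules": ["sensors", "camera", "operators"],  # 4QLBel modules
}

num_D = sum(len(d) for d in spec["domains"].values())   # #D: sum of domain sizes
num_L = sum(len(a) for a in spec["composite_actions"])  # #L: total length of composite action specs
num_M = len(spec["modules"])                            # #M: number of 4QLBel modules

print(num_D, num_M)  # -> 5 3
```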
Summary
Reasoning about actions and change is an essential ingredient of AI systems. Throughout the years a variety of advanced solutions has been introduced, developed, verified and used in this field. When building applications dealing with pervasive information gaps and gluts, it is crucial to design knowledge completion and disambiguation in accordance with the recognized needs and requirements of the application in question. Along these lines, action specification languages call for nuanced but possibly simple and uniform tools supporting a rich repertoire of related techniques. Since the inception of knowledge representation and planning, beliefs have usually been modeled via various combinations of multi-modal logics [15, 20], nonmonotonic logics [40], probabilistic reasoning [54] or fuzzy reasoning [58], to mention just a few. Most of those approaches either lack tools for handling 3i or are too complex for real-world applications.