Articles published on Answer set programming
857 Search results
- Research Article
- 10.1016/j.neunet.2025.108022
- Jan 1, 2026
- Neural networks : the official journal of the International Neural Network Society
- Rong Wang + 1 more
DSPy-based neural-symbolic pipeline to enhance spatial reasoning in LLMs.
- Research Article
- 10.3389/frai.2025.1614894
- Dec 10, 2025
- Frontiers in Artificial Intelligence
- Zhongtao Xie + 3 more
Circumscription is an important logic framework for representing and reasoning about common-sense knowledge. With efficient implementations such as circ2dlp and aspino, it has been widely used in model-based diagnosis and other domains. We propose a notion of minimal reduct for propositional circumscription and prove a characterization theorem, namely that the models of a circumscription can be obtained from its minimal reduct. With the help of the minimal reduct, a new method, circ-reduct, for computing models of circumscription is presented. It iteratively computes smaller models under set inclusion (where possible), using the minimal reduct to simplify the circumscription in each iteration. The algorithm is proved correct. Extensive experiments are conducted on the ISCAS85 circuit-diagnosis benchmarks, random CNF instances, and industrial SAT instances from the international SAT Competition. The results demonstrate that the minimal reduct is effective for computing circumscription models. Compared to the widely used circumscription solver circ2dlp built on the state-of-the-art answer set programming solver clingo, our algorithm circ-reduct achieves significantly shorter CPU times. Compared with aspino, which uses glucose as its internal SAT solver together with unsatisfiable-core analysis, our algorithm achieves better CPU times on random and industrial CNF benchmarks and is comparable on circuit-diagnosis benchmarks.
- Research Article
- 10.1017/s1471068425100343
- Nov 3, 2025
- Theory and Practice of Logic Programming
- Manuel Alejandro Borroto Santana + 5 more
Large language models (LLMs) excel at understanding natural language but struggle with explicit commonsense reasoning. A recent line of research suggests that combining LLMs with robust symbolic reasoning systems can overcome this problem on story-based question answering (Q&A) tasks. In this setting, existing approaches typically depend on human expertise to manually craft the symbolic component. We argue, however, that this component can also be learned automatically from examples. In this work, we introduce LLM2LAS, a hybrid system that effectively combines the natural language understanding capabilities of LLMs, the rule induction power of the learning from answer sets (LAS) system ILASP, and the formal reasoning strengths of answer set programming (ASP). LLMs are used to extract semantic structures from text, which ILASP then transforms into interpretable logic rules. These rules allow an ASP solver to perform precise and consistent reasoning, enabling correct answers to previously unseen questions. Empirical results outline the strengths and weaknesses of our automatic approach for learning and reasoning in a story-based Q&A benchmark.
- Research Article
- 10.1186/s12859-025-06135-y
- Oct 7, 2025
- BMC Bioinformatics
- Laura Cifuentes-Fontanals + 2 more
Background: The study of control mechanisms of biological systems allows for interesting applications in bioengineering and medicine, for instance in cell reprogramming or drug target identification. A control strategy often consists of a set of interventions that, by fixing the values of some components, ensure that the long-term dynamics of the controlled system is in a desired state. A common approach to control in the Boolean framework consists in checking how the fixed values propagate through the network, to establish whether the effect of percolating the interventions is sufficient to induce the target state. Although methods based solely on value percolation allow for efficient computation, they can miss many control strategies. Exhaustive methods for control strategy identification, on the other hand, often entail high computational costs. In order to increase the number of control strategies identified while still benefiting from an efficient implementation, we introduce the use of trap spaces, subspaces of the state space that are closed with respect to the dynamics and that can usually be computed easily in biological networks. Results: This work presents a method based on value percolation that uses trap spaces to uncover new control strategies. It allows for node interventions, which fix the value of certain components, and edge interventions, which fix the effect that one component has on another. The method is implemented using Answer Set Programming, extending an existing efficient implementation of value percolation to allow for the use of trap spaces and edge control. The applicability of the approach is studied for different control targets in a biological case study, identifying new control strategies in all cases. Conclusion: The method presented here provides a new tool for control strategy identification in Boolean networks that allows for more diverse interventions and for efficiently finding new control strategies that would escape the usual percolation-based methods, widening the possibilities for potential applications.
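The percolation idea described above can be illustrated with a toy sketch (in Python rather than the ASP encoding the paper uses; the network, node names, and update functions below are invented for illustration):

```python
# Illustrative sketch of value percolation, not the authors' implementation:
# fix the value of some components and propagate through a toy Boolean
# network until no further value can be determined.
def percolate(funcs, fixed):
    """funcs: node -> function(state dict) -> bool; fixed: interventions."""
    known = dict(fixed)
    changed = True
    while changed:
        changed = False
        for node, f in funcs.items():
            if node in known:
                continue
            try:
                val = f(known)  # succeeds only if all needed inputs are known
            except KeyError:
                continue
            known[node] = val
            changed = True
    return known

# Toy network: b copies a; c is the AND of a and b.
funcs = {
    "a": lambda s: s["a"],
    "b": lambda s: s["a"],
    "c": lambda s: s["a"] and s["b"],
}
print(percolate(funcs, {"a": True}))  # {'a': True, 'b': True, 'c': True}
```

In this toy network, fixing `a` percolates to every node; the paper's method additionally exploits trap spaces and edge interventions to find strategies that pure percolation misses.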
- Research Article
- 10.1007/s10994-025-06876-0
- Sep 28, 2025
- Machine Learning
- Gioacchino Sterlicchio + 1 more
In this paper, we present MASS-CSP (Mining with Answer Set Solving - Contrast Sequential Patterns), a declarative approach to the Contrast Sequential Pattern Mining (CSPM) task, based on the logic-based framework of Answer Set Programming (ASP). The CSPM task focuses on identifying significant differences in frequent sequences relative to specific classes, leading to the concept of a contrast sequential pattern. The article describes how MASS-CSP addresses the CSPM task and related extensions: mining closed, maximal, and constrained patterns. The evaluation compares the basic version of MASS-CSP against the extended versions with respect to output size and time and memory requirements.
- Research Article
- 10.1017/s1471068425100306
- Sep 12, 2025
- Theory and Practice of Logic Programming
- Alice Tarzariol + 2 more
In the context of urban traffic control, traffic signal optimisation is the problem of determining the optimal green length for each signal in a set of traffic signals. The literature has effectively tackled this problem, mostly with automated planning techniques leveraging the PDDL+ language and solvers. However, that language has limitations when it comes to specifying optimisation statements and computing optimal plans. In this paper, we provide an alternative solution to the traffic signal optimisation problem based on Constraint Answer Set Programming (CASP). We devise an encoding in a CASP language, which is then solved by means of clingcon 3, a system extending the well-known ASP solver clingo. We performed experiments on real historical data from the town of Huddersfield in the UK, comparing our approach to the PDDL+ model that obtained the best results for the considered benchmark. The results show the potential of our approach for tackling the traffic signal optimisation problem and improving the solution quality of the PDDL+ plans.
- Research Article
- 10.1613/jair.1.18404
- Aug 7, 2025
- Journal of Artificial Intelligence Research
- Daphne Odekerken + 3 more
Reasoning under incomplete information is an important research direction in the study of computational argumentation. Most advances in this direction so far have focused on abstract argumentation frameworks. In particular, the development of computational approaches to reasoning under incomplete information in structured formalisms remains largely a challenge. We address this challenge by studying the problems of determining stability and relevance—with the aim of analyzing aspects of resilience of acceptance statuses in light of new information—in the central structured formalism ASPIC+. The specific ASPIC+ instantiation and grounded argumentation semantics we focus on are motivated by current applications in criminal investigation at the Netherlands Police. Our contributions consist of a theoretical analysis of the complexity of deciding stability and relevance as well as first exact algorithms for reasoning about stability and relevance in incomplete ASPIC+ theories. In terms of complexity results, we show that deciding stability is coNP-complete for incomplete ASPIC+ when assuming a preference ordering on defeasible rules via the last-link ordering, while deciding relevance is significantly more complex, namely NP^NP-complete. Complementing the complexity results, we develop practical algorithms for deciding stability and relevance based on the declarative paradigm of answer set programming (ASP). Furthermore, we provide an open-source implementation of the algorithms, and show empirically that the implementation exhibits promising scalability on both real-world and synthetic data. Our exact approach to stability is competitive with a previously proposed inexact approach, and the run times of our algorithms for both stability and relevance are sufficiently low on real-world data to be used in online settings.
- Research Article
- 10.46298/lmcs-21(3:16)2025
- Aug 7, 2025
- Logical Methods in Computer Science
- Mohammed M S El-Kholany + 2 more
The Job-shop Scheduling Problem (JSP) is a well-known and challenging combinatorial optimization problem in which tasks sharing a machine are to be arranged in a sequence such that encompassing jobs can be completed as early as possible. In this paper, we investigate problem decomposition into time windows whose operations can be successively scheduled and optimized by means of multi-shot Answer Set Programming (ASP) solving. From a computational perspective, decomposition aims to split highly complex scheduling tasks into better manageable subproblems with a balanced number of operations such that good-quality or even optimal partial solutions can be reliably found in a small fraction of runtime. We devise and investigate a variety of decomposition strategies in terms of the number and size of time windows as well as heuristics for choosing their operations. Moreover, we incorporate time window overlapping and compression techniques into the iterative scheduling process to counteract optimization limitations due to the restriction to window-wise partial schedules. Our experiments on different JSP benchmark sets show that successive optimization by multi-shot ASP solving leads to substantially better schedules within tight runtime limits than single-shot optimization on the full problem. In particular, we find that decomposing initial solutions obtained with proficient heuristic methods into time windows leads to improved solution quality.
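The window-decomposition idea above can be sketched briefly (an illustrative Python fragment, not the authors' multi-shot ASP implementation; the balanced contiguous split shown is only one of the strategies the paper investigates):

```python
# Sketch of problem decomposition into balanced time windows: split a
# sequence of operations, already ordered (e.g., by earliest start time),
# into k contiguous windows whose sizes differ by at most one.
def windows(ops, k):
    """Partition ops into k contiguous, balanced windows."""
    n = len(ops)
    base, extra = divmod(n, k)
    out, i = [], 0
    for w in range(k):
        size = base + (1 if w < extra else 0)  # first `extra` windows get one more
        out.append(ops[i:i + size])
        i += size
    return out

ops = list(range(10))        # ten operations, already ordered
print(windows(ops, 3))       # prints [[0, 1, 2, 3], [4, 5, 6], [7, 8, 9]]
```

Each window would then be scheduled and optimized in its own solver shot, with the paper's overlapping and compression techniques counteracting the restriction to window-wise partial schedules.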
- Research Article
- 10.32628/cseit25111672
- Aug 4, 2025
- International Journal of Scientific Research in Computer Science, Engineering and Information Technology
- Emmanuel Mgbeahuruike + 5 more
Task allocation is a well-known optimization problem that has been widely addressed using techniques such as Integer Programming (IP) and nature-inspired algorithms. However, many existing approaches lack flexibility and contextual awareness, especially in scenarios requiring fairness and hierarchical compliance. This research proposes an optimized Answer Set Programming (ASP) model for the Fair Hierarchical Task Allocation (FHTA) problem. The model incorporates knowledge-based reasoning to support context-specific constraints and generates stable models (answer sets) that satisfy both fairness and organizational hierarchy. A generate-and-test methodology is employed, wherein candidate solutions are produced and evaluated against a set of hard and soft constraints. A realistic problem scenario involving academic task assignment was formalized in ASP, using the Potassco toolkit (Clingo). The performance of the model was evaluated across varying problem sizes, specifically by changing the number of tasks and personnel involved. Metrics such as CPU time and total runtime were recorded. The results show that the ASP model performs efficiently for moderately sized instances and effectively achieves fair and hierarchical task allocation. This work demonstrates that ASP provides a scalable, explainable, and flexible framework for solving complex task allocation problems in hierarchical organizations.
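The generate-and-test methodology described above can be sketched outside ASP as well (a hypothetical Python toy, not the paper's Clingo model; the tasks, staff, ranks, and constraints below are invented for illustration):

```python
# Generate-and-test sketch: enumerate candidate task assignments, keep those
# satisfying hard (hierarchy) constraints, and rank by a fairness score.
from itertools import product

tasks = ["t1", "t2", "t3", "t4"]
staff = ["alice", "bob"]
rank = {"alice": 2, "bob": 1}   # hierarchy level (higher = more senior)
senior_only = {"t1"}            # hard constraint: t1 needs rank >= 2

def candidates():
    # "generate": every total assignment of tasks to staff
    for assign in product(staff, repeat=len(tasks)):
        yield dict(zip(tasks, assign))

def feasible(a):
    # "test": hard hierarchy constraint
    return all(rank[a[t]] >= 2 for t in senior_only)

def unfairness(a):
    # soft constraint: workload imbalance to be minimized
    loads = [list(a.values()).count(p) for p in staff]
    return max(loads) - min(loads)

best = min((a for a in candidates() if feasible(a)), key=unfairness)
```

An ASP encoding expresses the same structure declaratively: a choice rule generates candidate assignments, integrity constraints prune infeasible ones, and weak constraints implement the fairness objective.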
- Research Article
- 10.1613/jair.1.17422
- Jul 6, 2025
- Journal of Artificial Intelligence Research
- Reijo Jaakkola + 4 more
We conceptualize explainability in terms of logic and formula size, giving a number of related definitions of explainability in a very general setting. Our main interest is the so-called local explanation problem which aims to explain the truth value of an input formula in an input model. The explanation is a formula of minimal size that (1) obtains the same truth value as the input formula on the input model and (2) transmits that truth value to the input formula globally, i.e., on every model. As an important example case, we study propositional logic in this setting and show that the local explainability problem is complete for the second level of the polynomial hierarchy. The hardness result holds already for DNF-formulas. We also give parameterized versions of these problems leading to NP-completeness. The generality of our definitions allows us to lift complexity results also, e.g., to S5 modal logic and ensembles of decision trees. We also provide an implementation in answer set programming and investigate its capacity in relation to explaining answers to the n-queens and dominating set problems. Furthermore, we give an example of explaining the behavior of a black-box classifier.
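To concretize one of the benchmarks mentioned above, here is a minimal brute-force n-queens solver (a Python sketch for illustration only; the paper's implementation uses answer set programming, not this enumeration):

```python
# Minimal n-queens via backtracking: place one queen per row, rejecting
# column and diagonal clashes with queens already placed.
def n_queens(n):
    """Return all solutions as tuples where index = row, value = column."""
    solutions = []
    def place(cols):
        row = len(cols)
        if row == n:
            solutions.append(tuple(cols))
            return
        for c in range(n):
            if all(c != pc and abs(c - pc) != row - pr
                   for pr, pc in enumerate(cols)):
                cols.append(c)
                place(cols)
                cols.pop()
    place([])
    return solutions

print(len(n_queens(6)))   # 4 solutions on a 6x6 board
```

In the explainability setting of the paper, the object of interest is not the solver itself but a minimal formula explaining why a given answer (queen placement) does or does not satisfy the problem's constraints.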
- Research Article
- 10.3390/aerospace12070605
- Jul 3, 2025
- Aerospace
- Jeongseok Kim + 1 more
The goal of this paper is to optimize mission schedules for vertical airports (vertiports for short) to satisfy the different needs of stakeholders. We model the problem as a resource-constrained project scheduling problem (RCPSP) to obtain the best resource allocation and schedule. As a new approach to solving the RCPSP, we propose answer set programming (ASP), in contrast to existing research that uses MILP. Our approach can accommodate complex scheduling restrictions and stakeholder-specific requirements. In addition, we formalize and include stakeholder needs using a knowledge representation and reasoning framework. Our experiments show that the proposed method can generate practical schedules that reflect what stakeholders actually need. In particular, we show that our approach can compute optimal schedules more efficiently and flexibly than previous approaches. We believe this approach is well suited to the dynamic and complex environments of vertiports.
- Research Article
- 10.1007/s10994-025-06780-7
- May 20, 2025
- Machine Learning
- Damiano Azzolini
The goal of inductive logic programming is to learn a logic program that models the examples provided as input. The search space of possible programs is constrained by a language bias, which defines the atoms and literals allowed in rules. Answer set programming is a powerful formalism for representing complex combinatorial domains, thanks in part to syntactic constructs such as aggregates. However, learning answer set programs from data is challenging, and existing tools often do not support the specification of aggregates in the language bias. In this paper, we introduce GENTIANS, a tool based on a genetic algorithm that learns, from examples, answer set programs that may contain aggregates, arithmetic, and comparison operators. Empirical results, including a comparison against an existing solver, show that GENTIANS provides accurate solutions even when the search space contains millions of clauses. Additionally, experiments on noisy datasets show the effectiveness of our approach.
- Research Article
- 10.32473/flairs.38.1.138664
- May 14, 2025
- The International FLAIRS Conference Proceedings
- Marco Wilhelm + 2 more
Answer set programming (ASP) and conditional reasoning are powerful KR formalisms capable of expressing default statements that usually hold but allow for exceptions. While ASP excels with an intuitive rule-based syntax and fast solvers and is well suited to complex combinatorial search problems, conditionals provide a sophisticated preference-based semantics and yield principled inferences. In this paper, we investigate and compare different computational approaches to utilizing conditional background knowledge in order to prioritize the solutions of ASP programs. To this end, we compile the specification of the System Z ranking model of conditionals into ASP constraints and thereby integrate the guidelines for prioritization according to System Z directly into the ASP programs.
- Research Article
- 10.1017/s1471068425000067
- May 1, 2025
- Theory and Practice of Logic Programming
- Francesco Calimeri + 4 more
DLV2 is an AI tool for knowledge representation and reasoning that supports answer set programming (ASP) – a logic-based declarative formalism, successfully used in both academic and industrial applications. Given a logic program modeling a computational problem, an execution of DLV2 produces the so-called answer sets that correspond one-to-one to the solutions to the problem at hand. The computational process of DLV2 relies on the typical ground & solve approach, where the grounding step transforms the input program into a new, equivalent ground program, and the subsequent solving step applies propositional algorithms to search for the answer sets. Recently, emerging applications in contexts such as stream reasoning and event processing created a demand for multi-shot reasoning: here, the system is expected to be reactive while repeatedly executed over rapidly changing data. In this work, we present a new incremental reasoner obtained from the evolution of DLV2 toward iterated reasoning. Rather than restarting the computation from scratch, the system remains alive across repeated shots, and it incrementally handles the internal grounding process. At each shot, the system reuses previous computations for building and maintaining a large, more general ground program, from which a smaller yet equivalent portion is determined and used for computing answer sets. Notably, the incremental process is performed in a completely transparent fashion for the user. We describe the system, its usage, its applicability, and performance in some practically relevant domains.
- Research Article
- 10.1007/s10845-025-02605-5
- Apr 23, 2025
- Journal of Intelligent Manufacturing
- Harkiran Sahota + 1 more
In the manufacturing industry, generating assembly plans for products is a crucial but very time- and labour-intensive step, as most of the work has to be done manually. Especially for highly customisable products, this means a new plan is required for every custom order. To reduce this complexity, a new hybrid approach for generating these assembly plans is introduced, based on knowledge representation and reasoning. The focus of the approach is to be deployable in real-world production rather than only in research environments. To this end, a knowledge base is generated, specifying the preconditions and details of different operations linked to geometrical features. To preserve this valuable information, Answer Set Programming (ASP) is used to define rules. In this way, the generic knowledge of when to apply which operation is separated from the product-specific information about how a product looks, and can therefore be stored centrally and reused for any product, or even for different applications, without changes. A geometric analysis of the product model extracts the important geometrical features, which are then used to derive the necessary information from the corresponding rules. Throughout, human intervention remains crucial to cover even complex assemblies and to strengthen acceptance of the method. The functionality of the approach is demonstrated with simple reproducible examples: the assembly of the IKEA Hyllis shelf and of the PUBLIC Bikes Sprout bike.
- Research Article
- 10.1007/s10472-025-09981-x
- Apr 16, 2025
- Annals of Mathematics and Artificial Intelligence
- Katinka Becker + 1 more
Logical Analysis of Data (LAD) is a powerful technique for data classification based on partially defined Boolean functions. The decision rules for class prediction in LAD are formed out of patterns. According to different preferences in the classification problem, various pattern types have been defined. The generation of these patterns plays a key role in the LAD methodology and represents a computationally hard problem. In this article, we introduce a new approach to pattern generation in LAD based on Answer Set Programming (ASP), which can be applied to all common LAD pattern types.
- Research Article
- 10.1609/aaai.v39i14.33644
- Apr 11, 2025
- Proceedings of the AAAI Conference on Artificial Intelligence
- Yusuf Izmirlioglu
We study reasoning about the relative position, orientation, and distance of moving objects in 2D space. We first construct a new hybrid calculus, HOPA, by augmenting Oriented Point Relation Algebra (OPRA) with qualitative distance and quantitative constraints. We then develop a framework for consistency checking and reasoning with HOPA using Answer Set Programming. This framework can also explain the source of inconsistency, infer new knowledge, and generate a layout of objects and their orientations in discrete space. The framework is capable of reasoning with (un)certain, heterogeneous, and presumed information. We evaluate the efficiency and scalability of our method through computational experiments and illustrate its applications with sample scenarios from robotic perception and marine navigation.
- Research Article
- 10.1609/aaai.v39i14.33633
- Apr 11, 2025
- Proceedings of the AAAI Conference on Artificial Intelligence
- Jorge Fandinno + 1 more
This paper shows that the semantics of programs with aggregates implemented by the solvers clingo and dlv can be characterized as extended First-Order formulas with intensional functions in the logic of Here-and-There. Furthermore, this characterization can be used to study the strong equivalence of programs with aggregates under either semantics. We also present a transformation that reduces the task of checking strong equivalence to reasoning in classical First-Order logic, which serves as a foundation for automating this procedure.
- Research Article
- 10.1609/aaai.v39i14.33619
- Apr 11, 2025
- Proceedings of the AAAI Conference on Artificial Intelligence
- Sebastian Adam + 1 more
Reinforcement learning is a widely used approach for training an agent to maximize rewards in a given environment. Action policies learned with this technique see a broad range of applications in practical areas like games, healthcare, robotics, and autonomous driving. However, enforcing ethical behavior or norms based on deontic constraints that the agent should adhere to during policy execution remains a complex challenge. In particular, constraints that emerge after training can necessitate redoing policy learning, which can be costly and, more critically, time-intensive. To mitigate this problem, we present a framework for policy fixing in case of a norm violation, which allows the agent to stay operational. Based on answer set programming (ASP), emergency plans are generated that exclude or minimize the cost of norm violations by future actions within a horizon of interest. By combining and developing optimization techniques, efficient policy fixing under real-time constraints can be achieved.
- Research Article
- 10.1609/aaai.v39i27.35134
- Apr 11, 2025
- Proceedings of the AAAI Conference on Artificial Intelligence
- Daniele Meli + 2 more
Partially Observable Markov Decision Processes (POMDPs) are a powerful framework for planning under uncertainty. They allow modeling state uncertainty as a belief probability distribution. Approximate solvers based on Monte Carlo sampling have shown great success in relaxing the computational demand and performing online planning. However, scaling to complex realistic domains with many actions and long planning horizons is still a major challenge, and a key to good performance is guiding the action-selection process with domain-dependent policy heuristics tailored to the specific application domain. We propose to learn high-quality heuristics from POMDP execution traces generated by any solver. We convert the belief-action pairs to a logical semantics and exploit data- and time-efficient Inductive Logic Programming (ILP) to generate interpretable belief-based policy specifications, which are then used as online heuristics. We thoroughly evaluate our methodology on two notoriously challenging POMDP problems involving large action spaces and long planning horizons, namely rocksample and pocman. Considering different state-of-the-art online POMDP solvers, including POMCP, DESPOT, and AdaOPS, we show that learned heuristics expressed in Answer Set Programming (ASP) yield performance superior to neural networks and similar to optimal handcrafted task-specific heuristics, at lower computational cost. Moreover, they generalize well to more challenging scenarios not experienced in the training phase (e.g., increasing the number of rocks and the grid size in rocksample, and increasing the map size and ghost aggressiveness in pocman).