Abstract

Several non-monotonic formalisms exist in the field of Artificial Intelligence for reasoning under uncertainty. Many of these are deductive and knowledge-driven, and also employ procedural and semi-declarative techniques for inferential purposes. Nonetheless, limited work exists on the comparison of distinct techniques and, in particular, on the examination of their inferential capacity. Thus, this paper focuses on a comparison of three knowledge-driven approaches employed for non-monotonic reasoning, namely expert systems, fuzzy reasoning and defeasible argumentation. A knowledge-representation and reasoning problem has been selected: modelling and assessing mental workload. This is an ill-defined construct, and its formalisation can be seen as a reasoning activity under uncertainty. An experimental study was performed by exploiting three deductive knowledge bases produced with the aid of experts in the field. These were coded into models by employing the selected techniques and were subsequently elicited with data gathered from humans. The inferences produced by these models were in turn analysed according to common metrics of evaluation in the field of mental workload, specifically validity and sensitivity. Findings suggest that the variance of the inferences of expert systems and fuzzy reasoning models was higher, highlighting poor stability. In contrast, that of argument-based models was lower, showing superior stability of their inferences across knowledge bases and under different system configurations. The originality of this research lies in the quantification of the impact of defeasible argumentation. It contributes to the field of logic and non-monotonic reasoning by situating defeasible argumentation among similar approaches to non-monotonic reasoning under uncertainty through a novel empirical comparison.
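To make the evaluation criteria concrete, the sketch below shows one common way such metrics are operationalised in mental workload research: validity as a rank correlation between model inferences and a self-reported workload index, and sensitivity as the capacity to discriminate between task conditions. The data, variable names and specific statistical tests are illustrative assumptions, not the exact formulations used in the study.

```python
# Hypothetical sketch: convergent validity and sensitivity of workload inferences.
# Assumes validity = Spearman correlation with a self-reported index (e.g. a subjective
# workload rating) and sensitivity = ability to separate two task conditions
# (Mann-Whitney U test). Names and data are illustrative only, not taken from the paper.
import numpy as np
from scipy.stats import spearmanr, mannwhitneyu

model_inferences = np.array([31.2, 55.0, 47.8, 70.1, 62.3, 38.9])   # model outputs (0-100)
self_reports     = np.array([35.0, 60.0, 50.0, 75.0, 58.0, 40.0])   # subjective ratings
conditions       = np.array([0, 1, 0, 1, 1, 0])                     # 0 = easy task, 1 = hard task

# Validity: monotonic agreement between inferred and self-reported workload.
rho, p_val = spearmanr(model_inferences, self_reports)
print(f"validity (Spearman rho): {rho:.2f} (p={p_val:.3f})")

# Sensitivity: does the model discriminate the hard condition from the easy one?
u_stat, p_sens = mannwhitneyu(model_inferences[conditions == 1],
                              model_inferences[conditions == 0],
                              alternative="greater")
print(f"sensitivity (Mann-Whitney U): {u_stat:.1f} (p={p_sens:.3f})")
```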

Highlights

  • Uncertainty associated with incomplete, imprecise or unreliable knowledge is inevitable in daily reasoning and in many real-world contexts

  • The aim of this study is to empirically evaluate the inferential capacity of defeasible argumentation models when compared to models produced by other well-established reasoning approaches, in this case non-monotonic fuzzy reasoning and expert systems

  • A precise research question can be set: “To what extent does the inferential capacity of defeasible argumentation differ from non-monotonic fuzzy reasoning and expert systems in terms of validity and sensitivity when applied to the problem of mental workload modelling?”



Introduction

Uncertainty associated with incomplete, imprecise or unreliable knowledge is inevitable in daily reasoning and in many real-world contexts. The aim of this study is to empirically evaluate the inferential capacity of defeasible argumentation models when compared to models produced by other well-established reasoning approaches, in this case non-monotonic fuzzy reasoning and expert systems. This evaluation can clarify the predictive accuracy of the investigated reasoning models, allowing defeasible argumentation to be better situated among similar reasoning approaches and enabling different applications and experiments to be carried out. A precise research question can be set: “To what extent does the inferential capacity of defeasible argumentation differ from non-monotonic fuzzy reasoning and expert systems in terms of validity and sensitivity when applied to the problem of mental workload modelling?”.
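
As a brief illustration of how defeasible argumentation supports non-monotonic inference, the sketch below computes the grounded extension of a Dung-style abstract argumentation framework: a conclusion can be withdrawn when an attacking argument arrives, and reinstated when that attacker is itself defeated. The arguments and attack relation are hypothetical examples, not the knowledge bases elicited from experts in this study.

```python
# Minimal sketch of defeasible inference with a Dung-style abstract argumentation
# framework: arguments attack each other, and the grounded extension collects the
# arguments that ultimately survive. Arguments and attacks below are hypothetical.

def grounded_extension(arguments, attacks):
    """Iterate the characteristic function from the empty set until a fixed point."""
    def defended(arg, ext):
        # arg is defended by ext if every attacker of arg is itself attacked by ext
        attackers = {a for (a, b) in attacks if b == arg}
        return all(any((d, att) in attacks for d in ext) for att in attackers)

    extension = set()
    while True:
        new_extension = {a for a in arguments if defended(a, extension)}
        if new_extension == extension:
            return extension
        extension = new_extension

# Hypothetical workload arguments:
#   A: "high task demand, so workload is high"
#   B: "the operator is highly skilled, so workload is not high"   (attacks A)
#   C: "the skill report is unreliable under time pressure"        (attacks B)
args = {"A", "B", "C"}
atts = {("B", "A"), ("C", "B")}
print(grounded_extension(args, atts))   # {'A', 'C'}: A is reinstated because C defeats B
```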

Literature and related work
Design and methodology
Objective
Non-monotonic fuzzy reasoning
Layer 1 - Definition of the internal structure of arguments
Conclusion
Summary of models and comparative metrics
Results and discussion
Sensitivity
Internal configurations of models and interpretations
Conclusion and future work
Acknowledgements
