Abstract

In data analytics, system modeling, and decision-making, interpretability and explainability are of paramount relevance; one can refer here to explainable Artificial Intelligence (XAI). The increasing complexity of the systems one has to cope with and the distributed nature of data, with the ensuing concerns about the privacy and security of data and models, pose further challenges in system modeling. With the proliferation of mobile devices, distributed data, and security and privacy restrictions, federated learning becomes a viable development alternative. We advocate that two factors contribute immensely to the realization of the above requirements, namely, (i) a suitable level of abstraction, along with its hierarchical aspects, in describing the problem and (ii) a logic fabric of the resultant constructs. It is demonstrated that their conceptualization and subsequent realization can be conveniently carried out with the use of information granules (for example, fuzzy sets, sets, rough sets, and the like). Information granules are building blocks of an interpretable environment that captures the essence of data and reveals the key relationships existing there. Their emergence is supported by a systematic and focused analysis of data. At the same time, their initialization is specified by stakeholders and/or the owners and users of data. We present a comprehensive discussion of the design of information granules and their description, engaging an innovative mechanism of federated unsupervised learning in which information granules are constructed and refined with the use of collaborative clustering schemes.
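The federated, collaborative clustering mechanism mentioned above can be illustrated with a minimal sketch. In it, hypothetical clients run a k-means-style local step on their private data and ship only per-cluster sums and counts to a server that refines shared prototypes (candidate information granules); the function names and the plain averaging aggregation rule are illustrative assumptions, not the paper's exact protocol.

```python
import numpy as np

def local_update(data, prototypes):
    """One client: assign points to the nearest prototype and return
    per-cluster sums and counts; raw data never leaves the client."""
    d = np.linalg.norm(data[:, None, :] - prototypes[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    k = prototypes.shape[0]
    sums = np.zeros_like(prototypes)
    counts = np.zeros(k)
    for j in range(k):
        mask = labels == j
        sums[j] = data[mask].sum(axis=0)
        counts[j] = mask.sum()
    return sums, counts

def federated_kmeans(datasets, k, rounds=20, seed=0):
    """Server: aggregate the clients' sufficient statistics and refine
    the shared prototypes by averaging, over several rounds."""
    rng = np.random.default_rng(seed)
    proto = rng.standard_normal((k, datasets[0].shape[1]))
    for _ in range(rounds):
        sums = np.zeros_like(proto)
        counts = np.zeros(k)
        for data in datasets:
            s, c = local_update(data, proto)
            sums += s
            counts += c
        nonempty = counts > 0  # leave prototypes of empty clusters untouched
        proto[nonempty] = sums[nonempty] / counts[nonempty, None]
    return proto
```

The prototypes returned by the server can then be turned into membership functions describing the condition parts of the rules discussed below.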
For illustrative purposes, the study focuses on the timely issues of interpretability and federated learning in the context of functional rule-based models with rules of the form "if $x$ is $A$ then $y=f(x)$", with the condition parts described by information granules. The interpretability mechanisms are aimed at systematically elevating the interpretability of the conditions and conclusions of the rules. It is shown that the interpretability of the conditions is augmented by (i) decomposing a multivariable information granule into its one-dimensional components, (ii) delivering their symbolic characterization, and (iii) carrying out a process of linguistic approximation.
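A functional rule-based model of this form can be sketched compactly. The snippet below assumes Gaussian membership functions as the information granules $A_i$ and linear local models $y = a_i \cdot x + b_i$ as conclusions, with the model output computed as the membership-weighted average of the local conclusions; the Gaussian choice and all names are illustrative assumptions rather than the paper's specific construction.

```python
import numpy as np

def gaussian_membership(x, center, spread):
    """Degree of membership of x in a Gaussian information granule."""
    return np.exp(-np.sum((x - center) ** 2) / (2 * spread ** 2))

def ts_model(x, rules):
    """Evaluate rules 'if x is A_i then y = a_i . x + b_i';
    each rule is a tuple (center, spread, a, b).  The output is the
    membership-weighted average of the local conclusions."""
    weights = np.array([gaussian_membership(x, c, s) for (c, s, _, _) in rules])
    outputs = np.array([a @ x + b for (_, _, a, b) in rules])
    return weights @ outputs / weights.sum()
```

Near the center of a granule, the corresponding local model dominates the output, which is what makes the condition parts amenable to the decomposition and linguistic approximation steps listed above.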
