Abstract
We propose an explanatory mechanism for multilayered neural networks (NN). In spite of its effective learning capability as a universal function approximator, the multilayered NN suffers from unreadability, i.e., it is difficult for the user to interpret or understand the "knowledge" that the NN holds by inspecting the connection weights and thresholds obtained by backpropagation (BP). This unreadability comes from the distributed nature of the knowledge representation in the NN. In this paper, we propose a method that reorganizes the distributed knowledge in the NN to extract approximate classification rules. Our rule extraction method is based on the analysis of the function that the NN has learned, rather than on the direct interpretation of connection weights as correlation information. More specifically, our method divides the input space into "monotonic regions," where a monotonic region is a set of input patterns that belong to the same class with the same sensitivity pattern. Approximate classification rules are generated by projecting these monotonic regions.
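The following is a minimal sketch of the grouping idea described above, not the authors' implementation: for a hypothetical single-output sigmoid network, each input pattern is assigned a predicted class and a sensitivity pattern (the sign of the output's partial derivative with respect to each input), and patterns sharing the same (class, sign pattern) pair are collected as an approximation of a monotonic region. All function names and the two-layer architecture are assumptions for illustration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(x, W1, b1, W2, b2):
    # Hypothetical one-hidden-layer, single-output network trained by BP.
    h = sigmoid(W1 @ x + b1)
    return sigmoid(W2 @ h + b2)

def sensitivity_pattern(x, W1, b1, W2, b2):
    """Sign pattern of the output's partial derivatives w.r.t. each input."""
    h = sigmoid(W1 @ x + b1)
    y = sigmoid(W2 @ h + b2)
    dy_dh = (W2.T * (y * (1 - y))).ravel()   # d(output)/d(hidden), chain rule
    dh_dx = W1 * (h * (1 - h))[:, None]      # d(hidden)/d(input)
    grad = dh_dx.T @ dy_dh                   # d(output)/d(input)
    return tuple(np.sign(grad).astype(int))

def group_monotonic_regions(X, W1, b1, W2, b2, threshold=0.5):
    """Group samples by (predicted class, sensitivity sign pattern)."""
    regions = {}
    for x in X:
        cls = int(forward(x, W1, b1, W2, b2)[0] >= threshold)
        key = (cls, sensitivity_pattern(x, W1, b1, W2, b2))
        regions.setdefault(key, []).append(x)
    return regions
```

Each resulting group approximates one monotonic region; projecting such a group onto individual input variables (e.g., taking the range each variable covers within the group) would yield an approximate classification rule of the kind the abstract describes.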