Abstract

This paper describes a learning multiple-valued logic (MVL) network based on back propagation. The learning MVL network is derived directly from a canonical realization of MVL functions, so its functional completeness is guaranteed. We extend traditional back propagation to incorporate prior human knowledge about the MVL network, such as its architecture and the number of hidden units and layers. Prior knowledge drawn from the MVL canonical form can supply the initial parameters of the learning MVL network, guiding the back-propagation learning process to start from a point in the parameter space close to the optimum; back propagation then only needs to fine-tune the prior knowledge to achieve the desired output. This cooperative relation between prior knowledge and the back-propagation learning process is not always present in neural networks. Simulation results are given to confirm the effectiveness of the methods.
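The core idea of the abstract, seeding a differentiable network with parameters read off a canonical form and then fine-tuning them by gradient descent, can be illustrated with a small sketch. All names and numbers below are illustrative assumptions, not the paper's actual network: a 4-valued function of one variable is realized by a soft "window" literal (a product of two sigmoids) scaled by a constant, the canonical form suggests the initial window bounds and scale, and plain gradient descent plays the role of back propagation.

```python
# Hypothetical sketch of canonical-form initialization plus fine-tuning.
# The window/scale parameterization and the target function are assumptions
# for illustration; they are not the paper's network.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def window(x, a, b, k=8.0):
    # Soft MVL "literal": close to 1 for a <= x <= b, close to 0 elsewhere.
    return sigmoid(k * (x - a)) * sigmoid(k * (b - x))

# Target 4-valued function: f(x) = 3 for 1 <= x <= 2, else 0.
xs = np.array([0.0, 1.0, 2.0, 3.0])
target = np.array([0.0, 3.0, 3.0, 0.0])

# Prior knowledge from the canonical form suggests a window on [1, 2] with
# scale 3, so we start near (a, b, c) = (1, 2, 3) instead of at random.
a, b, c = 0.8, 2.2, 2.5   # deliberately slightly-off initial guess
lr, eps = 0.05, 1e-5
for _ in range(2000):
    pred = c * window(xs, a, b)
    err = pred - target
    # Finite-difference gradients keep the sketch short; real back
    # propagation would compute these analytically.
    ga = np.sum(err * c * (window(xs, a + eps, b) - window(xs, a, b)) / eps)
    gb = np.sum(err * c * (window(xs, a, b + eps) - window(xs, a, b)) / eps)
    gc = np.sum(err * window(xs, a, b))
    a, b, c = a - lr * ga, b - lr * gb, c - lr * gc

final = c * window(xs, a, b)
print(np.round(final, 2))
```

Because the starting point already encodes the canonical-form structure, the fine-tuning only has to correct small parameter errors rather than discover the function from scratch, which is the cooperative effect the abstract describes.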

