Abstract

The opacity of neural networks refers to the fact that, except in a few special cases, no clear logical relationship can be established between their internal configuration and their external behavior. Opacity has seriously hindered the practical use of neural networks in real-world control systems, where assurance of correct performance under all conditions is essential and where a rational, causal explanation of the system's behavior is at least highly desirable. The problem is aggravated by the desire to gain the benefits of neural networks' ability to adapt or learn online. This paper outlines the theoretical and practical bases of the neural network opacity problem and describes some current research directed toward overcoming it.
