Abstract

Feature analysis and selection are important topics in real-world deep learning (DL) applications. However, most existing methods are manual and offer little insight into training mechanisms, because DL is often viewed as a “black box”: the mechanisms producing the output are hidden from the user and difficult to understand. Some researchers have used visualization, sensitivity analysis, and adversarial attacks to increase transparency, and have demonstrated successful methods for understanding DL, especially convolutional neural networks. This paper builds on these methods and focuses on deep belief networks (DBNs), which have two training stages: unsupervised learning and supervised learning. First, a novel algorithm named Visual Input-neuron Importance (Vi-II), based on visualization and a feature importance criterion, is proposed to calculate changes in the importance of the input features. Second, a criterion named Visual Hidden-layer Importance (Vi-HI) is proposed to dynamically display the contribution of each hidden layer. Third, a novel framework combining the two techniques is put forward to determine the final structure (input and hidden layers) of the DBN across both the unsupervised and supervised training stages. An application to the analysis of a road safety performance function is then demonstrated. The proposed method provides an accurate description of the model’s inner workings, identifies significant features, and eliminates irrelevant ones. Finally, the revised dataset and optimized model structure are used for car-collision prediction; the results demonstrate that the revised model achieves much better performance than comparable methods.
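To give a rough sense of the sensitivity-analysis idea behind input-feature importance, the sketch below scores each input by how much shuffling it perturbs a trained network’s output. The abstract does not specify the Vi-II formula, so the toy network, data, and permutation-based scoring rule here are illustrative assumptions, not the authors’ method.

# Illustrative sketch only: generic permutation-based sensitivity analysis as a
# stand-in for input-feature importance. Network, data, and scoring rule are
# assumptions, not the paper's Vi-II algorithm.
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained DBN's forward pass: one sigmoid hidden layer.
W1 = rng.normal(size=(8, 16)); b1 = rng.normal(size=16)
W2 = rng.normal(size=(16, 1)); b2 = rng.normal(size=1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X):
    return sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)

def permutation_importance(X):
    """Score each input feature by how much shuffling it changes the output."""
    base = forward(X)
    scores = np.empty(X.shape[1])
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])             # break feature j's information
        scores[j] = np.mean(np.abs(forward(Xp) - base))  # mean output deviation
    return scores / scores.sum()                         # relative importance

X = rng.normal(size=(500, 8))
for j, s in enumerate(permutation_importance(X)):
    print(f"feature {j}: {s:.3f}")

Features with near-zero relative importance under such a criterion would be candidates for elimination, which mirrors the pruning role the abstract describes for Vi-II.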
