Abstract
Deep Neural Networks are known for impressive results in a wide range of applications and have driven many technological advances over the past few years. However, debugging and understanding a neural network model’s inner workings is a complex task, as many parameters and variables are involved in every decision. Multidimensional projection techniques have been successfully adopted to display neural network hidden layer outputs in an explainable manner, but comparing different outputs often means overlapping projections or observing them side by side, which makes it difficult for users to follow how data flows through the network. In this paper, we introduce a novel approach for comparing projections obtained from multiple stages in a neural network model and visualizing differences in data perception. Changes among projections are transformed into trajectories that, in turn, generate vector fields used to represent the general flow of information. This representation can then be used to create layouts that highlight new information about abstract structures identified by neural networks.
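As a rough illustration of this pipeline (a minimal sketch, not the authors' implementation), the Python snippet below projects two hidden-layer activation matrices into a shared 2-D space, treats each sample's displacement between the two projections as a trajectory, and averages the trajectories into a vector field on a regular grid. The function name, the use of PCA as the projection, and the grid resolution are all assumptions made for the example; the paper's own projection technique and field construction may differ.

```python
import numpy as np
from sklearn.decomposition import PCA

def projection_vector_field(acts_a, acts_b, grid_size=20):
    """Sketch: turn per-sample displacements between two hidden-layer
    projections into an averaged vector field on a regular grid."""
    # Fit one 2-D projection on both layers so their coordinates are comparable.
    pca = PCA(n_components=2).fit(np.vstack([acts_a, acts_b]))
    p_a, p_b = pca.transform(acts_a), pca.transform(acts_b)

    # Each sample contributes a trajectory segment from its position in the
    # first layer's projection to its position in the second layer's projection.
    vectors = p_b - p_a

    # Rasterize: average the trajectory vectors that start inside each grid cell.
    mins, maxs = p_a.min(axis=0), p_a.max(axis=0)
    cells = np.floor((p_a - mins) / (maxs - mins + 1e-9) * grid_size).astype(int)
    field = np.zeros((grid_size, grid_size, 2))
    counts = np.zeros((grid_size, grid_size, 1))
    for (cx, cy), v in zip(cells, vectors):
        field[cx, cy] += v
        counts[cx, cy] += 1
    return field / np.maximum(counts, 1)
```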
Highlights
Given their ability to abstract high-level patterns and model data beyond most heuristics [1], Deep Neural Networks (DNNs) are currently among the state-of-the-art techniques for the analysis of large-scale, complex datasets.
One typical application of DNNs is data classification, which consists of inferring some model f : X → Y to correctly label unknown data based on a set of known labelled examples (xᵢ, yᵢ) ∈ X × Y (a toy sketch of this setup follows these highlights).
We presented a new approach for projection-based Artificial Neural Network (ANN) hidden layer visualization that uses trajectories and vector fields to provide insights into how knowledge is generated in a DNN.
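The following toy sketch illustrates the classification scenario referenced in the highlights: a small classifier f : X → Y fitted on labelled pairs (xᵢ, yᵢ) whose forward pass also exposes the hidden-layer outputs that the projections would consume. The architecture, layer sizes, and synthetic data are assumptions made for the example, not the models used in the paper.

```python
import torch
import torch.nn as nn

class SmallClassifier(nn.Module):
    """Toy classifier f : X -> Y whose forward pass also returns the
    hidden-layer outputs that projection-based visualization would consume."""
    def __init__(self, n_features, n_classes, hidden=64):
        super().__init__()
        self.layers = nn.ModuleList([
            nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU()),
            nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU()),
        ])
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):
        hidden_states = []
        for layer in self.layers:
            x = layer(x)
            hidden_states.append(x.detach())  # one snapshot per hidden layer
        return self.head(x), hidden_states

# Usage: fit on labelled pairs (x_i, y_i) and collect per-layer activations.
model = SmallClassifier(n_features=20, n_classes=3)
x = torch.randn(128, 20)                 # placeholder inputs
y = torch.randint(0, 3, (128,))          # placeholder labels
logits, states = model(x)
loss = nn.functional.cross_entropy(logits, y)
loss.backward()
```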
Summary
Given their ability to abstract high-level patterns and model data beyond most heuristics [1], Deep Neural Networks (DNNs) are currently among the state-of-the-art techniques for the analysis of large-scale, complex datasets. The currently available techniques are somewhat limited when exploring sequential processes inside neural networks, such as the state of hidden layers during training or the shaping of high-level representations as data flows through different layers of a network. This information can provide insights into important issues, such as determining each layer's contribution to discriminating certain objects or tracking noteworthy behavior during training, to better understand unusual data.
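One generic way to inspect the state of a hidden layer during training, sketched below under assumed names (the paper does not prescribe this mechanism), is to register forward hooks, snapshot the layer's outputs at successive epochs, and feed consecutive snapshots to a projection-and-vector-field step like the one sketched earlier.

```python
import torch
import torch.nn as nn

def capture_layer_outputs(model, layer_names, inputs):
    """Record the outputs of the named sub-modules for one batch, e.g. after
    each training epoch, so successive snapshots can be projected and compared.
    A generic forward-hook sketch; the model and layer names are placeholders."""
    captured, handles = {}, []
    for name, module in model.named_modules():
        if name in layer_names:
            handles.append(module.register_forward_hook(
                lambda _m, _inp, out, name=name: captured.__setitem__(
                    name, out.detach().cpu())))
    with torch.no_grad():
        model(inputs)
    for h in handles:
        h.remove()
    return captured  # {layer_name: activation tensor for this snapshot}

# Usage sketch: snapshot a hidden layer at two points in training, then feed
# both activation matrices to the projection/vector-field step shown earlier.
net = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 3))
batch = torch.randn(128, 20)
snapshot = capture_layer_outputs(net, {"1"}, batch)  # output of the ReLU layer
```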