Over the past years, numerous studies have demonstrated the vulnerability of deep neural networks (DNNs) to small input noise, which can cause them to make incorrect classifications. This has motivated the formal analysis of DNNs to ensure that they exhibit acceptable behavior. However, when a DNN's behavior is unacceptable for the desired application, these qualitative approaches are ill-equipped to determine the precise degree to which the DNN behaves unacceptably. Towards this, we propose a novel quantitative DNN analysis framework, QuanDA, which not only checks whether the DNN exhibits certain behavior, but also provides an estimate of the probability with which the DNN exhibits this particular behavior. Unlike the few available quantitative DNN analysis frameworks, QuanDA makes no implicit assumptions about the probability distributions of the hidden nodes, which enables the framework to propagate close-to-real probability distributions of the hidden node values to each succeeding DNN layer. Furthermore, our framework leverages CUDA to parallelize the analysis, enabling a high-speed GPU implementation for fast analysis. The applicability of the framework is demonstrated on the ACAS Xu benchmark, for which it provides reachability probability estimates for all network nodes. Moreover, this paper also discusses potential applications of QuanDA for the analysis of DNN safety properties.