Adversarial learning attacks are a significant threat to the recent, wide deployment of machine learning-based systems, including deep neural networks (DNNs). The focus here is on evasion attacks against DNN-based classifiers at test time. While much work has been devoted to devising attacks that make small perturbations to a test pattern (e.g., an image) that induce a change in the classifier's decision, until recently there has been a relative paucity of work on defending against such attacks. Some works robustify the classifier so that it makes correct decisions on perturbed patterns. This is an important objective for some applications and for natural adversary scenarios. However, we analyze the possible digital evasion attack mechanisms and show that, in some important cases, correctly classifying an attacked pattern (image) has no utility: namely, when the image to be attacked is (even arbitrarily) selected from the attacker's cache and when the sole recipient of the classifier's decision is the attacker. Moreover, in some application domains and scenarios, it is highly actionable to detect the attack, irrespective of whether the attacked pattern is correctly classified (with classification still performed if no attack is detected). We hypothesize that adversarial perturbations are machine-detectable even if they are small. We propose a purely unsupervised anomaly detector (AD) that, unlike previous works, (1) models the joint density of a deep layer using highly suitable null-hypothesis density models (matched in particular to the nonnegative support of rectified linear unit (ReLU) layers); (2) exploits multiple DNN layers; and (3) leverages a source and destination class concept, source-class uncertainty, the class confusion matrix, and DNN weight information in constructing a novel decision statistic grounded in the Kullback-Leibler divergence. Tested on the MNIST and CIFAR image databases under three prominent attack strategies, our approach outperforms previous detection methods, achieving strong receiver operating characteristic area-under-the-curve (ROC AUC) detection accuracy on two attacks and better accuracy than recently reported for a variety of methods on the strongest attack, the Carlini-Wagner (CW) attack. We also evaluate a fully white-box attack on our system and demonstrate that our method can be leveraged to strong effect in detecting reverse-engineering attacks. Finally, we evaluate other important performance measures, such as classification accuracy versus true detection rate, and multiple measures versus attack strength.
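To make the detection idea concrete, the following is a minimal, illustrative sketch (not the paper's implementation) of a density-based anomaly detector on deep-layer activations with a Kullback-Leibler-style decision statistic. It assumes precomputed penultimate-layer (ReLU) features and substitutes diagonal Gaussian mixtures for the nonnegative-support null densities described in the abstract; all function names and parameters here are hypothetical.

```python
# Illustrative sketch: per-class null densities on deep-layer (ReLU) activations,
# plus a KL-divergence-style detection statistic. Gaussian mixtures are used here
# purely for simplicity; the paper's detector uses density models matched to the
# nonnegative support of ReLU layers and additional information (multiple layers,
# class confusion matrix, DNN weights).
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_null_densities(features, labels, n_classes, n_components=2):
    """Fit one mixture density per class to clean (unattacked) deep-layer features."""
    return [GaussianMixture(n_components=n_components, covariance_type="diag",
                            reg_covar=1e-4).fit(features[labels == c])
            for c in range(n_classes)]

def detection_statistic(x_feat, dnn_posterior, densities):
    """KL-style statistic between the DNN's class posterior and a posterior
    induced by the null density models; large values suggest an attack."""
    log_liks = np.array([d.score_samples(x_feat[None, :])[0] for d in densities])
    # Posterior induced by the density models (uniform class prior assumed).
    density_posterior = np.exp(log_liks - np.logaddexp.reduce(log_liks))
    eps = 1e-12
    # KL( DNN posterior || density-model posterior )
    return np.sum(dnn_posterior * (np.log(dnn_posterior + eps)
                                   - np.log(density_posterior + eps)))
```

In use, the statistic would be thresholded: a threshold is set on clean validation data (e.g., to meet a target false-positive rate), test samples exceeding it are flagged as attacked, and the DNN's classification is returned only when no attack is detected.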