Deep neural networks have demonstrated their effectiveness for most machine learning tasks, including Intrusion Detection. Unfortunately, recent research has found that deep neural networks are vulnerable to adversarial examples in the image classification domain: an attacker can fool a network into misclassification by introducing imperceptible changes to the pixels of an image. This vulnerability raises concerns about applying deep neural networks in security-critical areas such as Intrusion Detection. In this paper, we investigate the performance of state-of-the-art attack algorithms against deep-learning-based Intrusion Detection on the NSL-KDD dataset. Based on an implementation of deep neural networks in TensorFlow, we examine the vulnerabilities of neural networks under attacks on the IDS. To gain insights into the nature of Intrusion Detection and its attacks, we also explore the roles of individual features in generating adversarial examples.
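To illustrate the kind of attack studied here, the following is a minimal NumPy sketch of the Fast Gradient Sign Method (FGSM), one of the standard adversarial-example algorithms, applied to a toy logistic-regression classifier over 41 features (the dimensionality of an NSL-KDD record). The weights, data, and epsilon value are all hypothetical placeholders, not the paper's actual models or results.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for an NSL-KDD-style record: 41 features scaled to [0, 1].
n_features = 41
w = rng.normal(size=n_features)  # hypothetical "trained" weights
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, epsilon=0.05):
    """FGSM: step each feature by epsilon in the direction that increases
    the loss. For logistic regression with cross-entropy loss, the gradient
    of the loss w.r.t. the input x is (p - y) * w."""
    p = sigmoid(x @ w + b)
    grad = (p - y) * w
    x_adv = x + epsilon * np.sign(grad)
    # Clip so the adversarial record stays in the valid feature range.
    return np.clip(x_adv, 0.0, 1.0)

x = rng.uniform(size=n_features)  # a fake "normal traffic" record
x_adv = fgsm(x, y=0)
```

The perturbation is bounded by epsilon in the L-infinity norm, which is the formal analogue of "imperceptible changes" in the image domain; for network traffic, which features may legitimately be perturbed is exactly the question the feature-level analysis addresses.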