Deep neural networks (DNNs) have demonstrated strong performance on complex real-world tasks, in some cases even surpassing human performance. Despite these results, DNNs are not robust against adversarial attacks. In particular, they are vulnerable to data poisoning attacks, in which an attacker tampers with the initial training data. Defensive distillation has shown promising results in hardening image classification deep learning (DL) models against adversarial attacks at inference time, but distilled models remain vulnerable to poisoning of their training data. This work combines a data denoising and reconstruction framework with a defensive distillation methodology to defend against such attacks. We leverage a denoising autoencoder (DAE) to build a data reconstruction and filtering pipeline governed by a carefully chosen reconstruction-error threshold. To assess the proposed method's performance, we injected carefully crafted adversarial examples into the initial training data. Our experimental findings demonstrate that the proposed methodology significantly reduces the vulnerability of the defensive distillation framework to data poisoning attacks.
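
As a concrete illustration of the filtering stage described above, the sketch below shows one way a DAE-based reconstruction-and-filtering pipeline could be implemented. The PyTorch model architecture, the `filter_poisoned` helper, the 784-dimensional flattened inputs, and the percentile-based threshold calibration on a trusted clean subset are all illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of DAE-based poisoning filtering (hypothetical names and
# hyperparameters; assumes flattened 784-dim inputs in [0, 1]).
import torch
import torch.nn as nn


class DenoisingAutoencoder(nn.Module):
    """Small MLP autoencoder trained to reconstruct clean inputs from noisy ones."""

    def __init__(self, in_dim=784, hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(hidden, in_dim), nn.Sigmoid())

    def forward(self, x):
        return self.decoder(self.encoder(x))


def reconstruction_errors(dae, x):
    """Per-sample mean squared reconstruction error."""
    with torch.no_grad():
        recon = dae(x)
    return ((recon - x) ** 2).mean(dim=1)


def filter_poisoned(dae, x_train, y_train, x_clean_val, percentile=95.0):
    """Drop samples whose reconstruction error exceeds a threshold calibrated
    on trusted clean data (an assumed calibration rule, not from the paper)."""
    threshold = torch.quantile(
        reconstruction_errors(dae, x_clean_val), percentile / 100.0
    )
    errors = reconstruction_errors(dae, x_train)
    keep = errors <= threshold
    # Replace retained inputs with their DAE reconstructions (denoising step),
    # then hand the cleaned set to the defensive distillation training stage.
    with torch.no_grad():
        x_denoised = dae(x_train[keep])
    return x_denoised, y_train[keep]
```

Under these assumptions, calibrating the threshold on a trusted clean validation set bounds the expected false-rejection rate on clean data at roughly the chosen percentile, while poisoned samples, which the DAE was never trained to reconstruct, tend to incur larger errors and be filtered out.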