Detecting drones is increasingly challenging, particularly when developing passive, low-cost defense systems capable of countering malicious attacks in environments with high levels of darkness and severe weather. This research addresses the problem of drone detection under varying darkness levels through an extensive study using deep learning models. Specifically, it evaluates the performance of three advanced models: YOLOv8, Vision Transformers (ViT), and Long Short-Term Memory (LSTM) networks. The primary focus is on how these models perform under synthetic darkness conditions, ranging from 20% to 80%, using a composite dataset (CONNECT-M) that simulates nighttime scenarios. The methodology applies transfer learning to enhance the base models, creating YOLOv8-T, ViT-T, and LSTM-T variants, which are then tested across multiple datasets with varying darkness levels. The results reveal that all models experience a decline in performance as darkness increases, as measured by Precision-Recall and ROC curves. However, the transfer learning-enhanced models consistently outperform their original counterparts. Notably, YOLOv8-T demonstrates the most robust performance, maintaining higher accuracy across all darkness levels. Despite the general decline in performance with increasing darkness, each model achieves an accuracy above 0.6 on data subjected to 60% or greater darkness. The findings highlight the challenges of drone detection under low-light conditions and underscore the effectiveness of transfer learning in improving model resilience. The research suggests further exploration of multi-modal systems that combine audio and optical methods to enhance detection capabilities in diverse environmental settings.
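
The abstract does not specify how the synthetic darkness levels were generated. One plausible sketch, assuming darkness is simulated by uniformly scaling pixel intensities toward zero (the function name `apply_synthetic_darkness` and the brightness-scaling approach are illustrative assumptions, not the authors' documented procedure):

```python
import numpy as np

def apply_synthetic_darkness(image: np.ndarray, level: float) -> np.ndarray:
    """Simulate a darkness level by scaling pixel intensities toward zero.

    level=0.0 leaves the image unchanged; level=1.0 yields a fully black frame.
    This is a hypothetical stand-in for the paper's darkening procedure.
    """
    if not 0.0 <= level <= 1.0:
        raise ValueError("level must be in [0, 1]")
    darkened = image.astype(np.float32) * (1.0 - level)
    return np.clip(darkened, 0, 255).astype(np.uint8)

# Produce the four darkness variants studied (20% to 80%) from one frame.
frame = np.full((4, 4, 3), 200, dtype=np.uint8)  # stand-in for a dataset image
variants = {d: apply_synthetic_darkness(frame, d) for d in (0.2, 0.4, 0.6, 0.8)}
```

Under this assumption, each source image yields four progressively darker copies, which could then be pooled into the darkness-stratified evaluation subsets described above.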