In recent years, the growth of large-scale datasets has significantly propelled the progress of deep learning applications. Yet annotating these datasets remains a labor-intensive endeavor, driving reliance on cost-effective but less specialized data collection methods and internet data sources. This often results in noisy and inaccurate labels, compromising data quality. Traditional machine learning models assume clean data, but real-world datasets often exhibit significant label noise. This paper examines the impact of such noise on object detection, a pivotal task in computer vision. We analyze the influence of noisy labels using three widely used object detection frameworks, YOLOv5, Faster R-CNN, and the more recent YOLOv8, alongside established datasets: MS COCO, VOC, and ExDARK. Additionally, experiments with the UVM dataset explore domain-specific tasks in dense-object scenarios. Two new metrics, Model Health and Detection Capability, are introduced to evaluate the results. Findings indicate that models maintain over 80% of their health (i.e., less than a 20% decline in mAP from the clean-label baseline) with up to 40% label corruption. However, Detection Capability deteriorates more sharply under the same conditions. The study also employs the D-RISE method for model explainability, highlighting the image regions most influential for detection outcomes. Despite the noise, the critical detection regions remain similar to those of models trained on clean data up to the 40% corruption level, as verified by similarity metrics. This study underscores the resilience of object detection models to label noise and provides insights into maintaining performance amid data quality challenges.
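
As a rough illustration of the Model Health metric summarized above, the sketch below assumes Model Health is the fraction of baseline mAP retained under label corruption, consistent with "80% health" corresponding to a 20% mAP decline from the clean-label baseline; the function name and example values are hypothetical, not the paper's implementation.

```python
def model_health(map_noisy: float, map_clean: float) -> float:
    """Fraction of clean-label (baseline) mAP retained under label noise.

    Assumed interpretation from the abstract: 80% health corresponds to a
    20% drop in mAP relative to the clean-label baseline.
    """
    if map_clean <= 0:
        raise ValueError("Baseline mAP must be positive.")
    return map_noisy / map_clean


# Hypothetical example: a detector scoring 0.52 mAP after training with
# corrupted labels, against a clean-label baseline of 0.61 mAP, retains
# roughly 85% of its health (about a 15% decline).
print(f"Model Health: {model_health(0.52, 0.61):.2%}")
```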