Abstract

Object detection, the task of recognizing and locating objects in images, is one of the fundamental problems in computer vision. It has developed rapidly over the past decades and has a wide range of applications, e.g., surveillance systems, autonomous vehicles, medical imaging diagnosis, and industrial automation systems. Because object detection is so widely adopted, a lack of robustness in object detectors can be problematic and may lead to unpredictable safety and financial losses. The main goal of this thesis is to study the robustness of object detectors and to develop robust detectors.

Robust object detectors can be viewed as detectors that maintain high accuracy under challenging conditions that would otherwise degrade performance. To develop robust object detectors, we study the robustness of object detection by understanding when and where detectors may fail. In this thesis, we focus on failure cases in three aspects: false positives, attacks, and domain adaptability.

False positives are predictions that do not match any ground-truth bounding box. Robustness to false positives is crucial for object detection, and particularly for face detection, as face detection is the very first step in facial analysis tasks. False positives waste computation and harm the accuracy of subsequent processing. To make current detectors more robust, this thesis focuses on reducing the number of false positives while keeping the detection rate well maintained. To achieve this, we first explore different post-processing classifiers for removing false positives. We then study the properties of true and false positives across face detectors and propose a framework that cascades two off-the-shelf detectors. The cascaded detector demonstrates its effectiveness and efficiency in false positive removal.

Attacks are one of the biggest threats to a detection system, especially for surveillance systems. For example, criminals can attack a surveillance system to prevent themselves from being detected. Adversarial attacks change the output of a neural network significantly by adding tiny, imperceptible perturbations to the image. We examine how robust current detectors are against adversarial attacks and provide theoretical explanations for why existing adversarial attack methods fail. We then successfully perform adversarial attacks on deep-learning-based detection networks.

Due to variations in shape and appearance, lighting conditions, and backgrounds, a model trained on source data might not perform well on target data, a problem often known as domain discrepancy. Supervised domain adaptation methods require a large number of annotated bounding boxes for object detection, which are time-consuming and expensive to obtain. Hence, an effective method that can adapt object detectors to a new domain without labels is highly desirable. In this thesis, we propose an adaptation method that addresses unsupervised domain adaptation for object detection through forward and backward cyclic adaptation. The effectiveness of the proposed method is validated on four domain-shift scenarios.
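To make the notion of a false positive concrete, the following minimal sketch labels a predicted box as a false positive when its best intersection-over-union (IoU) with every ground-truth box falls below a threshold. The 0.5 threshold and the helper names are illustrative assumptions, not criteria taken from the thesis.

```python
# Illustrative sketch (not from the thesis): a prediction is counted as a
# false positive when it overlaps no ground-truth box above an IoU threshold.

def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def is_false_positive(pred_box, gt_boxes, iou_threshold=0.5):
    """A prediction that matches no ground-truth box is a false positive."""
    return all(iou(pred_box, gt) < iou_threshold for gt in gt_boxes)
```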
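The abstract describes the cascaded framework only at a high level, so the sketch below merely illustrates the general idea of chaining two off-the-shelf detectors: the first detector proposes candidate boxes, and the second re-scores each candidate crop so that low-confidence candidates are discarded. `first_detector`, `second_detector`, and the thresholds are hypothetical placeholders, not the specific models or settings used in the thesis.

```python
# Hypothetical sketch of a two-detector cascade for false positive removal.
# `first_detector(image)` is assumed to yield (box, score) pairs and
# `second_detector(crop)` a confidence score for a cropped region.

def cascade_detect(image, first_detector, second_detector,
                   first_thresh=0.3, second_thresh=0.7):
    kept = []
    for box, score in first_detector(image):
        if score < first_thresh:
            continue                          # drop weak proposals early
        x1, y1, x2, y2 = (int(v) for v in box)
        crop = image[y1:y2, x1:x2]            # assumes an array-like image (H, W, C)
        rescore = second_detector(crop)       # second opinion on the candidate
        if rescore >= second_thresh:
            kept.append((box, rescore))
    return kept
```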
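The abstract does not specify the attack formulation developed in the thesis. As a point of reference only, the fragment below shows the standard fast gradient sign method (FGSM) of Goodfellow et al., which perturbs an image along the sign of the loss gradient, to illustrate how a tiny perturbation can change a network's output.

```python
# Standard FGSM-style perturbation, shown only to make the idea of a small,
# imperceptible adversarial perturbation concrete; it is not necessarily the
# attack formulation studied in the thesis.
import torch

def fgsm_perturb(model, image, loss_fn, target, epsilon=2.0 / 255):
    """Return an adversarially perturbed copy of `image` (float tensor in [0, 1])."""
    image = image.clone().detach().requires_grad_(True)
    loss = loss_fn(model(image), target)
    loss.backward()
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()
```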
