Abstract

Video smoke detection is a promising method for early fire prevention. However, applying it in real-world detection systems remains challenging because of the limited availability of smoke image samples for training and the lack of efficient detection algorithms. This paper proposes a method for smoke detection based on two state-of-the-art fast detectors, a single-shot multi-box detector and a multi-scale deep convolutional neural network, trained on synthetic smoke image samples. The virtual data automatically provide abundant samples with ground-truth annotations. However, the appearance gap between real and synthetic smoke samples restricts how well the detectors learn smoke representations, which causes a significant performance drop. To train a strong detector with synthetic smoke samples, we incorporate domain adaptation into the fast detectors: a series of branches with the same structure as the detection branches is integrated into the detectors for domain adaptation. We design an adversarial training strategy to optimize the adapted detectors so that they learn a domain-invariant representation for smoke detection. Domain discrimination, domain confusion, and detection are combined in the iterative training procedure. In our experiments, the proposed approach outperforms the original baselines.
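
To make the adversarial adaptation scheme concrete, the sketch below shows one common way to combine a detection loss on labeled synthetic images with a domain discrimination/confusion loss on synthetic versus unlabeled real images, here realized with a gradient-reversal layer in PyTorch. This is only an illustrative sketch under assumed names (backbone, det_head, DomainBranch, det_loss_fn are hypothetical placeholders), not the paper's exact architecture or iterative procedure.

```python
# Minimal sketch of adversarial domain adaptation for a detector (PyTorch-style).
# All module and function names are illustrative placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; reverses (and scales) gradients on the
    backward pass, pushing the backbone toward domain-confusing features."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output.neg() * ctx.lambd, None


def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)


class DomainBranch(nn.Module):
    """Domain discriminator mirroring the structure of a detection branch:
    predicts whether a feature map comes from synthetic or real smoke images."""

    def __init__(self, in_channels):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, 64, kernel_size=3, padding=1)
        self.cls = nn.Conv2d(64, 1, kernel_size=1)  # one logit: real vs. synthetic

    def forward(self, feat, lambd):
        feat = grad_reverse(feat, lambd)  # adversarial signal back to the backbone
        feat = F.relu(self.conv(feat))
        return self.cls(feat)             # per-location domain logits


def train_step(backbone, det_head, dom_branch, det_loss_fn, optimizer,
               synth_images, synth_targets, real_images, lambd=0.1):
    """One iteration combining detection on labeled synthetic samples with
    domain discrimination/confusion between synthetic and unlabeled real samples."""
    optimizer.zero_grad()

    # Detection loss on labeled synthetic smoke images.
    synth_feat = backbone(synth_images)
    loss_det = det_loss_fn(det_head(synth_feat), synth_targets)

    # Domain loss on both domains; the gradient-reversal layer makes the backbone
    # try to fool the discriminator while the discriminator itself improves.
    real_feat = backbone(real_images)
    dom_synth = dom_branch(synth_feat, lambd)
    dom_real = dom_branch(real_feat, lambd)
    loss_dom = (
        F.binary_cross_entropy_with_logits(dom_synth, torch.zeros_like(dom_synth))
        + F.binary_cross_entropy_with_logits(dom_real, torch.ones_like(dom_real))
    )

    (loss_det + loss_dom).backward()
    optimizer.step()
    return loss_det.item(), loss_dom.item()
```

In this formulation, the weight lambd (an assumed hyperparameter) trades off how strongly the domain-confusion signal influences the shared features relative to the detection objective.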
