Abstract

Anomaly detection in surveillance video remains a crucial and challenging task across many fields, aimed at protecting citizens, society, and property from abnormal events. Currently, the most common solution combines autoencoder models with an unsupervised training scheme. Because abnormal samples are scarce, all training samples are typically treated as normal when training the autoencoder, which generates reconstructed video frames with the same dimensions as the input. A test sample is then identified by computing the reconstruction error between the raw and reconstructed video frames. However, such a framework proves impractical for nuclear security, as the threshold on the reconstruction error is overly sensitive, often leading to extreme and unacceptable results. In this manuscript, a novel framework for anomaly detection in nuclear security is proposed. Unlike the common unsupervised scheme, this approach constructs two label-specific autoencoders: one trained exclusively on normal samples and the other fed only abnormal samples. Consequently, no reconstruction-error threshold needs to be predefined, since classification instead compares the two label-specific reconstruction errors. In addition, two training schemes for the label-specific autoencoders are proposed and analyzed. On a self-collected dataset for nuclear security, the results show that the proposed framework is both practical and applicable, with both training schemes achieving an acceptable accuracy of 0.8182.
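The threshold-free decision rule described above can be sketched as follows. This is a minimal illustrative example, not the paper's implementation: it substitutes closed-form linear autoencoders (PCA projections) for the deep autoencoders used in the manuscript, and the 8-dimensional synthetic vectors stand in for video frames. All class names and data here are assumptions for illustration only.

```python
import numpy as np

class LinearAutoencoder:
    """Closed-form linear autoencoder: encodes/decodes with the top-k
    principal components of its training data (a simple stand-in for a
    deep autoencoder)."""
    def __init__(self, n_components=1):
        self.n_components = n_components

    def fit(self, X):
        self.mean_ = X.mean(axis=0)
        # SVD of the centered data yields the optimal linear encoder/decoder.
        _, _, vt = np.linalg.svd(X - self.mean_, full_matrices=False)
        self.components_ = vt[: self.n_components]
        return self

    def reconstruct(self, X):
        Z = (X - self.mean_) @ self.components_.T    # encode
        return Z @ self.components_ + self.mean_     # decode

def reconstruction_error(model, x):
    x = np.atleast_2d(x)
    return float(np.mean((model.reconstruct(x) - x) ** 2))

def classify(x, ae_normal, ae_abnormal):
    """Assign the label whose autoencoder reconstructs x better --
    no predefined threshold is needed."""
    e_n = reconstruction_error(ae_normal, x)
    e_a = reconstruction_error(ae_abnormal, x)
    return "normal" if e_n <= e_a else "abnormal"

# Synthetic stand-in data: "frames" as 8-D vectors, where normal and
# abnormal samples vary along different axes.
rng = np.random.default_rng(0)
normal = rng.normal(size=(200, 1)) * np.eye(8)[0] + rng.normal(0, 0.05, (200, 8))
abnormal = rng.normal(size=(200, 1)) * np.eye(8)[3] + rng.normal(0, 0.05, (200, 8))

# One label-specific autoencoder per class.
ae_n = LinearAutoencoder(1).fit(normal)
ae_a = LinearAutoencoder(1).fit(abnormal)

print(classify(np.eye(8)[0] * 2.0, ae_n, ae_a))   # varies along the normal axis
print(classify(np.eye(8)[3] * 2.0, ae_n, ae_a))   # varies along the abnormal axis
```

Note how the comparison `e_n <= e_a` replaces the sensitive threshold of the single-autoencoder scheme: a sample is labeled by whichever label-specific model explains it better, which is the core idea of the proposed framework.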
