Abstract

This paper describes the unification of flame and smoke detection algorithms by merging their common steps into a single processing flow. The scenario discussed in this manuscript assumes fixed surveillance cameras, which allows background subtraction to be used to detect changes in a scene. Because background subtraction is imperfect, foreground pixels belonging to the same real object are often separated; these pixels are united by morphological operations. All pixels are then labeled by a connected-components labeling algorithm, and tiny objects are removed, since only noticeable smoke and flames need to be detected. These shared steps are executed only once, after which separate smoke and flame branches start from the same input image obtained after removing tiny objects. Smoke detection applies color-probability, boundary-roughness, edge-density, and area-variability filters; flame detection applies color-probability, boundary-roughness, and area-variability filters. Preliminary results show that this unification makes the processing time comparable to that of a single smoke detection algorithm when the smoke and flame branches run in parallel. Even when the whole algorithm runs on a single thread, the processing time is still lower than running smoke and fire detection without unification. The output of the unified processing stage can also serve as input for other tasks of intelligent surveillance systems.
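For illustration, the shared front end described in the abstract (background subtraction, morphological closing to unite fragmented pixels, connected-components labeling, and tiny-object removal) could be sketched in pure NumPy roughly as follows. The simple frame-differencing subtractor, the 3x3 structuring element, and the `min_area` threshold are assumptions made for this sketch, not the paper's actual parameter choices.

```python
import numpy as np
from collections import deque

def subtract_background(frame, background, thresh=25):
    """Foreground mask by absolute differencing against a background frame
    (a stand-in for whatever background model the system maintains)."""
    return np.abs(frame.astype(int) - background.astype(int)) > thresh

def close_mask(mask):
    """Morphological closing (dilate, then erode) with a 3x3 square
    structuring element; unites fragmented foreground pixels."""
    h, w = mask.shape
    pad = np.pad(mask, 1)                 # zero padding
    dil = np.zeros_like(mask)
    for dy in (-1, 0, 1):                 # dilation: OR over 3x3 neighborhood
        for dx in (-1, 0, 1):
            dil |= pad[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
    pad = np.pad(dil, 1)
    ero = np.ones_like(mask)
    for dy in (-1, 0, 1):                 # erosion: AND over 3x3 neighborhood
        for dx in (-1, 0, 1):
            ero &= pad[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
    return ero

def remove_tiny(mask, min_area=4):
    """Connected-components labeling (4-connectivity, BFS), then removal of
    components smaller than min_area; the result feeds both branches."""
    h, w = mask.shape
    seen = np.zeros_like(mask, dtype=bool)
    out = np.zeros_like(mask, dtype=bool)
    for y in range(h):
        for x in range(w):
            if mask[y, x] and not seen[y, x]:
                comp, q = [], deque([(y, x)])
                seen[y, x] = True
                while q:
                    cy, cx = q.popleft()
                    comp.append((cy, cx))
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
                if len(comp) >= min_area:     # keep only noticeable objects
                    for cy, cx in comp:
                        out[cy, cx] = True
    return out
```

The mask returned by `remove_tiny` is the single shared input from which the smoke branch (color probability, boundary roughness, edge density, area variability) and the flame branch (color probability, boundary roughness, area variability) would both proceed, either in parallel or sequentially on one thread.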
