Abstract

In this research, we consider the use of the inverse perspective transformation in video surveillance applications that observe (and possibly influence) scenes consisting of moving and stationary objects; e.g., people in a parking area. In previous research, objects were detected in video streams and identified as moving or stationary. Subsequently, distance maps were generated by the Fast Exact Euclidean Distance (FEED) transformation, which uses frame-to-frame information to generate distance maps for video frames in a fast manner. From the resulting distance maps, different kinds of surveillance parameters can be derived. The camera was placed above the scene and, hence, no inverse perspective transformation was needed. In this work, we consider the case where the camera is placed at an arbitrary angle at the side of the scene, which may be a more feasible placement than on top. It will be shown that an image taken from a camera at the side can be converted easily and quickly into an image as it would be taken by a camera at the top. This allows the use of the previously developed methods after converting each frame of a video stream, or only the objects of interest detected in them.
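As a minimal illustration of the kind of side-to-top conversion the abstract describes, the sketch below uses a planar homography (via OpenCV) to warp a side-view frame onto a simulated overhead view. The four point correspondences are hypothetical placeholders; in practice they would come from the camera placement or a calibration step, and the paper's own method may differ in detail.

```python
import cv2
import numpy as np

# Hypothetical correspondences: four ground-plane points as seen in the
# side-view camera frame, and where they should land in the desired
# top-down ("bird's-eye") view. Real values depend on camera placement.
side_view_pts = np.float32([[220, 460], [1060, 460], [780, 250], [380, 250]])
top_view_pts = np.float32([[200, 700], [1000, 700], [1000, 100], [200, 100]])

# 3x3 homography mapping the side view onto the ground plane (top view).
H = cv2.getPerspectiveTransform(side_view_pts, top_view_pts)

def frame_to_top_view(frame, size=(1200, 800)):
    """Warp a full video frame to the simulated overhead view."""
    return cv2.warpPerspective(frame, H, size)

def points_to_top_view(points):
    """Warp only detected object points (e.g., object footprints),
    which is cheaper than warping every pixel of the frame."""
    pts = np.float32(points).reshape(-1, 1, 2)
    return cv2.perspectiveTransform(pts, H).reshape(-1, 2)
```

Warping only the detected objects, as in `points_to_top_view`, reflects the option mentioned above of converting either whole frames or just the objects of interest.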
