Abstract

A prominent problem in computer vision is occlusion, which occurs when an object's key features temporarily disappear behind another crossing body, causing the computer to struggle with object detection. While the human brain can compensate for the invisible parts of the blocked object, computers lack such scene-interpretation skills. Cloud computing using convolutional neural networks is typically the method of choice for handling such a scenario. However, for mobile applications where energy consumption and computational costs are critical, reliance on cloud computing should be minimized. In this regard, a computer vision sensor capable of efficiently detecting and tracking covered objects without heavy reliance on occlusion-handling software is proposed. The edge-computing sensor accomplishes this task by self-learning the object prior to the moment of occlusion and uses this information to "reconstruct" the blocked invisible features. Furthermore, the sensor is capable of tracking a moving object by predicting the path it will most likely take while traveling out of sight behind an obstructing body. Finally, sensor operation is demonstrated by exposing the device to various simulated occlusion events. An interactive preprint version of the article can be found at https://www.authorea.com/doi/full/10.22541/au.164192000.02603887/.
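To illustrate the general idea of tracking through occlusion by prediction (not the authors' specific sensor design, which is not detailed in the abstract), the sketch below shows a minimal constant-velocity tracker: while the object is visible it learns the object's motion from successive measurements, and while the object is occluded it extrapolates the last known trajectory. All names and the dead-reckoning model are illustrative assumptions.

```python
from dataclasses import dataclass


@dataclass
class ConstantVelocityTracker:
    """Toy tracker: learns a velocity estimate from visible frames
    and extrapolates the position while the object is occluded.
    This is an illustrative sketch, not the proposed sensor's method."""
    x: float = 0.0
    y: float = 0.0
    vx: float = 0.0
    vy: float = 0.0

    def update(self, measurement):
        """measurement: (x, y) while the object is visible, None while occluded."""
        if measurement is not None:
            mx, my = measurement
            # Visible: learn velocity from the displacement since the last frame.
            self.vx, self.vy = mx - self.x, my - self.y
            self.x, self.y = mx, my
        else:
            # Occluded: predict the position by dead reckoning.
            self.x += self.vx
            self.y += self.vy
        return self.x, self.y


# Object moves right at one unit per frame, then passes behind an obstruction.
tracker = ConstantVelocityTracker()
for pos in [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]:
    tracker.update(pos)          # visible frames
print(tracker.update(None))      # occluded frame: predicted (3.0, 0.0)
print(tracker.update(None))      # occluded frame: predicted (4.0, 0.0)
```

A practical system would replace the constant-velocity model with a Kalman filter or a learned motion model, but the principle is the same: the state estimated before occlusion carries the track through the gap.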
