Abstract
Self-Organizing Maps (SOMs) are neural networks that have been widely applied in computer vision and in the simulation of visual cortex areas, including successful work on neuro-inspired motion processing. In this work, we propose the Retinotopic SOM (RESOM), a neural network based on self-organizing retinotopic maps, applied to dynamic segmentation through background modeling. Every neuron in the network has a set of retinotopic weights, analogous to the projections from the retina to the primary visual cortex. The Hebbian learning of the RESOM turns each neuron into a global model of a frame from a video sequence, yielding different reconstructions of the frame in which the common pattern learned across all neurons is the static information of the video. The background is therefore modeled by taking the expected value over all neurons and inhibiting the differences among their pattern weights. To obtain a dynamic segmentation, we apply the background subtraction method. Experimental results on real videos captured with stationary cameras show that the RESOM segments dynamic objects in video sequences with good performance and is robust to illumination changes.
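The background-modeling step summarized above can be sketched as follows. This is a minimal, hypothetical illustration (not the paper's implementation): it assumes each neuron produces a grayscale reconstruction of the frame, models the background as the expected value (mean) over those reconstructions, and segments a new frame by thresholding its absolute difference from that background, i.e. plain background subtraction. The function names, array shapes, and the threshold value are illustrative assumptions.

```python
import numpy as np

def model_background(reconstructions):
    """Background model as the expected value over per-neuron reconstructions.

    reconstructions: array of shape (n_neurons, H, W), grayscale intensities.
    (Hypothetical sketch; the actual RESOM also inhibits differences among
    the neurons' pattern weights, which is omitted here.)
    """
    return np.mean(reconstructions, axis=0)

def segment(frame, background, threshold=30.0):
    """Binary foreground mask via simple background subtraction."""
    diff = np.abs(frame.astype(float) - background)
    return (diff > threshold).astype(np.uint8)

# Toy example: three neurons reconstruct a nearly identical static scene.
recon = np.stack([np.full((4, 4), 100.0),
                  np.full((4, 4), 102.0),
                  np.full((4, 4), 98.0)])
bg = model_background(recon)          # uniform background near 100

frame = np.full((4, 4), 100.0)
frame[1, 1] = 200.0                   # one pixel of a "moving object"
mask = segment(frame, bg)             # foreground only at (1, 1)
```

Because the background is an average over several reconstructions of the same static scene, small per-neuron variations (here, the 98/100/102 spread) cancel out, while a genuinely dynamic pixel exceeds the threshold and is marked as foreground.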