Abstract

Self-Organizing Maps (SOMs) are neural networks that have been widely applied in computer vision and in the simulation of visual cortex areas. Among those applications, there are successful works on neuro-inspired motion processing. In this work, we propose the Retinotopic SOM (RESOM), a neural network based on self-organizing retinotopic maps, which we apply to dynamic segmentation using background modeling. Every neuron in the network has a set of retinotopic weights analogous to the projections from the retina to the primary visual cortex. The Hebbian learning of the RESOM produces in every neuron a global model of a frame from a video sequence, yielding different reconstructions of the frame in which the common pattern learned across all neurons is the static information of the video. The background is therefore modeled by taking the expected value over all neurons and inhibiting the differences among their pattern weights. To obtain the dynamic segmentation, we use background subtraction. Experimental results on real videos taken with stationary cameras show that the RESOM segments dynamic objects in video sequences well and is robust to illumination changes.
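
As a rough illustration of the background-modeling and subtraction steps summarized above, the following Python sketch assumes each neuron stores a retinotopic weight matrix with the same dimensions as a video frame; the function names, array shapes, and threshold value are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def model_background(neuron_weights: np.ndarray) -> np.ndarray:
    """Estimate the static background as the expected value (mean)
    of the retinotopic weight matrices of all neurons.

    neuron_weights: array of shape (num_neurons, height, width),
    one learned frame reconstruction per neuron (assumed layout).
    """
    return neuron_weights.mean(axis=0)

def segment_dynamic(frame: np.ndarray, background: np.ndarray,
                    threshold: float = 25.0) -> np.ndarray:
    """Background subtraction: mark as moving the pixels whose absolute
    difference from the modeled background exceeds the threshold.

    Returns a binary mask (uint8) of the same shape as the frame.
    """
    diff = np.abs(frame.astype(np.float32) - background.astype(np.float32))
    return (diff > threshold).astype(np.uint8)
```
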
