Abstract
Automatic detection of empty spaces (gaps) between displayed products in images of supermarket shelves is an interesting image segmentation problem. This paper presents the first known attempt to solve this commercially relevant challenge. The shelf image is first over-segmented into a number of superpixels, from which a graph of superpixels (SG) is constructed. Subsequently, a graph convolutional network and a Siamese network are built to process the SG. Finally, a structural support vector machine based inference model is formulated on the SG to segment the gap and non-gap regions. In order to validate our method, we manually annotate shelf images from three benchmark datasets of retail products. We achieve ∼70 to ∼85% segmentation accuracy (in terms of mean intersection-over-union) on the annotated datasets. A part of the annotated data is released at https://github.com/gapDetection/gapDetectionDatasets.
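The abstract describes a pipeline whose first stage is over-segmentation into superpixels and construction of the superpixel graph (SG). The sketch below illustrates only that stage and is not the authors' implementation: the SLIC algorithm, mean-colour node features, 4-connected pixel adjacency, and the file name shelf.jpg are all assumptions made for illustration.

```python
# Minimal sketch (assumed details, not the paper's released code):
# over-segment a shelf image into superpixels and build the superpixel graph (SG).
import numpy as np
import networkx as nx
from skimage import io, segmentation

def build_superpixel_graph(image_path, n_segments=400):
    image = io.imread(image_path)
    # Over-segment the image into roughly n_segments superpixels (SLIC assumed).
    labels = segmentation.slic(image, n_segments=n_segments,
                               compactness=10, start_label=0)
    sg = nx.Graph()
    # One node per superpixel; mean RGB colour used as a simple node feature.
    for lab in np.unique(labels):
        mask = labels == lab
        sg.add_node(int(lab), feature=image[mask].mean(axis=0))
    # Connect superpixels that share a horizontal or vertical pixel boundary.
    right = labels[:, :-1] != labels[:, 1:]
    down = labels[:-1, :] != labels[1:, :]
    for a, b in zip(labels[:, :-1][right], labels[:, 1:][right]):
        sg.add_edge(int(a), int(b))
    for a, b in zip(labels[:-1, :][down], labels[1:, :][down]):
        sg.add_edge(int(a), int(b))
    return labels, sg

labels, sg = build_superpixel_graph("shelf.jpg")  # placeholder image path
print(sg.number_of_nodes(), "superpixels,", sg.number_of_edges(), "edges")
```

In the paper's pipeline, a graph convolutional network and a Siamese network would then operate on such a graph before the structural SVM inference step; those stages are not sketched here.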