Abstract
Little research has been conducted to date on the automated monitoring of unorganised stores: 80% of current warehouses are manually operated, while only 5% are fully automated. The work presented in this paper offers a new perspective on store monitoring, based on the reconstruction of semantic 3D models of indoor scenes. Our system collects dense coloured point clouds of the scene from different positions of a 3D laser scanner. First, the accumulated point cloud is segmented into merchandise points and building-structure points using the MLSAC algorithm. The merchandise points are then further segmented and classified against a database of object models. The system has been trained with a supervised learning algorithm on patterns based on the shape and colour of the merchandise objects. The output is a 3D semantic model of the scene in which the recognised objects are placed at precise positions. The system also assesses the storehouse in terms of stock and occupied/unoccupied volumes, and suggests placements for new incoming packages. The scanning platform consists of a mobile robot carrying a 3D laser scanner and a DSLR camera, which follows a pre-established path for each specific scenario. This approach has been tested on real data and evaluated against a ground-truth model in simulated scenarios, achieving high recognition rates and low positioning errors.
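The first pipeline stage, separating building-structure points (floor, walls) from merchandise points, can be illustrated with a minimal sample-consensus sketch. The snippet below is a simplified RANSAC-style plane fit, not the paper's actual MLSAC implementation; all function names and parameter values are illustrative, and the synthetic "floor" and "box" clouds stand in for real scanner data.

```python
import numpy as np

def fit_dominant_plane(points, n_iters=200, threshold=0.05, rng=None):
    """Find inliers of the dominant plane in an Nx3 point cloud using a
    RANSAC-style sample-consensus loop (a simplified stand-in for MLSAC)."""
    rng = np.random.default_rng(rng)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iters):
        # Hypothesise a plane from 3 random points.
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:            # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal @ sample[0]
        # Score by counting points within `threshold` of the plane.
        inliers = np.abs(points @ normal + d) < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers

# Synthetic scene: a flat 10x10 m "floor" plus a raised box of
# "merchandise" points sitting 1-2 m above it.
rng = np.random.default_rng(0)
floor = np.column_stack([rng.random((500, 2)) * 10, np.zeros(500)])
box = np.column_stack([rng.random((100, 2)) + 4, rng.random(100) + 1])
cloud = np.vstack([floor, box])

structure_mask = fit_dominant_plane(cloud, rng=rng)
merchandise = cloud[~structure_mask]   # points not on the dominant plane
```

In a real store, this step would be run repeatedly to peel off floor, ceiling, and wall planes before the remaining points are passed to the object classifier.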