Abstract

Real-time 3D scanning of a scene or object using multiple depth cameras is required in many applications but remains a challenging task for the computer vision community, especially when the object or scene is partially occluded and dynamic. If active depth sensors are used in this setting, the quality of the resulting depth maps degrades due to interference between the active radiation emitted by each sensor. Passive 3D sensors such as stereo cameras avoid the interference issue because they emit no radiation, but they suffer from the correspondence problem. Since the release of the commodity depth sensor Microsoft Kinect, researchers have become increasingly interested in active depth sensing. However, Kinect sensors have some easily noticeable limitations for 3D reconstruction: they provide depth maps only over a limited range, their field of view is restricted, and holes appear in the depth map due to occlusion. These limitations can be overcome by using multiple Kinect sensors simultaneously instead of a single one; the remaining challenge is then to avoid interference between the sensors. We present a comprehensive review of possible solutions for avoiding interference between multiple Kinect sensors. Furthermore, we introduce the Kinect technology in detail, along with applications in the literature that use multiple Kinect sensors. We expect this paper to be helpful to researchers who want to use multiple Kinect sensors sharing a workspace in their research.
