Abstract

Time-of-Flight (ToF) cameras are active sensors that capture both the light intensity reflected by each observed point in the scene and the distance between those points and the camera. Enhancing intensity images with a depth modality enables capturing surfaces in 3D and broadens the applicability of these sensors. Nevertheless, high-level information still needs to be extracted from the data stream in order to accomplish high-level tasks, such as recognition or classification. Ideally, the semantic gap between sensor output and high-level requirements should be as small as possible, in order to reduce both computational cost and failure probability. An additional depth modality helps in this regard, but a ToF sensor can perceive further cues that have so far remained underexploited. In this paper we take the first steps towards a trimodal ToF camera, which adds a valuable material modality to the classical intensity and depth modalities.
