Solid-State Time-of-Flight Range Camera

Similar Papers
  • Research Article
  • Citations: 4
  • 10.5194/isprsannals-ii-5-w1-31-2013
RANGE AND PANORAMIC IMAGE FUSION INTO A TEXTURED RANGE IMAGE FOR CULTURE HERITAGE DOCUMENTATION
  • Jul 30, 2013
  • ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences
  • Z Bila + 2 more

Abstract. This paper deals with the fusion of range and panoramic images, where the range image is acquired by a 3D laser scanner and the panoramic image is acquired with a digital still camera mounted on a panoramic head and tripod. The fused dataset, called a "textured range image", provides conservators and historians with more reliable information about the investigated object than either dataset used separately. A simple example of the fusion of range and panoramic images, both obtained in St. Francis Xavier Church in the town of Opařany, is given here. Firstly, we describe the process of data acquisition, then the processing of both datasets into a format suitable for fusion, and finally the fusion itself. The fusion process can be divided into two main parts: transformation and remapping. In the transformation part, the two images are related by matching similar features detected in both images with a suitable detector, which yields a transformation matrix mapping the range image onto the panoramic image. The range data are then remapped from the range image space into the panoramic image space and stored as an additional "range" channel. The image fusion process is validated by comparing similar features extracted from both datasets.
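The transformation-and-remapping pipeline described above can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes an affine transform (the paper does not specify the transformation model), uses made-up matched feature coordinates, and a simple nearest-pixel forward mapping stands in for the remapping step.

```python
import numpy as np

def estimate_affine(src, dst):
    """Least-squares 2-D affine transform mapping src points to dst points."""
    n = len(src)
    A = np.hstack([src, np.ones((n, 1))])       # n x 3 design matrix [x, y, 1]
    # Solve A @ X ~= dst for the affine parameters, return as a 2x3 matrix
    X, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return X.T                                  # 2 x 3

def remap_range(range_img, M, out_shape):
    """Forward-map each range pixel into panoramic image space (nearest pixel)."""
    out = np.full(out_shape, np.nan)            # NaN marks "no range sample"
    h, w = range_img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    u, v = np.round(M @ pts).astype(int)        # destination pixel coordinates
    ok = (0 <= u) & (u < out_shape[1]) & (0 <= v) & (v < out_shape[0])
    out[v[ok], u[ok]] = range_img.ravel()[ok]
    return out

# Matched features (hypothetical): a pure translation by (+2, +1)
src = np.array([[0, 0], [4, 0], [0, 4], [4, 4]], float)
dst = src + [2.0, 1.0]
M = estimate_affine(src, dst)
rng_img = np.arange(9.0).reshape(3, 3)          # toy 3x3 "range image"
fused = remap_range(rng_img, M, (6, 6))         # extra "range" channel
```

The NaN fill makes pixels without a range sample easy to detect downstream; a real pipeline would also interpolate holes left by the forward mapping.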

  • Research Article
  • Citations: 25
  • 10.1016/j.patcog.2010.11.005
Multi-scale edge detection on range and intensity images
  • Nov 18, 2010
  • Pattern Recognition
  • S.A Coleman + 2 more

  • Research Article
  • Citations: 31
  • 10.1007/s11548-012-0694-5
Markerless estimation of patient orientation, posture and pose using range and pressure imaging
  • May 15, 2012
  • International Journal of Computer Assisted Radiology and Surgery
  • Robert Grimm + 4 more

In diagnostic tomographic imaging, patient setup and scanner initialization is a manual, tedious procedure in clinical practice. A fully-automatic detection of the patient's position, orientation, posture and pose on the patient table holds great potential for optimizing this part of the imaging workflow. We propose a markerless framework that is capable of extracting this information within seconds from either range imaging (RI) or pressure imaging (PI) data. The proposed method is composed of three stages: First, the position and orientation of the reclined patient are determined. Second, the patient's posture is classified. Third, based on the estimated orientation and posture, an approximate body pose is recovered by fitting an articulated model to the observed RI/PI data. Being a key issue for clinical application, our approach does not require an initialization pose. In a case study on real data from 16 subjects, the performance of the proposed system was evaluated quantitatively with a 3-D time-of-flight RI camera and a pressure sensing mattress (PI). The patient orientation was successfully determined for all subjects, independent of the modality. At the posture recognition stage, our method achieved mean classification rates of 79.4% for RI and 95.5% for PI data, respectively. Concerning the approximate body pose estimation, anatomical body landmarks were localized with an accuracy of ±5.84 cm (RI) and ±5.53 cm (PI). The results indicate that an estimation of the patient's position, orientation, posture and pose using RI and PI sensors, respectively, is feasible, and beneficial for optimizing the workflow in diagnostic tomographic imaging. Both modalities achieved comparable pose estimation results using different models that account for modality-specific characteristics. PI outperforms RI in discriminating between prone and supine postures due to the distinctive pressure distribution of the human body.

  • Research Article
  • Citations: 23
  • 10.1016/s0031-3203(00)00124-2
Segmentation based on fusion of range and intensity images using robust trimmed methods
  • Jul 6, 2001
  • Pattern Recognition
  • In Su Chang + 1 more

  • Conference Article
  • Citations: 27
  • 10.1109/ssiai.2008.4512276
Automated Facial Feature Detection from Portrait and Range Images
  • Mar 1, 2008
  • Sina Jahanbin + 2 more

We propose a novel technique to detect feature points from portrait and range representations of the face. In this technique, the appearance of each feature point is encoded using a set of Gabor wavelet responses extracted at multiple orientations and spatial frequencies. A vector of Gabor coefficients, called a jet, is computed at each pixel in the search window around a fiducial and compared with a set of jets, called a bunch, collected from training data on the same type of fiducial. The desired feature point is located at the pixel whose jet is most similar to the training bunch. This is the first time that Gabor wavelet responses have been used to detect facial landmarks in range images. The method was tested on 1146 pairs of range and portrait images, and high detection accuracies are achieved using a small number of training images. It is shown that co-localization using Gabor jets on range and portrait images results in better accuracy than using either image modality alone. The obtained accuracies are competitive with those of other techniques in the literature.
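A minimal sketch of the Gabor-jet idea, under assumed filter parameters (the paper's actual orientations, frequencies and window sizes are not given here): a complex Gabor kernel, a jet of responses at one pixel, and the phase-insensitive magnitude similarity commonly used for bunch matching.

```python
import numpy as np

def gabor_kernel(size, freq, theta, sigma=2.0):
    """Complex Gabor filter: Gaussian envelope times a complex sinusoid."""
    r = size // 2
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # coordinate along orientation
    env = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return env * np.exp(1j * 2 * np.pi * freq * xr)

def jet(image, px, py, freqs, thetas, size=9):
    """Vector of Gabor responses (a "jet") at one pixel."""
    r = size // 2
    patch = image[py - r:py + r + 1, px - r:px + r + 1]
    return np.array([np.sum(patch * gabor_kernel(size, f, t))
                     for f in freqs for t in thetas])

def jet_similarity(j1, j2):
    """Cosine similarity of jet magnitudes (phase-insensitive matching)."""
    a, b = np.abs(j1), np.abs(j2)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
img = rng.random((32, 32))                       # stand-in range/portrait image
freqs, thetas = [0.1, 0.2], [0.0, np.pi / 4, np.pi / 2]
j_a = jet(img, 16, 16, freqs, thetas)
j_b = jet(img, 16, 16, freqs, thetas)            # same pixel, identical jet
sim = jet_similarity(j_a, j_b)
```

In bunch matching, `sim` would be computed against every training jet for the fiducial and the candidate pixel with the best score wins.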

  • Conference Article
  • Citations: 1
  • 10.1117/12.969362
High-Speed 3-D Vision System Using Range And Intensity Images Covering A Wide Area
  • Mar 21, 1989
  • Tetsuo Koezuka + 4 more

The 3-D vision system we developed uses laser scanning and simultaneously produces range and intensity images covering a wide area. 3-D vision is indispensable in image processing for factory automation. Conventional, practical slit-light techniques using a TV camera have a limited, narrow measurement area, take too long to accept input images, and cannot produce range and intensity images simultaneously. We developed a camera we call the 3-D imager and a vision system based on it. The 3-D imager uses a laser diode beam to scan the measured area and obtains range and intensity data at all points on the scan line. Range measurement is based on triangulation. The vision system, which consists of a 32-bit CPU (68020) and 12M-byte image memory, has three main features: (1) 3-D measurement covers a 2048-by-3076-pixel image formed in one image input sequence. (2) Measurement is fast: the system takes 12 seconds to produce data for an entire 6-million-pixel area. (3) The system processes range and intensity data simultaneously. The 256-height-level range image is used to determine an object's shape, and the 256-gray-level intensity image to determine surface texture, markings, and other features. When used to inspect PC boards, the system detected missing, shifted, and floating components. The inspection resolution is 125 µm along the X and Y axes and 30 µm along the Z axis.
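The triangulation principle behind the range measurement can be sketched in one line; the focal length, baseline and spot offset below are illustrative values, not the 3-D imager's actual optics.

```python
# Active triangulation as used by spot/slit laser scanners: a laser spot
# viewed by a camera offset by baseline b shifts on the sensor in proportion
# to 1/range. Symbols here are generic pinhole-model quantities.
def range_from_triangulation(f_mm, baseline_mm, offset_mm):
    """Range z = f * b / d, with focal length f, baseline b, image offset d."""
    return f_mm * baseline_mm / offset_mm

z = range_from_triangulation(f_mm=16.0, baseline_mm=100.0, offset_mm=0.8)
# 16 * 100 / 0.8 = 2000 mm
```

The same relation shows why depth resolution degrades quadratically with range: a fixed sensor-offset uncertainty maps to an ever larger range uncertainty as d shrinks.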

  • Conference Article
  • 10.1117/12.652923
Registration of laser range image of cortical surface to preoperative brain MR images for image-guided neurosurgery: preliminary results
  • Mar 2, 2006
  • Baigalmaa Tsagaan + 4 more

Neurosurgical navigation systems using preoperative images have an accuracy problem caused by brain deformation during surgery. To address this problem, the use of a laser range scanner to obtain the intraoperative cortical surface is under study in the neurosurgical navigation system we are currently developing. This paper presents preliminary results of registering intraoperatively acquired range and color images to preoperative MR images, within the context of image-guided surgery. We register images by performing two procedures: mapping of the color image onto the range image, and registration between the color-mapped range images and the preoperative medical images. The color image is mapped onto the range image using camera calibration. Point-based rigid registration of preoperative images to the intraoperative images is performed through detection and matching of common fiducials in the images. Experimental results using intraoperatively acquired range images of the cortical surface demonstrated the ability to perform registration with MR images of the brain. In the future, we will focus on incorporating the above registration results into a biomechanical model of the brain to predict brain deformation during surgical procedures.
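The point-based rigid registration step over matched fiducials is typically solved in closed form. The sketch below uses the standard SVD (Kabsch/Umeyama) solution on hypothetical fiducial coordinates; it illustrates the general technique, not the authors' code.

```python
import numpy as np

def rigid_register(src, dst):
    """Closed-form rigid (rotation + translation) fit of matched fiducials:
    minimises sum ||R @ src_i + t - dst_i||^2 (Kabsch/Umeyama)."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    H = (src - mu_s).T @ (dst - mu_d)            # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    # Reflection guard: force det(R) = +1 so R is a proper rotation
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Hypothetical fiducials: rotate 90 degrees about z, then translate
theta = np.pi / 2
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0.0, 0.0, 1.0]])
src = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
dst = src @ R_true.T + [5.0, -2.0, 1.0]
R, t = rigid_register(src, dst)
err = np.linalg.norm(src @ R.T + t - dst)        # residual, ~0 for exact data
```

With noisy fiducials the same formula gives the least-squares optimum, which is why it is the standard inner step of point-based registration.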

  • Conference Article
  • 10.1117/12.154969
Three-dimensional feature extraction using data fusion method
  • Sep 3, 1993
  • Lei-Jian Liu + 2 more

As range images, obtained by active laser radar (LADAR), contain the 3-D information necessary for 3-D environment understanding, great attention has been paid, in the field of computer vision, to processing range images in order to extract the 3-D features of the environment. Unfortunately, most of the previously proposed range image processing methods are extremely time-consuming, so the use of range images to obtain 3-D information about the environment is largely limited. This presentation proposes a method, based on data fusion, to obtain the 3-D features of polyhedrons using co-registered range and intensity images. First, feature points and edges of the candidate planes of the objects in the intensity image are acquired by analyzing the intensity variation. Then, the candidate 3-D vertices, edges, and planes in the range image can be obtained using the correspondence between the two co-registered images. Next, the candidate planes are verified by computing and analyzing the curvatures and normals at some feature points and edges on the candidate planes in the range image. Finally, the verified candidates are regarded as actual planes of the sensed object and are used to construct a hierarchical representation of the object. Experimental results on simulated data are given to show the feasibility of the proposed approach.

  • Conference Article
  • Citations: 8
  • 10.1109/icip.2003.1246821
Facial expression analysis from 3D range images; comparison with the analysis from 2D images and their integration
  • Nov 24, 2003
  • T Yabui + 2 more

Although facial expression analysis from 2D luminance images is currently the mainstream, it has problems due to changes in facial pose and lighting. In this paper, we use 3D range images, which do not suffer from such problems, for facial expression analysis. We first apply the subspace method to range and luminance images and clarify their differences in image characteristics. Examining the validity of range images for facial expression analysis, we consider improving correct classification rates by integrating results from range and luminance images. We employ a linear combination for their integration and show experimental results.
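The subspace method mentioned above can be sketched as follows: fit a low-dimensional PCA subspace per class and classify a sample by the size of its projection onto each subspace. The synthetic data, subspace dimension and mean-centering choice below are assumptions for illustration.

```python
import numpy as np

def fit_subspace(X, k):
    """Top-k principal directions of class samples X (rows), via SVD."""
    Xc = X - X.mean(0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return X.mean(0), Vt[:k]                     # (class mean, k x d basis)

def similarity(x, mean, basis):
    """Squared norm of the projection of x onto the class subspace."""
    return float(np.sum((basis @ (x - mean))**2))

rng = np.random.default_rng(1)
# Two synthetic "classes" living along different principal directions
A = rng.normal(size=(50, 1)) * [3.0, 0.0, 0.0] + rng.normal(0, 0.1, (50, 3))
B = rng.normal(size=(50, 1)) * [0.0, 3.0, 0.0] + rng.normal(0, 0.1, (50, 3))
models = [fit_subspace(A, 1), fit_subspace(B, 1)]
x = np.array([2.5, 0.05, 0.0])                   # clearly class-A-like sample
pred = int(np.argmax([similarity(x, m, V) for m, V in models]))
```

Running the same classifier separately on range and luminance features and then combining the two scores linearly mirrors the integration scheme the abstract describes.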

  • Conference Article
  • Citations: 5
  • 10.1109/oceans.2002.1191864
Underwater target identification using GVF snake and zernike moments
  • Oct 29, 2002
  • Guozhi Tao + 2 more

This paper is focused on the development of robust object segmentation and shape-dependent feature extraction methods for automatic underwater target classification and identification using electro-optical imagery data. The sensor used for acquiring the data is the Streak Tube Imaging Lidar (STIL), which offers both range and contrast images with high resolution. In this paper, the gradient vector flow (GVF) snake is employed to segment the detected objects. The snake converges to the actual object boundary and provides a closed contour of the object even when some of the edges are missing. To reduce the distortion caused by missing edges, the union of the binary silhouettes for the contrast and range images is obtained. Zernike moments are then computed for the combined silhouette of the segmented object. These moments provide shape-dependent features with high discriminatory ability, which are invariant to object rotation, translation and size scaling in the image. This set of features is then used for target identification from the STIL imagery data. To aid discrimination of different objects with potentially similar shape-dependent features, the mean and variance of the contrast and range images are also computed within the closed contour and used as additional features for classification. The extracted features are applied to a multi-layer back-propagation neural network (BPNN) that performs target classification/identification. Different neural network structures are tried to determine the optimum classifier. The effectiveness of the developed algorithms is demonstrated on several data sets, and the corresponding confusion matrices are also developed.
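Two of the steps above, taking the union of the contrast and range silhouettes and computing mean/variance features inside the contour, are simple to sketch (the masks and image below are synthetic, not STIL data):

```python
import numpy as np

def union_silhouette(contrast_mask, range_mask):
    """Union of the two binary silhouettes reduces dropout from missing edges."""
    return contrast_mask | range_mask

def region_stats(image, mask):
    """Mean and variance inside the segmented contour, as extra features."""
    vals = image[mask]
    return float(vals.mean()), float(vals.var())

# Synthetic silhouettes that each miss part of the object
contrast_mask = np.zeros((8, 8), bool); contrast_mask[2:6, 2:5] = True
range_mask    = np.zeros((8, 8), bool); range_mask[2:6, 4:7]   = True
sil = union_silhouette(contrast_mask, range_mask)

# Synthetic "range image" whose value is just the column index
rng_img = np.fromfunction(lambda y, x: x * 1.0, (8, 8))
mu, var = region_stats(rng_img, sil)
```

In the paper's pipeline the union silhouette would then feed the Zernike-moment computation, with `mu`/`var` from both modalities appended to the feature vector.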

  • Conference Article
  • Citations: 27
  • 10.1109/icpr.1998.711908
Estimating pose of human face based on symmetry plane using range and intensity images
  • Aug 16, 1998
  • K Hattori + 2 more

Describes a high-speed face measurement system and an algorithm for pose estimation of a human face using both color intensity and range images. The measurement system is designed so that both color intensity and range images of a face can be obtained without occlusion at high speed; both can be acquired within 37/60 seconds. The obtained face data are expressed as 3D wire-frame models with texture. Pose estimation is performed accurately by exploiting the property that the shape of the face is almost symmetrical. The locations of the eyes and eyebrows are also detected through the pose estimation. The generated 3D face model and pose estimation results are shown. Both the system and the technique are effective for 3D face modelling. Generated models are available for many applications such as virtual reality, man-machine interfaces and teleconferencing.

  • Conference Article
  • Citations: 6
  • 10.1109/icip.2001.959149
Edge-based image segmentation using curvature sign maps from reflectance and range images
  • Oct 7, 2001
  • L Silva + 2 more

A new approach to image segmentation by edge detection is proposed that preserves object topology and shape while retrieving precisely located, one-pixel-wide edges. The method is based on mean (H) and Gaussian (K) surface curvature sign maps (HK-sign maps) computed from both registered reflectance and range images, provided by a single sensor. HK-sign maps have previously been used to identify object regions in range and intensity images, but not edges, as presented in this work. Combining the computed range and reflectance edge maps leads to more accurate segmentation results than using either of them alone. The proposed algorithm has been tested on real images and compared to four traditional range image segmentation algorithms. Experimental results demonstrate the viability and usefulness of our approach.
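The HK-sign maps can be sketched from the standard Monge-patch curvature formulas applied to a range image z(x, y); the synthetic paraboloid below and the sign convention (z treated as height) are assumptions for illustration.

```python
import numpy as np

def hk_sign_maps(z, spacing=1.0):
    """Mean (H) and Gaussian (K) curvature signs of a range image z(x, y),
    from first/second partial derivatives (Monge patch formulas)."""
    zy, zx = np.gradient(z, spacing)             # axis 0 = y, axis 1 = x
    zyy, _ = np.gradient(zy, spacing)
    zxy, zxx = np.gradient(zx, spacing)
    g = 1 + zx**2 + zy**2
    K = (zxx * zyy - zxy**2) / g**2
    H = ((1 + zx**2) * zyy - 2 * zx * zy * zxy + (1 + zy**2) * zxx) / (2 * g**1.5)
    return np.sign(H), np.sign(K)

# Synthetic range image of a paraboloid bowl: both curvatures positive at apex
y, x = np.mgrid[-8:9, -8:9] * 0.1
z = x**2 + y**2
Hs, Ks = hk_sign_maps(z, spacing=0.1)
```

Thresholding H and K near zero before taking signs is usually needed on real sensor data; the signs alone are exact only on noise-free surfaces like this one.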

  • Conference Article
  • Citations: 21
  • 10.1109/3dim.2005.3
3D Modeling of Outdoor Environments by Integrating Omnidirectional Range and Color Images
  • Jun 13, 2005
  • T Asai + 2 more

This paper describes a 3D modeling method for wide-area outdoor environments based on integrating omnidirectional range and color images. In the proposed method, outdoor scenes can be efficiently digitized by an omnidirectional laser rangefinder, which obtains a 3D shape with high accuracy, and by an omnidirectional multi-camera system (OMS), which captures a high-resolution color image. Multiple range images are registered by minimizing the distances between corresponding points in the different range images. In order to register multiple range images stably, points on planar portions detected from the range data are used in the registration process. The position and orientation acquired by RTK-GPS and a gyroscope are used as initial values for simultaneous registration. The 3D model obtained by registering the range data is texture-mapped using textures selected from the omnidirectional images, taking into consideration texture resolution and occlusions of the model. In experiments, we carried out 3D modeling of our campus with the proposed method.
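Registration by minimising distances between corresponding points in different range images is the classic ICP scheme; a toy sketch (random points, a small known offset, and none of the paper's plane selection or GPS/gyroscope initialisation) might look like:

```python
import numpy as np

def icp_step(src, dst):
    """One ICP iteration: match each source point to its nearest destination
    point, then solve the best rigid alignment in closed form (SVD)."""
    d2 = ((src[:, None, :] - dst[None, :, :])**2).sum(-1)
    matched = dst[d2.argmin(1)]                  # nearest-neighbour matches
    mu_s, mu_m = src.mean(0), matched.mean(0)
    U, _, Vt = np.linalg.svd((src - mu_s).T @ (matched - mu_m))
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                           # proper rotation
    return src @ R.T + (mu_m - R @ mu_s)         # apply rotation + translation

rng = np.random.default_rng(2)
dst = rng.random((40, 3))                        # "reference" range scan
src = dst + [0.005, -0.002, 0.001]               # slightly offset second scan
for _ in range(10):
    src = icp_step(src, dst)
rms = np.sqrt(((src - dst)**2).mean())           # residual after convergence
```

The paper's refinement of restricting matches to detected planar points serves exactly to make the nearest-neighbour step above stable on sparse outdoor scans.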

  • Conference Article
  • Citations: 2
  • 10.1109/robot.1991.131872
Geometric reasoning for world model representation based on planar patch with uncertainty from video range images
  • Apr 9, 1991
  • M Asada + 2 more

An approach to geometric reasoning for world model representations based on planar surfaces from range and video images is described. The geometric reasoning is regarded as a process of inferring the spatial extent of primal surfaces derived from the range image at an early stage. The inferring process has two subprocesses: expansion of a primal surface using a directional uncertainty defined by the moments around the axes of the plane fitted to it, and determination of the boundary shape of the expanded surface using constraints on the spatial relationships between the observed data. Experimental results on road scenes, in which the inferring process proved useful for integrating video and range image sequences, are discussed.
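The plane-with-uncertainty primitive described above can be sketched via the second-moment matrix of the surface points: the smallest-eigenvalue direction gives the plane normal, and the in-plane eigenvalues give a directional extent. The synthetic data below are illustrative, not the paper's road-scene data.

```python
import numpy as np

def fit_plane_with_uncertainty(pts):
    """Least-squares plane through 3-D points plus per-axis uncertainty,
    from the eigen-decomposition of the second-moment (covariance) matrix:
    the eigenvector of the smallest eigenvalue is the plane normal; the
    in-plane eigenvalues describe how far the patch extends along each
    principal direction."""
    c = pts.mean(0)
    cov = (pts - c).T @ (pts - c) / len(pts)
    evals, evecs = np.linalg.eigh(cov)           # eigenvalues in ascending order
    normal = evecs[:, 0]                         # smallest-variance direction
    extents = np.sqrt(evals[1:])                 # std. dev. along in-plane axes
    return c, normal, extents

rng = np.random.default_rng(3)
# Noisy samples of the plane z = 0, spread wider in x than in y
pts = np.column_stack([rng.normal(0, 2.0, 200),
                       rng.normal(0, 0.5, 200),
                       rng.normal(0, 0.01, 200)])
c, normal, extents = fit_plane_with_uncertainty(pts)
```

The anisotropy of `extents` is what a surface-expansion step can exploit: growing the patch further along the well-supported direction than along the uncertain one.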

  • Research Article
  • Citations: 1
  • 10.1088/1742-6596/1098/1/012026
Nudged Elastic Band in Analysis of Range Image High-contrast Patches
  • Sep 1, 2018
  • Journal of Physics: Conference Series
  • Kewen Cha + 1 more

Range images have attracted increasing interest lately for several reasons: for example, they can be used in object recognition, and the scene geometry of the 3D world can be understood more effectively through them. Since each range image pixel stores the distance between the laser scanner and the nearest object, every range image may be regarded as a vector in a high-dimensional space W. It is very difficult to study a set of range images X ⊆ W directly, because X has very high dimension and is very sparse in W. An efficient way to analyse range images is to study the space of small range image patches. The nudged elastic band method is a standard tool for finding minimum energy paths in computational chemistry. In this paper, Morse functions are created from data sampled from high-contrast small range image patches, one-dimensional cell complexes are built from the Morse functions using the nudged elastic band method, and topological features of the range image data are detected by a sequence of cell complexes. In particular, we show experimentally that there exist subspaces of high-contrast 3×3, 4×4, 6×6 and 7×7 range image patches whose homology is that of a circle.
