Multi-scale edge detection on range and intensity images


Similar Papers
  • Conference Article
  • Cited by 6
  • 10.1109/icip.2001.959149
Edge-based image segmentation using curvature sign maps from reflectance and range images
  • Oct 7, 2001
  • L Silva + 2 more

A new approach to image segmentation by edge detection is proposed that preserves object topology and shape while retrieving precisely located, one-pixel-wide edges. The method is based on sign maps of the mean (H) and Gaussian (K) surface curvatures (HK-sign maps) computed from both registered reflectance and range images, provided by a single sensor. HK-sign maps have previously been used to identify object regions in range and intensity images, but not edges, as presented in this work. Combining the computed range and reflectance edge maps leads to more accurate segmentation results than using either alone. The proposed algorithm has been tested on real images and compared to four traditional range image segmentation algorithms. Experimental results demonstrate the viability and usefulness of our approach.
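The HK-sign classification this abstract builds on can be sketched in a few lines. The snippet below is an illustrative computation of mean (H) and Gaussian (K) curvature signs for a range image using finite-difference derivatives of a Monge patch z = f(x, y); the dead-zone threshold `eps` and the sign convention are assumptions for the sketch, not the authors' implementation.

```python
import numpy as np

def hk_sign_maps(z, spacing=1.0, eps=1e-6):
    """Sign maps of mean (H) and Gaussian (K) curvature for a
    range image z(x, y), via finite-difference derivatives."""
    zy, zx = np.gradient(z, spacing, spacing)    # first partials
    zyy, zyx = np.gradient(zy, spacing, spacing)
    zxy, zxx = np.gradient(zx, spacing, spacing)
    # Standard curvature formulas for a Monge patch z = f(x, y)
    w = 1.0 + zx**2 + zy**2
    H = ((1 + zy**2) * zxx - 2 * zx * zy * zxy + (1 + zx**2) * zyy) / (2 * w**1.5)
    K = (zxx * zyy - zxy**2) / w**2
    # Three-valued sign maps with a small dead zone around zero
    h_sign = np.where(np.abs(H) < eps, 0, np.sign(H))
    k_sign = np.where(np.abs(K) < eps, 0, np.sign(K))
    return h_sign, k_sign

# A paraboloid z = x^2 + y^2 is convex everywhere (H > 0, K > 0).
y, x = np.mgrid[-1:1:21j, -1:1:21j]
hs, ks = hk_sign_maps(x**2 + y**2, spacing=0.1)
print(hs[10, 10], ks[10, 10])  # → 1.0 1.0 at the interior point
```

The (H-sign, K-sign) pair at each pixel then indexes the usual eight-way surface-type classification (peak, pit, ridge, valley, saddle, flat, ...), which is what the region- and edge-level processing operates on.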

  • Research Article
  • Cited by 7
  • 10.1364/oe.514027
PE-RASP: range image stitching of photon-efficient imaging through reconstruction, alignment, stitching integration network based on intensity image priors.
  • Jan 12, 2024
  • Optics Express
  • Xu Yang + 6 more

Single photon imaging integrates advanced single photon detection technology with Laser Radar (LiDAR) technology, offering heightened sensitivity and precise time measurement. This approach finds extensive applications in biological imaging, remote sensing, and non-visual field imaging. Nevertheless, current single photon LiDAR systems encounter challenges such as low spatial resolution and a limited field of view in their intensity and range images due to constraints in the imaging detector hardware. To overcome these challenges, this study introduces a novel deep learning image stitching algorithm tailored for single photon imaging. Leveraging the robust feature extraction capabilities of neural networks and the richer feature information present in intensity images, the algorithm stitches range images based on intensity image priors. This innovative approach significantly enhances the spatial resolution and imaging range of single photon LiDAR systems. Simulation and experimental results demonstrate the effectiveness of the proposed method in generating high-quality stitched single-photon intensity images, and the range images exhibit comparable high quality when stitched with prior information from the intensity images.

  • Conference Article
  • 10.1117/12.154969
Three-dimensional feature extraction using data fusion method
  • Sep 3, 1993
  • Lei-Jian Liu + 2 more

As range images obtained by active laser radar (LADAR) contain the 3-D information necessary for understanding a 3-D environment, the processing of range images to extract 3-D features has attracted great attention in the field of computer vision. Unfortunately, most previously proposed range-image processing methods are extremely time-consuming, which largely limits the use of range images for obtaining 3-D information about the environment. This paper proposes a method, based on data fusion, to obtain 3-D features of polyhedrons using co-registered range and intensity images. First, feature points and edges of the candidate planes of objects in the intensity image are acquired by analyzing intensity variation. Then, the candidate 3-D vertices, edges, and planes in the range image can be obtained using the correspondence between the two co-registered images. Next, the candidate planes are verified by computing and analyzing the curvatures and normals at some feature points and edges on the candidate planes in the range image. Finally, the verified candidates are regarded as actual planes of the sensed object and are used to construct a hierarchical representation of the object. Experimental results on simulated data show the feasibility of the proposed approach.

  • Research Article
  • Cite Count Icon 4
  • 10.3182/20110828-6-it-1002.01762
Motion Detection and Tracking Using the 3D-camera
  • Jan 1, 2011
  • IFAC Proceedings Volumes
  • Xiang Yin + 1 more


  • Book Chapter
  • 10.1007/978-3-642-04146-4_97
An Adaptive Technique for Accurate Feature Extraction from Regular and Irregular Image Data
  • Jan 1, 2009
  • Sonya Coleman + 2 more

We present a single multi-scale gradient-based feature extraction algorithm that can be directly applied to irregular or regular image data and hence can be used on both range and intensity images. We illustrate the accuracy of this approach using the Figure of Merit evaluation technique on real images, demonstrating that the application of this approach to both range and intensity images is more accurate than the equivalent approach of applying a gradient operator, such as Sobel, to an intensity image and, separately, the scan-line approximation approach to range images.

Keywords: Range Data, Gradient Operator, Feature Extraction
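The Sobel operator mentioned above as the intensity-image baseline is just a pair of 3×3 convolution kernels; a minimal sketch of computing a gradient-magnitude edge map with it (a naive valid-region convolution, chosen for clarity over speed):

```python
import numpy as np

# Sobel kernels for horizontal (x) and vertical (y) gradients
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def sobel_magnitude(image):
    """Gradient magnitude of a 2-D image via valid 3x3 convolution."""
    h, w = image.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = image[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(patch * SOBEL_X)
            gy[i, j] = np.sum(patch * SOBEL_Y)
    return np.hypot(gx, gy)

# A vertical step edge produces strong responses in the columns
# whose 3x3 patches straddle the step.
img = np.zeros((5, 6))
img[:, 3:] = 1.0
mag = sobel_magnitude(img)
print(mag[1])  # → [0. 4. 4. 0.]
```

The paper's point is that this kind of fixed-grid operator does not transfer directly to irregularly sampled range data, which is what motivates their single multi-scale operator for both domains.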

  • Conference Article
  • Cited by 27
  • 10.1109/icpr.1998.711908
Estimating pose of human face based on symmetry plane using range and intensity images
  • Aug 16, 1998
  • K Hattori + 2 more

Describes a high-speed face measurement system and an algorithm for pose estimation of a human face using both color intensity and range images. The measurement system is designed so that both color intensity and range images of a face can be obtained without occlusion at high speed; both can be acquired within 37/60 seconds. The obtained face data are expressed as 3D wire-frame models with texture. Pose estimation is executed accurately by exploiting the property that the shape of a face is almost symmetrical. Locations of the eyes and eyebrows are also detected through the pose estimation. The generated 3D face model and pose estimation results are shown. Both the system and the technique are effective for 3D face modelling, and the generated models are suitable for many applications such as virtual reality, man-machine interfaces, and teleconferencing.

  • Conference Article
  • Cited by 1
  • 10.1117/12.969362
High-Speed 3-D Vision System Using Range And Intensity Images Covering A Wide Area
  • Mar 21, 1989
  • Tetsuo Koezuka + 4 more

The 3-D vision system we developed uses laser scanning and simultaneously produces range and intensity images covering a wide area. 3-D vision is indispensable in image processing for factory automation. Conventional, practical slit-light techniques using a TV camera have a limited, narrow measurement area, take too long to accept input images, and cannot produce range and intensity images simultaneously. We developed a camera we call the 3-D imager and a vision system based on it. The 3-D imager uses a laser diode beam to scan the measured area and obtains range and intensity data at all points on the scan line. Range measurement is based on triangulation. The vision system, which consists of a 32-bit CPU (68020) and 12M-byte image memory, has three main features: (1) 3-D measurement covers a 2048-by-3076-pixel image formed in one image input sequence. (2) Measurement is fast: the system takes 12 seconds to produce data for an entire 6-million-pixel area. (3) The system processes range and intensity data simultaneously. The 256-height-level range image is used to determine an object's shape, and the 256-gray-level intensity image to determine the surface texture, markings, and other features. When used to inspect PC boards, the system detected missing, shifted, and floating components. The inspection resolution is 125 μm along the X and Y axes and 30 μm along the Z axis.
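Range by triangulation, as used by the 3-D imager above, reduces to one geometric relation: a known baseline between laser source and detector plus the two lines-of-sight angles determine the distance. The sketch below is a generic law-of-sines formulation with hypothetical parameters, not the system's actual calibration model.

```python
import math

def range_by_triangulation(baseline, laser_angle, sensor_angle):
    """Distance from the detector to a point seen from both ends
    of a known baseline (active triangulation).
    baseline: separation between laser source and detector (m);
    angles: measured from the baseline to each line of sight (rad)."""
    # The baseline and the two lines of sight form a triangle with
    # one known side and two known adjacent angles; the law of sines
    # gives the remaining sides.
    third_angle = math.pi - laser_angle - sensor_angle
    return baseline * math.sin(laser_angle) / math.sin(third_angle)

# Equilateral configuration: all angles 60 degrees, so the measured
# distance equals the baseline.
d = range_by_triangulation(1.0, math.radians(60), math.radians(60))
print(round(d, 6))  # → 1.0
```

In a scanning system like the one described, the laser angle is swept by the scanner while the sensor angle is recovered per pixel, yielding a range sample at every point on the scan line.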

  • Research Article
  • Cited by 23
  • 10.1016/s0031-3203(00)00124-2
Segmentation based on fusion of range and intensity images using robust trimmed methods
  • Jul 6, 2001
  • Pattern Recognition
  • In Su Chang + 1 more


  • Conference Article
  • Cited by 5
  • 10.1109/ijcnn.2013.6706968
Biologically inspired intensity and range image feature extraction
  • Aug 1, 2013
  • D Kerr + 3 more

The recent development of low cost cameras that capture 3-dimensional images has changed the focus of computer vision research from using solely intensity images to the use of range images, or combinations of RGB, intensity and range images. The low cost and widespread availability of the hardware to capture these images has realised many possible applications in areas such as robotics, object recognition, surveillance, manipulation, navigation and interaction. Given the large volumes of data in range images, processing and extracting the relevant information from the images in real time becomes challenging. To achieve this, much research has been conducted in the area of bio-inspired feature extraction which aims to emulate the biological processes used to extract relevant features, reduce redundancy, and process images efficiently. Inspired by the behaviour of biological vision systems, an approach is presented for extracting important features from intensity and range images, using biologically inspired spiking neural networks in order to model aspects of the functional computational capabilities of the visual system.

  • Research Article
  • Cited by 6
  • 10.1006/cviu.1994.1035
Obtaining Generic Parts from Range Images Using a Multi-view Representation
  • Jul 1, 1994
  • Computer Vision and Image Understanding
  • N Raja


  • Research Article
  • Cited by 28
  • 10.1006/ciun.1994.1030
Obtaining Generic Parts from Range Images Using a Multi-view Representation
  • Jul 1, 1994
  • CVGIP: Image Understanding
  • N.S Raja + 1 more


  • Book Chapter
  • Cited by 1
  • 10.1007/978-4-431-66942-5_36
Fusion of Range Images and Intensity Images Measured from Multiple View Points
  • Jan 1, 1996
  • Kazunori Umeda + 1 more

This paper proposes methods to fuse range images and intensity images measured from multiple view points. Distributed sensing is a key technology for multiple-robot systems. As sensory information for the robot system, range images and intensity images are both useful and complementary, and thus fusion of the two is thought to be effective. In this paper, each robot is assumed to have both a range image sensor and an intensity image sensor, and measures planar regions, 3D edges, and cylindrical regions by fusing a range image and an intensity image. Methods to fuse such features measured from multiple view points by multiple robots are proposed. They are formulated with a least-squares approach that accounts for the errors in each robot's position and orientation as well as the errors in the images. Experiments are performed to show the effectiveness of the proposed fusion methods.

  • Conference Article
  • Cited by 6
  • 10.1109/robot.1997.620030
3D shape recognition by distributed sensing of range images and intensity images
  • Apr 20, 1997
  • K Umeda + 2 more

This paper proposes methods for recognizing three-dimensional (3D) shape with range images and intensity images measured by multiple robots. Distributed sensing is a key technology for multiple-robot systems. As sensory information for the robot system, range images and intensity images are both useful and complementary, and thus fusion of the two is thought to be effective. In this paper, each robot is assumed to have a range image sensor and/or an intensity image sensor. Planar regions, 3D edges, and cylindrical regions are extracted by the distributed sensing system as robust features for 3D shape recognition. Feature extraction methods based on sensor fusion technology, together with a prototype model-matching method using these features, are proposed. Experiments are performed to show the effectiveness of the proposed feature extraction and model matching methods.

  • Conference Article
  • Cited by 2
  • 10.1109/icme.2012.167
Scene Segmentation and Pedestrian Classification from 3-D Range and Intensity Images
  • Jul 1, 2012
  • Xue Wei + 2 more

This paper proposes a new approach to classify obstacles using a time-of-flight camera, for applications in assistive navigation of the visually impaired. Combining range and intensity images enables fast and accurate object segmentation, and provides useful navigation cues such as distances to the nearby obstacles and obstacle types. In the proposed approach, a 3-D range image is first segmented using histogram thresholding and mean-shift grouping. Then Fourier and GIST descriptors are applied on each segmented object to extract shape and texture features. Finally, support vector machines are used to recognize the obstacles. This paper focuses on classifying pedestrian and non-pedestrian obstacles. Evaluated on an image data set acquired using a time-of-flight camera, the proposed approach achieves a classification rate of 99.5%.
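The first stage of the pipeline above, segmenting the range image by histogram thresholding, can be illustrated as follows. This is only a sketch of the idea: it splits depth values at the lowest histogram bin between the two dominant modes, and the bin count, peak-masking window, and synthetic two-cluster data are all assumptions, not the paper's method.

```python
import numpy as np

def histogram_threshold(depths, bins=32):
    """Split range values at the deepest valley between the two
    largest peaks of their histogram (illustrative only)."""
    hist, edges = np.histogram(depths, bins=bins)
    peak = int(np.argmax(hist))              # dominant depth mode
    # Second mode: largest bin outside a small window around the first
    masked = hist.copy()
    masked[max(0, peak - 2):min(bins, peak + 3)] = 0
    peak2 = int(np.argmax(masked))
    a, b = sorted((peak, peak2))
    valley = a + int(np.argmin(hist[a:b + 1]))  # lowest bin between modes
    return edges[valley]

# Two depth clusters: a near obstacle (~1 m) and background (~5 m)
rng = np.random.default_rng(0)
depths = np.concatenate([rng.normal(1.0, 0.05, 500),
                         rng.normal(5.0, 0.05, 500)])
t = histogram_threshold(depths)
print(1.0 < t < 5.0)  # → True: the threshold separates the clusters
```

A threshold like this separates near obstacles from the background by depth; the paper then refines the resulting regions with mean-shift grouping before feature extraction and SVM classification.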

  • Book Chapter
  • Cited by 1
  • 10.1007/978-3-642-77225-2_26
Residual Analysis for Range Image Segmentation and Classification
  • Jan 1, 1992
  • Ezzet H. Al-Hujazi + 1 more

This paper presents an algorithm for the segmentation and classification of dense range images of industrial parts. Range images are unique in that they directly approximate the physical surfaces of a real-world 3-D scene. The segmentation of images (range or intensity) is typically based on edge detection or region growing techniques; the approach presented in this paper segments range images by combining the two. Jump and roof edges are detected using residual analysis, where the residual is defined as the absolute value of the difference between the original image and a filtered version. We show that, at a jump or roof edge, this difference after smoothing has a maximum in the direction perpendicular to the edge. Each segmented surface is then classified as planar, convex, or concave. The classification is done in two steps: the first utilizes a variation of the Wald-Wolfowitz runs test to classify surfaces as planar or curved; the second further classifies each curved surface as convex or concave using a multi-scale residual computation. The performance of the algorithm on range images of a number of industrial parts is presented.
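The residual definition in this abstract, the absolute difference between an image and a smoothed version peaking at edges, is easy to see in one dimension. In the sketch below the smoothing filter is a simple edge-padded moving average chosen for illustration, not necessarily the filter the authors used.

```python
import numpy as np

def residual(signal, width=3):
    """|original - smoothed| with a moving-average filter; the ends
    are replicated so the border does not produce spurious residuals."""
    pad = width // 2
    padded = np.pad(signal, pad, mode="edge")
    kernel = np.ones(width) / width
    smoothed = np.convolve(padded, kernel, mode="valid")
    return np.abs(signal - smoothed)

# A jump (step) edge: the residual peaks at the samples flanking
# the discontinuity and vanishes on the flat regions.
step = np.array([0., 0., 0., 0., 1., 1., 1., 1.])
r = residual(step)
print(int(np.argmax(r)) in (3, 4))  # → True
```

For a roof edge (a crease where two planes meet) the same residual peaks at the crease rather than at a height jump, which is why one detector covers both edge types in the paper.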
