Abstract
Massive spatial data requires considerable computing power for real-time processing. With the advances in multicore technology and the falling cost of computer components in recent years, high-performance clusters have become the only economically viable option for meeting this requirement. Massive spatial data processing, however, demands heavy I/O operations and should be characterized as a data-intensive application. Parallelization strategies for data-intensive applications are incompatible with currently available processing frameworks, which are designed primarily for traditional compute-intensive applications. In this paper we introduce a Split-and-Merge paradigm for spatial data processing and propose a robust parallel framework in a cluster environment to support it. The Split-and-Merge paradigm efficiently exploits data parallelism for massive data processing. The proposed framework is based on the open-source TORQUE project and hosted on a multicore-enabled Linux cluster. One common LiDAR point cloud algorithm, Delaunay triangulation, was implemented on the proposed framework to evaluate its efficiency and scalability. Experimental results demonstrate that the system achieves efficient speedup.
Highlights
Spatial datasets in many fields, such as laser scanning, continue to grow with improvements in data acquisition technologies.
This paper proposes a general parallel framework on a high-performance cluster (HPC) platform to facilitate the transition from a single-core personal computer (PC) to an HPC context.
In Jonker's classification, the kernel of a point operation acts on a single pixel or feature, while in a local neighborhood operation the neighboring elements participate in processing the current element. This characteristic provides the basis for LiDAR point cloud processing in a Split-and-Merge paradigm; a sketch of the two operation classes follows these highlights.
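The distinction can be illustrated with a minimal sketch, assuming NumPy and SciPy; the function names, the vertical offset, and the 2 m search radius are illustrative assumptions, not values from the paper. A point operation (here, a vertical datum shift) touches each point independently, so tiles can be split with no overlap, whereas a local neighborhood operation (here, height smoothing within a radius) needs each point's neighbors, so every tile must carry a buffer around its border before it is handed to a worker.

import numpy as np
from scipy.spatial import cKDTree

def point_operation(points, offset=-1.0):
    """Point operation: each point is processed independently (a vertical
    datum shift), so tiles can be split with no overlap."""
    out = points.copy()
    out[:, 2] += offset
    return out

def neighborhood_operation(points, radius=2.0):
    """Local neighborhood operation: each point needs its neighbours
    (mean height within `radius`), so each tile must include a buffer
    of width `radius` around its border before being split off."""
    tree = cKDTree(points[:, :2])
    smoothed = points.copy()
    for i, p in enumerate(points):
        idx = tree.query_ball_point(p[:2], r=radius)
        smoothed[i, 2] = points[idx, 2].mean()
    return smoothed

points = np.random.rand(1000, 3) * 100.0   # toy stand-in for a LiDAR cloud
shifted = point_operation(points)
smoothed = neighborhood_operation(points)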
Summary
Spatial datasets in many fields, such as laser scanning, continue to grow with improvements in data acquisition technologies. The size of LiDAR point clouds has increased from gigabytes to terabytes, and even to petabytes, requiring significant computing resources to process them in a short time. This is well beyond the capability of a single desktop personal computer (PC). Processing massive LiDAR point clouds is inherently different from classical compute-intensive applications; these data-intensive applications devote most of their processing time to input/output (I/O) operations and the manipulation of input data. This paper proposes a general parallel framework on an HPC platform to facilitate the transition from a single-core PC to an HPC context. The framework defines a Split-and-Merge programming paradigm for users and programmers.
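A minimal sketch of such a Split-and-Merge workflow is given below, assuming Python with NumPy, SciPy, and the standard multiprocessing module; the tile size, the helper names (split_into_tiles, triangulate_tile, merge_results), and the random test data are illustrative assumptions, not the API of the paper's framework. The split step partitions the cloud into square tiles, each worker runs a 2D Delaunay triangulation on its tile, and the merge step collects the per-tile results.

import numpy as np
from multiprocessing import Pool
from scipy.spatial import Delaunay

def split_into_tiles(points, tile_size=100.0):
    """Split: partition points into square tiles keyed by (ix, iy)."""
    keys = np.floor(points[:, :2] / tile_size).astype(int)
    tiles = {}
    for key, pt in zip(map(tuple, keys), points):
        tiles.setdefault(key, []).append(pt)
    return {k: np.asarray(v) for k, v in tiles.items()}

def triangulate_tile(item):
    """Per-tile kernel: 2D Delaunay triangulation of one tile's points."""
    key, pts = item
    if len(pts) < 3:          # Delaunay needs at least three points
        return key, None
    return key, Delaunay(pts[:, :2]).simplices

def merge_results(results):
    """Merge: collect per-tile triangulations into one dictionary.
    (A full merge would also stitch triangles across tile boundaries.)"""
    return {key: tri for key, tri in results if tri is not None}

if __name__ == "__main__":
    points = np.random.rand(100000, 3) * 1000.0   # stand-in for a LiDAR cloud
    tiles = split_into_tiles(points)
    with Pool() as pool:                          # one worker per CPU core
        results = pool.map(triangulate_tile, tiles.items())
    merged = merge_results(results)
    print(f"{len(merged)} tiles triangulated")

In the paper's setting the per-tile jobs would be dispatched through the TORQUE scheduler across cluster nodes rather than through a local process pool; the sketch only shows the structure of the paradigm.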