The construction of large-scale civil infrastructure requires massive spatiotemporal data to support scheduling, quality control, and safety monitoring. Existing artificial-intelligence-based data processing algorithms rely heavily on experienced engineers to tune processing parameters, which is inefficient and time-consuming for huge datasets, and few studies have compared the performance of different algorithms on a unified dataset. This study proposes a framework and evaluation system for comparing data processing strategies for massive spatiotemporal data in construction quality control. The proposed method compares combinations of the multiple types of algorithms involved in processing massive point cloud data: within this framework, the performance of each strategy is evaluated, and the optimal strategies are identified based on registration accuracy and data fidelity. Results show that a well-chosen combination of point cloud sampling, filtering, and registration algorithms can significantly improve processing efficiency while satisfying engineering demands for data accuracy and completeness. The proposed method can be applied to the common civil engineering problem of processing large point cloud datasets and selecting the optimal processing pipeline.
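To make the sampling-filtering-registration pipeline concrete, the following is a minimal sketch of one candidate strategy of the kind the framework would compare, scored by registration accuracy and a simple data-fidelity proxy. It assumes the Open3D library; the function name, file paths, and parameter values are illustrative assumptions, not the paper's actual configuration or metrics.

```python
import numpy as np
import open3d as o3d


def process_and_score(source_path, target_path,
                      voxel_size=0.05, nb_neighbors=20, std_ratio=2.0,
                      max_corr_dist=0.1):
    """Run one sampling/filtering/registration combination and score it."""
    source = o3d.io.read_point_cloud(source_path)  # hypothetical scan files
    target = o3d.io.read_point_cloud(target_path)
    n_raw = len(source.points)

    # Sampling: voxel downsampling reduces point density uniformly.
    src = source.voxel_down_sample(voxel_size)
    tgt = target.voxel_down_sample(voxel_size)

    # Filtering: statistical outlier removal discards noisy points.
    src, _ = src.remove_statistical_outlier(nb_neighbors, std_ratio)
    tgt, _ = tgt.remove_statistical_outlier(nb_neighbors, std_ratio)

    # Registration: point-to-point ICP from an identity initial guess.
    result = o3d.pipelines.registration.registration_icp(
        src, tgt, max_corr_dist, np.identity(4),
        o3d.pipelines.registration.TransformationEstimationPointToPoint())

    # Evaluation: registration accuracy via ICP metrics, fidelity via
    # the fraction of raw points surviving sampling and filtering.
    return {
        "fitness": result.fitness,           # inlier correspondence ratio
        "inlier_rmse": result.inlier_rmse,   # registration accuracy
        "fidelity": len(src.points) / n_raw  # fraction of data retained
    }


# Usage sketch: sweep parameter combinations over a unified dataset and
# keep the strategy with the best accuracy/fidelity trade-off.
# scores = process_and_score("scan_a.pcd", "scan_b.pcd", voxel_size=0.02)
```

In a strategy comparison of this kind, each algorithm choice and parameter setting defines one candidate pipeline, and the returned metrics allow candidates to be ranked on a common dataset rather than tuned by hand.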