Abstract

The use of consumer-grade unmanned aerial vehicles (UAVs) is becoming increasingly ubiquitous in photogrammetric applications. A large proportion of consumer-grade UAVs are equipped with a CMOS image sensor and a rolling shutter. When imaging with a rolling shutter camera, the image sensor is exposed line by line, which can introduce additional distortions in image space, since the UAV travels at a relatively high speed during aerial acquisitions. In this paper, we propose (1) an approach to calibrate the readout time of a rolling shutter camera, and (2) a two-step method to correct the image distortion introduced by this effect. The two-step method assumes that, during exposure, the change in camera orientation is negligible with respect to the change in camera position, which is often the case when the camera is fixed on a stabilized mount. First, the camera velocity is estimated from the results of an initial bundle block adjustment; then, one camera pose per scan-line of the image sensor is recovered and the image observations are corrected. To evaluate the performance of the proposed method, four datasets in block and corridor configurations were acquired with a DJI Mavic 2 Pro and its original Hasselblad L1D-20c camera. The proposed method is implemented in MicMac, a free, open-source photogrammetric software suite; comparisons are carried out with two other mainstream software packages, Agisoft Metashape and Pix4D, which also offer rolling shutter effect correction. For the block configuration datasets, the three software packages give comparable results. However, Agisoft Metashape and Pix4D are sensitive to the flight configuration and encounter difficulties when processing the corridor datasets. The proposed method shows good robustness in both block and corridor configurations, and is the only one that succeeds in the corridor configuration.
After applying the rolling shutter effect correction, the 3D accuracy is improved by 30–60% in block configuration and by 15–25% in corridor configuration. A further improvement can be expected if precise image timestamps are available or if the camera positions can be extracted directly from GNSS data.
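The per-scan-line camera model described above can be sketched as follows. This is a minimal illustration, not the paper's MicMac implementation: it assumes a simple pinhole camera, a constant camera velocity `v` during readout, a fixed orientation `R` (the stabilized-mount assumption), and a linear mapping from image row to exposure time. All function and parameter names are hypothetical. Because the exposed row itself depends on the projection, the sketch uses a short fixed-point iteration starting from the global-shutter projection.

```python
import numpy as np

def project(X, C, R, f, c):
    """Standard pinhole projection of 3D point X with camera
    center C, rotation R, focal length f, principal point c."""
    p = R @ (X - C)
    return f * p[:2] / p[2] + c

def rolling_shutter_project(X, C0, R, v, t_readout, img_height, f, c, iters=3):
    """Project X under a linear rolling-shutter motion model:
    the camera center moves as C(t) = C0 + v * t during readout,
    while the orientation R is assumed constant (stabilized mount).
    Row y of the sensor is exposed at time t = (y / img_height) * t_readout.
    The exposed row depends on the projection itself, so we iterate."""
    uv = project(X, C0, R, f, c)              # global-shutter initial guess
    for _ in range(iters):                     # fixed-point iteration on the row
        t = (uv[1] / img_height) * t_readout   # exposure time of current row
        uv = project(X, C0 + v * t, R, f, c)   # reproject from shifted center
    return uv
```

Inverting this model (mapping an observed rolling-shutter pixel back to its global-shutter position) gives the observation correction applied before the final bundle adjustment; with `v = 0` the function reduces to an ordinary global-shutter projection.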
