Abstract
Stratigraphic archaeological excavations demand high-resolution documentation techniques for 3D recording. Today, this is typically accomplished using total stations or terrestrial laser scanners. This paper demonstrates the potential of another technique that is low-cost and easy to execute. It takes advantage of software using Structure from Motion (SfM) algorithms, which are known for their ability to reconstruct camera pose and three-dimensional scene geometry (rendered as a sparse point cloud) from a series of overlapping photographs captured by a camera moving around the scene. When complemented by stereo matching algorithms, detailed 3D surface models can be built from such relatively oriented photo collections in a fully automated way. The absolute orientation of the model can be derived from the manual measurement of control points. The approach is extremely flexible and appropriate for a wide variety of imagery, because this computer vision approach can also work with imagery resulting from a randomly moving camera (i.e. uncontrolled conditions) and calibrated optics are not a prerequisite. For a few years now, these algorithms have been embedded in several free and low-cost software packages. This paper outlines how such a program can be applied to map archaeological excavations in a very fast and uncomplicated way, using imagery shot with a standard compact digital camera (even if the images were not taken for this purpose). Archived data from previous excavations of VIAS-University of Vienna was chosen, and the derived digital surface models and orthophotos were examined for their usefulness for archaeological applications. The absolute georeferencing of the resulting surface models was performed with the manual identification of fourteen control points. In order to express the positional accuracy of the generated 3D surface models, the NSSDA guidelines were applied. Simultaneously acquired terrestrial laser scanning data, processed in our standard workflow, was used to independently check the results. The vertical accuracy of the surface models generated by SfM was found to be within 0.04 m at the 95% confidence interval, while several visual assessments indicated a very high horizontal positional accuracy as well.
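To illustrate the relative orientation step described above, the following is a minimal two-view sketch using OpenCV and NumPy (an assumed toolchain, not the software package evaluated in this paper): it matches features between one pair of overlapping photographs, estimates the essential matrix, recovers the relative camera pose and triangulates a sparse point cloud. A full SfM pipeline chains many such pairs, refines all poses by bundle adjustment, and adds dense stereo matching to obtain a detailed surface model.

```python
# Minimal two-view Structure-from-Motion sketch (illustrative only).
import cv2
import numpy as np

def two_view_sfm(img_path1, img_path2, K):
    """Return the relative pose (R, t) and a sparse point cloud for one image pair.

    K is the 3x3 camera matrix; a rough guess (e.g. derived from the EXIF focal
    length) is often sufficient for uncalibrated consumer cameras.
    """
    img1 = cv2.imread(img_path1, cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread(img_path2, cv2.IMREAD_GRAYSCALE)

    # 1. Detect and describe local image features (SIFT).
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # 2. Match descriptors and keep unambiguous matches (Lowe's ratio test).
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = [m for m, n in matcher.knnMatch(des1, des2, k=2)
               if m.distance < 0.75 * n.distance]
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # 3. Estimate the essential matrix robustly and recover the relative pose.
    E, _ = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                prob=0.999, threshold=1.0)
    _, R, t, pose_mask = cv2.recoverPose(E, pts1, pts2, K)

    # 4. Triangulate the inlier correspondences into a sparse point cloud
    #    (expressed in the arbitrary scale of the relative orientation).
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    inliers = pose_mask.ravel() > 0
    pts4d = cv2.triangulatePoints(P1, P2, pts1[inliers].T, pts2[inliers].T)
    points3d = (pts4d[:3] / pts4d[3]).T
    return R, t, points3d
```

The resulting model is only relatively oriented; as stated above, its absolute orientation and scale are obtained afterwards from manually measured control points.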
Highlights
The process of archaeological excavation aims at a complete description of a site's unique stratification
To incorporate all possible uncertainties in the computed dataset, the final vertical accuracy values are expressed at the 95% confidence interval using the National Standard for Spatial Data Accuracy (NSSDA): 1.96 × RMSEz [18] (see the sketch after these highlights)
The method is mainly based on several computer vision techniques and is very straightforward to execute and to integrate into the general excavation methodology
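The NSSDA statistic quoted in the highlights reduces to a few lines of arithmetic. Below is a minimal sketch, assuming matched model and check-point heights at independent control points; the variable names are illustrative and not taken from the paper's workflow.

```python
# NSSDA vertical accuracy at the 95% confidence level: 1.96 * RMSEz,
# assuming normally distributed vertical errors.
import numpy as np

def nssda_vertical_accuracy(z_model, z_check):
    """Return 1.96 * RMSEz from model heights and independent check heights."""
    dz = np.asarray(z_model, dtype=float) - np.asarray(z_check, dtype=float)
    rmse_z = np.sqrt(np.mean(dz ** 2))
    return 1.96 * rmse_z

# Example with a few check-point heights in metres:
# print(nssda_vertical_accuracy([412.31, 412.87, 413.02], [412.30, 412.85, 413.05]))
```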
Summary
The process of archaeological excavation aims at a complete description of a site's unique stratification. Terrestrial laser scanning (TLS) has been proposed as a sophisticated method to produce an accurate and detailed surface model [1,2,3]. Due to the high acquisition costs of the instruments, TLS is for the time being rarely applied at archaeological excavations. The research field of computer vision, which has close ties to photogrammetry, is developing innovative algorithms and techniques to obtain 3D information from photographs in a simple and flexible way without many prerequisites. These are embedded in several free and low-cost computer vision software packages, which allow an extremely flexible and appropriate approach to modelling surfaces from a wide variety of imagery. In order to assess the accuracy of the method, the 3D surface models are compared to surface models generated from simultaneously acquired TLS data.
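As a hedged sketch of how such a comparison against TLS data could be set up (not necessarily the exact workflow used in the paper), the following samples an SfM-derived surface model grid at the positions of TLS check points and returns the vertical deviations; all names and the assumed grid layout are illustrative.

```python
# Compare an SfM-derived digital surface model (DSM) grid with TLS points.
import numpy as np

def dsm_minus_tls(dsm, x0, y0, cell, tls_xyz):
    """Vertical differences DSM - TLS at the nearest grid cell of each TLS point.

    dsm     : 2D array of heights, row 0 at the northern edge of the grid
    x0, y0  : georeferenced coordinates of the upper-left grid corner
    cell    : grid spacing in metres
    tls_xyz : (N, 3) array of TLS points in the same coordinate system
    """
    cols = np.round((tls_xyz[:, 0] - x0) / cell).astype(int)
    rows = np.round((y0 - tls_xyz[:, 1]) / cell).astype(int)
    inside = (rows >= 0) & (rows < dsm.shape[0]) & (cols >= 0) & (cols < dsm.shape[1])
    return dsm[rows[inside], cols[inside]] - tls_xyz[inside, 2]

# Summary of the vertical deviations, including the NSSDA-style 95% value:
# dz = dsm_minus_tls(dsm, x0, y0, cell, tls_xyz)
# print(dz.mean(), dz.std(), 1.96 * np.sqrt(np.mean(dz ** 2)))
```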