Abstract

Percent vegetation cover is an important variable used in understanding ecosystem processes, vegetation health and productivity. Downward-looking images captured with a handheld camera have been demonstrated as a viable option for rapidly capturing in situ information to assess vegetation cover. This technique, however, is prone to perspective distortions that bias cover estimates towards taller vegetation elements. In this paper we present a new approach to generating imagery for vegetation cover estimation, using multiple overlapping photographs and structure-from-motion algorithms to produce a 3D point cloud representation of the target plot. This point cloud is then converted into an orthoimage consisting of four bands (red, green, blue and vegetation height) which is free from perspective distortions. The approach is trialled in two eucalypt forests in south-eastern Australia to estimate the change in cover of all vegetation elements following a prescribed burn. Orthoimages are generated at 2.5 mm resolution and classified into broad vegetation and fuel classes using object-based image analysis and random forests. With this approach an overall classification accuracy of 81% is achieved, and the resulting cover estimates agree with visual point-based interpretation to within 6% across all classes.
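The core of the pipeline described above, rasterising a coloured structure-from-motion point cloud into a four-band (red, green, blue, height) orthoimage, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, the (x, y, z, r, g, b) point layout, and the choice of keeping the highest point per grid cell are all assumptions made for the sketch.

```python
import numpy as np

def point_cloud_to_orthoimage(points, resolution=0.0025):
    """Rasterise a coloured point cloud into a 4-band orthoimage.

    points: (N, 6) array of x, y, z, r, g, b (metres and 0-255 colours).
    resolution: ground cell size in metres (2.5 mm, as in the abstract).
    Returns a (rows, cols, 4) array holding red, green, blue and height
    per cell. Keeping only the highest point in each cell gives a
    top-down orthographic view free of perspective distortion.
    """
    xy = points[:, :2]
    origin = xy.min(axis=0)
    # Map each point's x, y coordinate to an integer grid cell.
    cells = np.floor((xy - origin) / resolution).astype(int)
    n_cols, n_rows = cells.max(axis=0) + 1
    ortho = np.zeros((n_rows, n_cols, 4))
    best_z = np.full((n_rows, n_cols), -np.inf)
    for (c, r), (x, y, z, red, grn, blu) in zip(cells, points):
        if z > best_z[r, c]:          # highest point wins the cell
            best_z[r, c] = z
            ortho[r, c] = (red, grn, blu, z)
    return ortho
```

In practice the height band would be referenced to a ground surface model rather than raw z, and empty cells would be filled by interpolation, but the cell-wise highest-point reduction shown here is what removes the perspective bias towards taller vegetation.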
