Abstract
Ground cover and surface vegetation information are key inputs to wildfire propagation models and important indicators of ecosystem health. These variables are often approximated through visual estimation by trained professionals, but the results are prone to bias and error. This study analyzed the viability of using nadir (downward-facing) photographs from smartphones (iPhone 7) to provide quantitative estimates of ground cover and biomass loading. Good correlations were found between field-measured values and pixel counts from manually segmented photographs delineating a pre-defined set of 10 discrete cover types. Although promising, segmenting photographs manually was labor intensive and therefore costly. We therefore explored the viability of using a trained deep convolutional neural network (DCNN) to perform the image segmentation automatically. The DCNN segmented nadir images with 95% accuracy when compared with the manually delineated photographs. To validate the flexibility and robustness of the automated segmentation, we applied it to an independent dataset of nadir photographs captured at a different study site with surface vegetation characteristics similar to those of the training site, with promising results.
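To illustrate the pixel-counting step described above, the following minimal Python sketch derives per-class cover fractions from a segmented label mask. The mask format, class names, and function names are assumptions for illustration only and do not reflect the study's actual implementation or cover-type list.

```python
# Illustrative sketch (not the authors' code): deriving percent ground cover
# per class from a segmented nadir photo. Assumes the segmentation is stored
# as an integer label mask with one value per cover type (0-9); the class
# names below are placeholders, not the study's actual 10 cover types.
import numpy as np

COVER_CLASSES = {  # hypothetical labels for 10 discrete cover types
    0: "bare soil", 1: "litter", 2: "grass", 3: "forb", 4: "shrub",
    5: "moss", 6: "rock", 7: "woody debris", 8: "standing dead", 9: "other",
}

def cover_fractions(label_mask: np.ndarray) -> dict:
    """Return the fraction of image pixels assigned to each cover class."""
    total = label_mask.size
    counts = np.bincount(label_mask.ravel(), minlength=len(COVER_CLASSES))
    return {name: counts[idx] / total for idx, name in COVER_CLASSES.items()}

# Example: a random 512x512 mask stands in for a segmented smartphone photo.
mask = np.random.randint(0, 10, size=(512, 512))
for name, frac in cover_fractions(mask).items():
    print(f"{name:>14s}: {100 * frac:5.1f}% cover")
```

The same per-pixel bookkeeping applies whether the mask comes from manual delineation or from a DCNN prediction, which is why automating the segmentation removes the labor-intensive step without changing how cover estimates are computed.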