Abstract

Background
Deep learning algorithms for automated plant identification need large quantities of precisely labelled images in order to produce reliable classification results. Here, we explore which perspectives, and which combinations of perspectives, contain the most characteristic information and therefore allow for the highest identification accuracy.

Results
We developed an image-capturing scheme to create observations of flowering plants. Each observation comprises five in-situ images of the same individual taken from predefined perspectives (entire plant, flower frontal and lateral view, leaf top and back side view). We collected a completely balanced dataset comprising 100 observations for each of 101 species, with an emphasis on groups of conspecific and visually similar species, including twelve Poaceae species. We used this dataset to train convolutional neural networks and determined the prediction accuracy for each single perspective and their combinations via score-level fusion. Top-1 accuracies ranged between 77% (entire plant) and 97% (fusion of all perspectives) when averaged across species. The flower frontal view achieved the highest single-perspective accuracy (88%). Fusing the flower frontal, flower lateral and leaf top views yields the most reasonable compromise between acquisition effort and accuracy (96%). The perspective achieving the highest accuracy was species dependent.

Conclusions
We argue that image databases of herbaceous plants would benefit from multi-organ observations, comprising at least the frontal and lateral perspectives of flowers and the leaf top view.
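The combination of perspectives referred to in the abstract is score-level fusion: each perspective-specific CNN outputs a softmax score vector over the 101 species, and these vectors are combined before the top-1 prediction is taken. The sketch below illustrates one common variant of this idea, averaging the per-perspective scores; the function name, the example values and the choice of the mean as the fusion rule are illustrative assumptions and are not taken from the authors' implementation.

```python
# Minimal sketch of score-level fusion across image perspectives.
# Assumptions (illustrative, not the paper's code):
# - one trained CNN per perspective, each returning a softmax score
#   vector over the same ordered list of species,
# - fusion is performed by averaging the per-perspective score vectors.
import numpy as np

def fuse_scores(perspective_scores):
    """Average softmax score vectors from several perspectives.

    perspective_scores: list of 1-D arrays of length n_species,
    e.g. [scores_flower_frontal, scores_flower_lateral, scores_leaf_top].
    Returns the fused score vector and the index of the top-1 species.
    """
    scores = np.stack(perspective_scores, axis=0)  # (n_perspectives, n_species)
    fused = scores.mean(axis=0)                    # score-level fusion
    return fused, int(np.argmax(fused))

# Hypothetical example with 4 species and 3 perspectives:
flower_frontal = np.array([0.70, 0.15, 0.10, 0.05])
flower_lateral = np.array([0.40, 0.35, 0.15, 0.10])
leaf_top       = np.array([0.30, 0.20, 0.45, 0.05])
fused, top1 = fuse_scores([flower_frontal, flower_lateral, leaf_top])
print(fused, top1)  # fused scores and index of the predicted species
```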

Highlights

  • Deep learning algorithms for automated plant identification need large quantities of precisely labelled images in order to produce reliable classification results

  • Classification accuracy for the single perspectives ranges between 77.4% (entire plant) and 88.2% (flower frontal view)

  • We found that combining multiple image perspectives depicting the same plant increases the reliability of identifying its species



Introduction

Deep learning algorithms for automated plant identification need large quantities of precisely labelled images in order to produce reliable classification results. Plants form the basis for many terrestrial food webs, and changes in plant composition are known to cascade up through the entire community [3, 4], affecting multiple ecosystem functions [5]. Monitoring and managing the presence or abundance of plant species is a key requirement of conservation biology and sustainable development, but depends on expert knowledge for species identification. Well-trained automated plant identification systems are considered comparable to human experts at labelling plants on images, given the limited amount of information present in two-dimensional images [11].

Methods
Results
Discussion
Conclusion
