Abstract

The recent developments in artificial intelligence have the potential to facilitate new research methods in ecology. In particular, Deep Convolutional Neural Networks (DCNNs) have been shown to outperform other approaches in automatic image analyses. Here we apply a DCNN to facilitate quantitative wood anatomical (QWA) analyses, where the main challenges reside in the detection of a high number of cells, in the intrinsic variability of wood anatomical features, and in the sample quality. To properly classify and interpret features within the images, DCNNs need to undergo a training stage. We performed the training with images from transversal wood anatomical sections, together with manually created optimal outputs of the target cell areas. The target species included examples of the most common wood anatomical structures: four conifer species; a diffuse-porous species, black alder (Alnus glutinosa L.); a diffuse to semi-diffuse-porous species, European beech (Fagus sylvatica L.); and a ring-porous species, sessile oak (Quercus petraea Liebl.). The DCNN was created in Python with PyTorch, and relies on a Mask-RCNN architecture. The developed algorithm detects and segments cells, and provides information on the measurement accuracy. To evaluate the performance of this tool we compared our Mask-RCNN outputs with U-Net, a model architecture employed in a similar study, and with ROXAS, a program based on traditional image analysis techniques. First, we evaluated how many target cells were correctly recognized. Next, we assessed the cell measurement accuracy by evaluating the number of pixels that were correctly assigned to each target cell. Overall, the "learning process" defining artificial intelligence plays a key role in overcoming the issues that are usually solved manually in QWA analyses. Mask-RCNN proved best at identifying the features that characterize a target cell when such issues occur.
In general, U-Net did not match the other algorithms' performance, while ROXAS performed best for conifers, and Mask-RCNN showed the highest accuracy in detecting target cells and segmenting lumen areas of angiosperms. Our research demonstrates that future software tools for QWA analyses would greatly benefit from using DCNNs, saving time during the analysis phase and providing a flexible approach that allows model retraining.
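The two evaluation criteria described above, cell detection and pixel-level segmentation accuracy, can be illustrated with a minimal sketch. The function names and the binary-mask representation below are assumptions for illustration only, not the code used in this study:

```python
def iou(pred, target):
    """Intersection-over-union of two binary masks (lists of 0/1 rows):
    the fraction of pixels correctly assigned relative to the combined area."""
    inter = sum(p & t for pr, tr in zip(pred, target) for p, t in zip(pr, tr))
    union = sum(p | t for pr, tr in zip(pred, target) for p, t in zip(pr, tr))
    return inter / union if union else 0.0

def detection_rate(pred_cells, target_cells, threshold=0.5):
    """Fraction of target cells matched by at least one predicted mask
    whose overlap (IoU) reaches the given threshold."""
    matched = sum(
        any(iou(p, t) >= threshold for p in pred_cells) for t in target_cells
    )
    return matched / len(target_cells)
```

For example, a predicted mask covering two of the three pixels of a target cell, with no spurious pixels, yields an IoU of 2/3 and counts as a detection at the 0.5 threshold.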

Highlights

  • In recent years, deep learning, as a subset of artificial intelligence, has proven to be a key new tool to investigate ecological research questions (Christin et al., 2019)

  • Conifers were the group with the best results across all algorithms (0.99 for U-Net; 0.97 for Mask-RCNN and ROXAS), while vessel segmentation for alder, beech, and oak yielded very similar results for the two deep learning algorithms (0.96, 0.91, and 0.93 for Mask-RCNN, and 0.96, 0.95, and 0.92 for U-Net, respectively)

  • Since the U-Net architecture proved ineffective at filtering target cells from non-target cells, we focused on the comparison between ROXAS and Mask-RCNN outputs (Supplementary Table 3)


Introduction

Deep learning, as a subset of artificial intelligence, has proven to be a key new tool to investigate ecological research questions (Christin et al., 2019). The step forward lies in the particular algorithm architecture, which de-structures data features through different evaluation layers. This allows the machine to automatically change internal parameters and fit the computational process to the required task (Zhang et al., 2016). Ecological investigations are enhanced by the flexibility of deep learning tools, especially when dealing with large and complex datasets (Christin et al., 2019). This is the case for image analysis tasks, where Deep Convolutional Neural Networks (DCNNs) stand out for their performance (Krizhevsky et al., 2017). The different layers are composed of artificial neurons (Zhang et al., 2016), and each layer has a specific task, such as feature extraction, mathematical computation-based training, or dimensional adjustment, that makes DCNNs suitable for image interpretation (James and Bradshaw, 2020).
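The layered feature extraction mentioned above can be sketched in miniature: a convolutional layer slides a small kernel over the image and produces a feature map that responds to local patterns such as edges. The following pure-Python sketch is a simplified, hypothetical illustration, not the architecture used in this study:

```python
def conv2d(image, kernel):
    """Valid 2D convolution (no padding): slide the kernel over the image
    and compute the sum of element-wise products at each position."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(image), len(image[0])
    out = []
    for i in range(h - kh + 1):
        row = []
        for j in range(w - kw + 1):
            row.append(sum(
                image[i + di][j + dj] * kernel[di][dj]
                for di in range(kh) for dj in range(kw)
            ))
        out.append(row)
    return out

# A vertical-edge kernel responds strongly where intensity changes
# from left to right, e.g. at a cell-wall/lumen boundary.
edge_kernel = [[1, -1], [1, -1]]
```

In a DCNN, many such kernels are learned automatically during training rather than hand-designed, and their outputs are stacked through successive layers, which is precisely the "learning process" that lets the network adapt its internal parameters to the task.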

