Abstract

Deep learning methods have become one of the fundamental building blocks of high-throughput phenotyping using RGB imagery. In this study, we go beyond applying existing deep learning algorithms: we improve deep learning models using a multi-view fusion approach that dynamically merges information from two deep learning models. We evaluate this approach on the estimation of total dry matter yield, leaf dry matter yield, and total green matter yield of plots of Guineagrass, an important tropical forage species. The proposed approach, named Deep4Fusion, is a fusion network that can be configured to use two different deep learning models. The experimental results indicate that our approach improved performance by 20% to 33% compared with standard models reported in previous works, with a statistically significant improvement (p-value < 0.05) for leaf dry matter and total dry matter yield. We believe that the flexibility of multi-view fusion in merging the predictions of several CNN models through shared layers has the potential to improve the results of many other single-view deep learning approaches.
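To make the multi-view fusion idea concrete, the following is a minimal, hypothetical sketch (not the authors' implementation): each of two backbone models is assumed to have already produced a feature vector per plot image, and a shared dense layer fuses the concatenated features into a single yield estimate. All names, dimensions, and weights here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def fusion_head(feat_a, feat_b, w, b):
    """Concatenate feature vectors from two backbone models and apply
    a shared dense layer to regress a single yield value per sample."""
    fused = np.concatenate([feat_a, feat_b], axis=-1)  # shape (n, d_a + d_b)
    return fused @ w + b                               # shape (n, 1)

# Toy features standing in for the outputs of two hypothetical CNN branches
feat_a = rng.standard_normal((4, 8))   # 4 plots, 8-dim features from model A
feat_b = rng.standard_normal((4, 8))   # 4 plots, 8-dim features from model B
w = rng.standard_normal((16, 1)) * 0.1 # shared fusion-layer weights
b = np.zeros(1)

pred = fusion_head(feat_a, feat_b, w, b)
print(pred.shape)  # (4, 1): one yield estimate per plot
```

In practice the fusion weights would be trained jointly with (or on top of) the two backbones, which is what allows the network to weigh each view dynamically rather than averaging predictions with fixed coefficients.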
