Abstract
Deep learning methods have become one of the fundamental building blocks of high-throughput phenotyping from RGB imagery. In this study, we go beyond applying off-the-shelf deep learning algorithms; we improve deep learning models using a multi-view fusion approach that dynamically merges information from two deep learning models. We evaluate this approach for estimating the total dry matter yield, leaf dry matter yield, and total green matter yield of plots of Guineagrass, an important tropical forage species. The proposed approach, named Deep4Fusion, can be configured to use two different deep learning models. The experimental results indicate that our approach improves performance by 20% to 33% compared with standard models reported in previous works, with a significant improvement (p-value < 0.05) for leaf dry matter and total dry matter yield. We believe that the flexibility of multi-view fusion in merging the predictions of several CNN models through shared layers has the potential to improve many other single-view deep learning approaches.
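The abstract describes merging two single-view deep learning models through shared fusion layers that regress yield values. A minimal NumPy sketch of that general idea follows; all layer sizes, weights, and function names here are illustrative assumptions standing in for trained CNN backbones, not the authors' Deep4Fusion implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def backbone_features(images, dim=128):
    # Stand-in for a CNN backbone that maps a batch of RGB images
    # to one feature vector per image; here just a random ReLU projection.
    flat = images.reshape(images.shape[0], -1)
    w = rng.normal(scale=0.01, size=(flat.shape[1], dim))
    return np.maximum(flat @ w, 0.0)

def fuse_and_predict(feat_a, feat_b, hidden=64):
    # Shared fusion layers: concatenate the two models' features and
    # pass them through dense layers to regress one yield value per plot.
    x = np.concatenate([feat_a, feat_b], axis=1)
    w1 = rng.normal(scale=0.01, size=(x.shape[1], hidden))
    w2 = rng.normal(scale=0.01, size=(hidden, 1))
    return np.maximum(x @ w1, 0.0) @ w2

batch = rng.normal(size=(4, 32, 32, 3))   # 4 synthetic RGB plot images
view_a = backbone_features(batch)         # features from model A
view_b = backbone_features(batch)         # features from model B
pred = fuse_and_predict(view_a, view_b)   # shape (4, 1): one estimate per plot
```

In a real fusion network the backbone weights and fusion layers would be learned jointly, so the shared layers can weight each model's contribution per sample rather than using fixed random projections as above.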