Abstract

<abstract> High-throughput plant phenotyping systems capable of producing large numbers of images have been constructed in recent years. Before statistical analysis of plant traits is possible, these images must be processed into trait measurements. This paper considers the extraction of plant trait data from soybean images taken at the University of Nebraska-Lincoln Greenhouse Innovation Center. Convolutional neural networks (CNNs) are trained to predict measurements such as the height, width, and size of the plants using transfer learning: the convolutional layers of the pretrained VGG16 model, together with their parameters, are incorporated into our model. We demonstrate that, by exploiting transfer learning, our CNNs extract trait measurements from the images efficiently and accurately while requiring a relatively small amount of training data. This approach to plant trait extraction is new to the field of plant phenomics, and we demonstrate the superiority of our CNN-based trait extraction approach over an image segmentation-based approach. </abstract>
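The transfer-learning setup described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' exact architecture: the dense-layer sizes, the input resolution, and the three-output regression head (height, width, size) are assumptions; the essential idea, reusing VGG16's pretrained convolutional layers and parameters as a frozen feature extractor, follows the abstract.

```python
# Hypothetical sketch of transfer learning with VGG16 for plant trait regression.
# Layer sizes and the 3-output head are illustrative assumptions, not the paper's model.
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models

# Pretrained convolutional base; its parameters are reused as-is (transfer learning).
base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze the pretrained convolutional parameters

# Small trainable head that maps conv features to trait measurements.
model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dense(3),  # regression outputs: height, width, size
])
model.compile(optimizer="adam", loss="mse")
```

Because only the small head is trained while the convolutional base stays fixed, such a model can be fit with relatively little labeled data, which matches the efficiency claim in the abstract.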

