Abstract
Visible traits can be criteria for selecting a suitable crop. Three-dimensional (3D)-scanned plant models can be used to extract visible traits; however, collecting scanned data and physically manipulating the point-cloud structures of the scanned models are difficult. Recently, deep generative models have shown high performance in learning and creating target data, and they can improve the versatility of scanned models. The objectives of this study were to generate sweet pepper (Capsicum annuum) leaf models and to extract their traits by using deep generative models. The leaves were scanned, preprocessed and used to train the deep generative models. A variational autoencoder, a generative adversarial network (GAN) and a latent space GAN were used to generate the desired leaves. The optimal number of latent variables in each model was selected via the Jensen–Shannon divergence (JSD). The generated leaves were evaluated using the JSD, coverage and minimum matching distance to determine the best model for leaf generation. Among the deep generative models, a modified GAN showed the highest performance. Sweet pepper leaves with various shapes were generated from eight latent variables following a normal distribution, and the morphological traits of the leaves were controlled through linear interpolation and simple arithmetic operations in the latent space. Deep generative models can parameterize and generate morphological traits in digitized 3D plant models and add realism and diversity to plant phenotyping studies.
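The latent-space operations mentioned above (linear interpolation between leaves and simple arithmetic on trait directions) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the 8-dimensional latent vectors and the `decode` step of a trained generator are assumed, and the vectors here are random placeholders.

```python
import numpy as np

# Hypothetical sketch of latent-space manipulation for leaf generation.
# A trained decoder/generator mapping latent vectors to leaf point clouds
# is assumed but not shown; only the latent arithmetic is illustrated.
rng = np.random.default_rng(0)
z_a = rng.standard_normal(8)  # latent vector of leaf A (8 variables, per the abstract)
z_b = rng.standard_normal(8)  # latent vector of leaf B

# Linear interpolation: gradual morphing from leaf A to leaf B
steps = [(1 - t) * z_a + t * z_b for t in np.linspace(0.0, 1.0, 5)]

# Simple arithmetic: shift leaf A along a trait direction
# (here the A-to-B direction serves as an assumed trait axis)
trait_direction = z_b - z_a
z_modified = z_a + 0.5 * trait_direction  # halfway along the trait axis
```

Each intermediate vector in `steps` (or `z_modified`) would then be passed through the trained decoder to produce a 3D leaf with gradually changing morphology.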