Abstract

In current plant phenotyping research, the study of plant time-series images with deep learning has received widespread attention. While such image data is relatively easy to obtain, the cost of annotation is high, and contrastive learning offers an efficient route to cost-effective training. However, plant growth is slow, so image sequences change little over time and carry simple semantic information; previous contrastive pre-training models therefore struggled to distinguish positive samples (different augmented views of the same image) from similar negative samples drawn from different images. This paper proposes a self-supervised contrastive learning method for plant time-series images with Priori Distance Embedding (PDE). Because the semantic content of images differs across phenological stages, the method encodes this domain knowledge as prior distances between image pairs and uses them during contrastive pre-training; the learned weights can then be transferred to downstream tasks. Building on this method, experiments were conducted on cherry time-series images, using a plant phenotyping semantic segmentation task to assess the quality of the pre-training. To provide a comprehensive example of phenotypic analysis of plant time-series images, the paper also establishes a temporal model of cherry growth, comprising PDE pre-training, anomaly detection, semantic segmentation, and recording of results along the temporal dimension. The experiments indicate that this self-supervised contrastive learning method can be effectively applied to pre-training on plant time-series images and is broadly applicable to computer vision studies in plant phenotyping.
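To make the idea of prior distances concrete, the sketch below shows one hypothetical way such a prior could be folded into an InfoNCE-style contrastive objective: pairs of images from distant phenological stages are assigned a larger prior distance, which lowers their target similarity. The function names (`prior_distance`, `pde_contrastive_loss`), the stage-gap prior, and the margin-style subtraction are illustrative assumptions, not the paper's actual formulation.

```python
import torch
import torch.nn.functional as F

def prior_distance(stages_a, stages_b, num_stages=5):
    # Hypothetical prior: normalized gap between phenological stage indices.
    # Pairs from distant stages receive a larger prior distance.
    return (stages_a - stages_b).abs().float() / (num_stages - 1)

def pde_contrastive_loss(z_a, z_b, stages_a, stages_b, temperature=0.1):
    """Illustrative InfoNCE-style loss in which each pair's similarity is
    shifted by its prior distance, so that cross-stage pairs are pushed
    further apart than same-stage pairs (a sketch, not the paper's method)."""
    z_a = F.normalize(z_a, dim=1)
    z_b = F.normalize(z_b, dim=1)
    sim = z_a @ z_b.t() / temperature                  # pairwise cosine similarities
    prior = prior_distance(stages_a.unsqueeze(1),      # prior-distance matrix
                           stages_b.unsqueeze(0))
    logits = sim - prior / temperature                 # penalize cross-stage pairs
    targets = torch.arange(z_a.size(0))                # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

# Toy usage: a batch of 4 embeddings from two augmented views of the same images.
z_a, z_b = torch.randn(4, 128), torch.randn(4, 128)
stages = torch.tensor([0, 1, 3, 4])                    # phenological stage per image
loss = pde_contrastive_loss(z_a, z_b, stages, stages)
print(loss.item())
```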
