Abstract

Shadows, a natural phenomenon resulting from the occlusion of light, play a pivotal role in agriculture, particularly by affecting processes such as photosynthesis in plants. Despite the availability of generic shadow datasets, many suffer from annotation errors and, aside from satellite or drone imagery, lack detailed representations of agricultural shadows in scenes that may include human activity. In this paper, we present an evaluation of AgroSegNet, a synthetically generated top-down shadow segmentation dataset characterized by photorealistic rendering and accurate shadow masks. We aim to determine its efficacy compared to real-world datasets and to assess how factors such as annotation quality and image domain influence the training of neural network models. To establish baselines, we trained several standard segmentation architectures and subsequently explored transfer learning using various freely available shadow datasets. We further evaluated out-of-domain performance relative to models trained on other shadow datasets. Our findings suggest that AgroSegNet yields competitive performance and is effective for transfer learning, particularly in domains similar to agriculture.
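To illustrate the kind of transfer-learning setup discussed above, the following is a minimal sketch, not the paper's exact pipeline, of fine-tuning a pretrained segmentation model on a synthetic top-down shadow dataset. The dataset layout ("agrosegnet/train"), the choice of DeepLabV3 with a ResNet-50 backbone, and all hyperparameters are illustrative assumptions.

```python
# Minimal sketch: fine-tuning a pretrained segmentation model on a synthetic
# top-down shadow dataset. Dataset layout, model choice, and hyperparameters
# are illustrative assumptions, not the configuration used in the paper.
import torch
from torch import nn
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms
from torchvision.models.segmentation import deeplabv3_resnet50
from PIL import Image
from pathlib import Path


class ShadowDataset(Dataset):
    """Pairs of top-down RGB images and binary shadow masks (hypothetical layout)."""

    def __init__(self, root):
        self.images = sorted(Path(root, "images").glob("*.png"))
        self.masks = sorted(Path(root, "masks").glob("*.png"))
        self.to_tensor = transforms.ToTensor()

    def __len__(self):
        return len(self.images)

    def __getitem__(self, i):
        img = self.to_tensor(Image.open(self.images[i]).convert("RGB"))
        mask = self.to_tensor(Image.open(self.masks[i]).convert("L"))  # 1 = shadow
        return img, mask


def main():
    device = "cuda" if torch.cuda.is_available() else "cpu"

    # Start from COCO-pretrained weights and replace the final classifier layer
    # with a single-channel shadow/no-shadow head.
    model = deeplabv3_resnet50(weights="DEFAULT")
    model.classifier[4] = nn.Conv2d(256, 1, kernel_size=1)
    model.to(device)

    loader = DataLoader(ShadowDataset("agrosegnet/train"), batch_size=4, shuffle=True)
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
    criterion = nn.BCEWithLogitsLoss()

    model.train()
    for epoch in range(10):
        for images, masks in loader:
            images, masks = images.to(device), masks.to(device)
            logits = model(images)["out"]
            loss = criterion(logits, masks)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()


if __name__ == "__main__":
    main()
```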
