The accurate and fast assessment of plant diseases and fruits is important for sustainable and productive agriculture. However, such assessments are typically performed manually, which is time-consuming, resource-intensive, and error-prone. A few artificial intelligence (AI)-based methods have been introduced to automate this process, but they struggle to attain high performance under challenging imaging conditions such as occlusions, poor lighting, noise, indistinct boundaries, and excessive morphological variation. Moreover, current approaches also fail to provide computationally efficient solutions. To overcome these issues, two novel architectures are developed to segment plant diseases and fruits with higher segmentation performance and lower computational requirements. The efficient feature fusion segmentation network (EFFS-Net) is the base network, and the multi-scale dilated feature fusion segmentation network (MDFS-Net) is the final network of this study. EFFS-Net uses an identity skip path-based feature fusion mechanism with an efficient grouped convolutional depth (EGCD) to provide satisfactory segmentation performance with high computational efficiency. MDFS-Net uses a multi-scale feature fusion mechanism and fuses low-level information in the EGCD and other sections of the network to learn detailed input features, delivering promising performance even under challenging imaging conditions. MDFS-Net also applies multiple receptive fields to the low-level information in the effective receptive field processing (ERFP) block, which is fused near the pixel classification stage for further performance gains. Both networks are evaluated on the Brazilian Arabica coffee leaf (BRACOL) image dataset, the Australian Center for Field Robotics orchard fruit (apple) dataset, and a necrotized cassava root cross-section image dataset. The proposed method delivers promising segmentation performance, achieving dice similarity coefficients of 88.81%, 95.01%, and 86.04%, corresponding to improvements of 3.5%, 2.6%, and 1.07% on the three datasets, respectively, while requiring approximately ten times fewer trainable parameters than state-of-the-art methods.
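The abstract does not detail the internal design of the ERFP block, but the idea of applying multiple receptive fields to low-level features and fusing them before pixel classification can be illustrated with a minimal PyTorch sketch. The class name `ERFPBlockSketch`, the dilation rates, and the fusion-by-concatenation step are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class ERFPBlockSketch(nn.Module):
    """Illustrative sketch (not the paper's implementation): parallel dilated
    convolutions give multiple receptive fields over low-level features, and
    their outputs are fused before the pixel classification stage."""

    def __init__(self, in_channels: int, out_channels: int, dilations=(1, 2, 4)):
        super().__init__()
        # One branch per receptive field; padding == dilation keeps spatial size.
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_channels, out_channels, kernel_size=3,
                          padding=d, dilation=d, bias=False),
                nn.BatchNorm2d(out_channels),
                nn.ReLU(inplace=True),
            )
            for d in dilations
        ])
        # 1x1 convolution fuses the concatenated multi-scale responses.
        self.fuse = nn.Conv2d(out_channels * len(dilations), out_channels, kernel_size=1)

    def forward(self, low_level_features: torch.Tensor) -> torch.Tensor:
        multi_scale = torch.cat([b(low_level_features) for b in self.branches], dim=1)
        return self.fuse(multi_scale)


if __name__ == "__main__":
    x = torch.randn(1, 64, 128, 128)   # assumed low-level feature map
    block = ERFPBlockSketch(64, 64)
    print(block(x).shape)              # torch.Size([1, 64, 128, 128])
```

In such a design, the dilated branches enlarge the receptive field without adding trainable parameters beyond those of ordinary 3x3 convolutions, which is consistent with the paper's stated emphasis on computational efficiency.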