Abstract

Scene understanding and visual contextual awareness are among the state-of-the-art applications of computer vision. Despite numerous detection- and classification-based studies, the literature lacks semantic segmentation methods for a more comprehensive and precise understanding of soil-included scenes, largely because of the scarcity of annotated datasets; the information extracted from an understood scene is valuable for project fleet management, claims management, equipment productivity analysis, safety, and soil classification. Hence, this study presents a vision-based approach that uses semantic segmentation to understand soil-included scenes and classify soils into categories according to ASTM D2488. An annotated dataset of various soil types containing 3043 images was developed to train four DeepLab v3+ variants with modified decoders. Five-fold cross-validation indicates the remarkable performance of the best variant, with a mean Jaccard index of 0.9. The application and effects of subpixel upsampling and exit-flow CRF were also examined.
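The mean Jaccard index (intersection over union, IoU) cited above is a standard segmentation metric: for each class, the intersection of predicted and ground-truth pixels is divided by their union, and the per-class scores are averaged. The sketch below is a minimal, generic illustration of that computation on integer label masks; the class count and example masks are hypothetical and not taken from the study's dataset.

```python
import numpy as np

def mean_jaccard_index(pred: np.ndarray, target: np.ndarray, num_classes: int) -> float:
    """Mean Jaccard index (mean IoU) over all classes present in either mask.

    pred, target: integer label masks of identical shape.
    num_classes: total number of segmentation classes (assumed value).
    """
    ious = []
    for c in range(num_classes):
        pred_c = pred == c
        target_c = target == c
        union = np.logical_or(pred_c, target_c).sum()
        if union == 0:  # class absent from both masks; skip it
            continue
        intersection = np.logical_and(pred_c, target_c).sum()
        ious.append(intersection / union)
    return float(np.mean(ious))

# Toy example with 3 classes and 4x4 masks (illustrative only)
pred = np.array([[0, 0, 1, 1],
                 [0, 0, 1, 1],
                 [2, 2, 2, 2],
                 [2, 2, 2, 2]])
target = np.array([[0, 0, 1, 1],
                   [0, 1, 1, 1],
                   [2, 2, 2, 2],
                   [2, 2, 0, 2]])
print(mean_jaccard_index(pred, target, num_classes=3))
```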
