Abstract

This study applies an improved You Only Look Once (YOLO) V5s model to the assessment of regional poverty through target detection in remote sensing images. The model was improved in terms of its structure, algorithm, and components. Objects detected in the remote sensing images were used as indicators of poverty, and the progress of poverty alleviation could be predicted from the detection results. In the detection stage, on the Common Objects in Context (COCO) data set, the model's Precision, Recall, mean Average Precision (mAP)@0.5, and mAP@0.5:0.95 increased by 7.3%, 0.7%, 1%, and 7.2%, respectively; in the verification stage, on a custom remote sensing image data set, the four values increased by 3.1%, 2.2%, 1.3%, and 5.7%, respectively. The loss values decreased by 2.6% and 37.4% on the two data sets, respectively. Hence, the improved model detected targets more accurately, and it outperformed the models reported in comparable papers. Manual poverty assessment can be replaced by remote sensing image processing, which is inexpensive, efficient, accurate, and objective, does not require field survey data, and achieves an equivalent evaluation effect. The proposed model is therefore a promising approach to the assessment of regional poverty.
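For context on the metrics reported above, the sketch below illustrates how precision and recall are computed from predicted and ground-truth boxes at a given IoU threshold, and how the 0.5:0.95 range of thresholds is averaged. The box coordinates are hypothetical, the matching is a simplified greedy, single-class scheme, and this is not the paper's evaluation code (COCO-style mAP additionally integrates precision over the recall curve per class).

```python
import numpy as np

def iou(box_a, box_b):
    """Intersection-over-Union of two boxes in (x1, y1, x2, y2) format."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def precision_recall(pred_boxes, gt_boxes, iou_thr):
    """Greedy one-to-one matching of predictions (sorted by confidence) to ground truths."""
    matched, tp = set(), 0
    for pb in pred_boxes:
        best_iou, best_j = 0.0, -1
        for j, gb in enumerate(gt_boxes):
            if j in matched:
                continue
            v = iou(pb, gb)
            if v > best_iou:
                best_iou, best_j = v, j
        if best_iou >= iou_thr:
            tp += 1
            matched.add(best_j)
    fp = len(pred_boxes) - tp          # unmatched predictions
    fn = len(gt_boxes) - tp            # missed ground truths
    return tp / (tp + fp + 1e-9), tp / (tp + fn + 1e-9)

# Hypothetical detections and labels for a single image, boxes as (x1, y1, x2, y2).
preds = [(10, 10, 50, 50), (60, 60, 100, 100), (200, 200, 240, 240)]
gts   = [(12, 11, 52, 49), (61, 58, 102, 101)]

p50, r50 = precision_recall(preds, gts, iou_thr=0.5)
print(f"Precision@0.5 = {p50:.2f}, Recall@0.5 = {r50:.2f}")

# mAP@0.5:0.95 averages performance over IoU thresholds 0.50, 0.55, ..., 0.95;
# here the simplified precision is averaged over those thresholds for illustration only.
thresholds = np.arange(0.5, 1.0, 0.05)
avg_p = np.mean([precision_recall(preds, gts, t)[0] for t in thresholds])
print(f"Mean precision over IoU 0.5:0.95 = {avg_p:.2f}")
```

In this toy example, two of the three predictions overlap a ground-truth box with IoU above 0.5, giving a precision of about 0.67 and a recall of 1.0; raising the IoU threshold toward 0.95 makes the matching stricter, which is why mAP@0.5:0.95 is the harder of the two reported metrics.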
