Abstract
This paper addresses the challenges of model evaluation and optimization that arise from the domain difference between the source and target domains during model deployment. Current methods for evaluating model accuracy require a fully annotated test set, but obtaining additional human labels for every unique application scenario can be costly and time-intensive. To tackle this problem, this paper proposes an instance segmentation model evaluation method based on domain differences, which can estimate the model's prediction accuracy on unlabeled test sets. Moreover, to enhance deployment accuracy cost-effectively, this paper proposes an "effective operation"-based labeling cost computation method and a weighted uncertainty sample selection method. The former accurately computes labeling costs for instance segmentation, while the latter selects the most valuable samples from the target domain for labeling and training. Model evaluation experiments demonstrate that this method's root mean square error (RMSE) on Cityscapes is approximately 4% lower than that of existing model evaluation methods. Model optimization experiments demonstrate that the proposed method achieves higher model accuracy than comparison methods under four distinct data partitioning protocols. The code is available at https://github.com/licongguan/Lamer.
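The abstract does not spell out how the weighted uncertainty sample selection is computed, so the following is only a minimal sketch under assumed choices: per-instance binary mask entropy as the uncertainty signal and detection confidence as the per-instance weight. The function names (weighted_uncertainty_score, select_samples) and both design choices are illustrative, not the paper's exact method.

```python
import numpy as np

def weighted_uncertainty_score(instance_probs, instance_weights):
    """Score one image by a weighted average of per-instance mask entropies.

    instance_probs:   list of (H, W) arrays of predicted foreground
                      probabilities, one array per predicted instance.
    instance_weights: per-instance weights (e.g., detection confidence);
                      an assumed choice, not necessarily the paper's.
    """
    eps = 1e-8
    entropies = []
    for p in instance_probs:
        p = np.clip(p, eps, 1.0 - eps)
        # Binary entropy of each pixel's mask probability, averaged over pixels.
        h = -(p * np.log(p) + (1.0 - p) * np.log(1.0 - p)).mean()
        entropies.append(h)
    entropies = np.asarray(entropies)
    weights = np.asarray(instance_weights, dtype=float)
    # Weighted average: uncertain instances count more when they carry more weight.
    return float((weights * entropies).sum() / (weights.sum() + eps))

def select_samples(image_scores, budget):
    """Return indices of the `budget` highest-scoring (most uncertain) images."""
    order = np.argsort(image_scores)[::-1]
    return order[:budget].tolist()
```

Under this sketch, each unlabeled target-domain image gets one scalar score, and the images with the highest scores are sent for annotation until the labeling budget is exhausted.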