Abstract

This paper addresses the challenges of model evaluation and optimization that arise from differences between the source and target domains when a model is deployed. Existing methods for evaluating model accuracy require a fully annotated test set, but obtaining additional human labels for every unique application scenario is costly and time-intensive. To tackle this problem, this paper proposes an instance segmentation model evaluation method based on domain differences, which estimates the model's prediction accuracy on unlabeled test sets. Moreover, to enhance deployment accuracy cost-effectively, this paper proposes an “effective operation”-based labeling cost computation method and a weighted uncertainty sample selection method: the former accurately computes labeling costs for instance segmentation, while the latter selects the most valuable samples from the target domain for labeling and training. Model evaluation experiments demonstrate that the proposed method's root mean square error (RMSE) on Cityscapes is approximately 4% lower than that of existing model evaluation methods. Model optimization experiments demonstrate that the proposed method achieves higher accuracy than competing methods under four distinct data partitioning protocols. The code is available at https://github.com/licongguan/Lamer.
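To make the sample selection step concrete, the sketch below shows one plausible form of weighted uncertainty sampling: score each unlabeled target-domain image by a per-class-weighted mean of prediction entropy, then pick the highest-scoring images for annotation. This is a minimal illustration, not the paper's actual formulation; the entropy-based score, the per-class weighting scheme, and all function names (`weighted_uncertainty`, `select_for_labeling`) are assumptions for exposition. The authors' implementation is in the linked repository.

```python
import numpy as np

def weighted_uncertainty(probs, weights):
    """Per-image uncertainty score (illustrative).

    probs:   (C, H, W) softmax output of the segmentation model
    weights: (C,) per-class weights (hypothetical choice, e.g.
             inverse class frequency measured on the source domain)
    """
    eps = 1e-12
    # Pixel-wise entropy contribution of each class, shape (C, H, W).
    entropy = -(probs * np.log(probs + eps))
    # Average over the spatial dimensions, shape (C,).
    per_class = entropy.mean(axis=(1, 2))
    # Collapse to a scalar score with the per-class weights.
    return float((weights * per_class).sum())

def select_for_labeling(unlabeled, model, weights, budget):
    """Rank unlabeled target-domain images by weighted uncertainty
    and return the indices of the `budget` most uncertain ones."""
    scores = [(weighted_uncertainty(model(x), weights), i)
              for i, x in enumerate(unlabeled)]
    scores.sort(reverse=True)  # most uncertain first
    return [i for _, i in scores[:budget]]
```

Under this reading, the weights let rare or hard classes dominate the score, so the labeling budget is spent on images the model is least certain about where it matters most.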
