Abstract

Digital image correlation (DIC) is a widely used technique for non-contact deformation measurement. Traditional DIC methods, however, struggle to balance computational efficiency against the number of seed points. Deep learning approaches, particularly supervised ones, have shown promise in improving DIC efficiency, but they require high-quality training data whose ground-truth annotations are time-consuming to generate. To address these challenges, we propose an unsupervised convolutional neural network (CNN) based DIC method for 2D displacement measurement. Our approach uses an encoder-decoder architecture with multi-level feature extraction, a dual-path correlation block, and an attention block to extract informative features from speckle images with varying characteristics. A speckle image warp model transforms the deformed speckle image into a predicted reference speckle image according to the predicted 2D displacement map; unsupervised training is achieved by comparing the predicted and original reference speckle images. To optimize the network's parameters, we employ a composite loss function that combines the Mean Squared Error (MSE) and the Pearson correlation coefficient. Because the network is trained without labels, our method eliminates the extensive training-data annotation required by supervised DIC methods. We conducted several experiments to demonstrate the validity and robustness of the proposed method. The results show a significant reduction in Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE) compared with the method of Zhao et al., indicating that our unsupervised CNN-based DIC approach can achieve accuracy comparable to supervised CNN-based DIC methods.
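To illustrate the warp step described above, the following is a minimal NumPy/SciPy sketch of backward-warping a deformed speckle image with a dense 2D displacement field to predict the reference image. The function name and the use of `scipy.ndimage.map_coordinates` are illustrative assumptions; the paper's released PyTorch code presumably performs the equivalent bilinear sampling (e.g. via `grid_sample`).

```python
import numpy as np
from scipy.ndimage import map_coordinates


def warp_deformed_to_reference(deformed, u, v):
    """Backward-warp a deformed speckle image with a displacement field.

    deformed : (H, W) grayscale image.
    u, v     : (H, W) displacement components in x and y (pixels).

    In DIC, g(x + u(x), y + v(y)) ~= f(x, y), so sampling the deformed
    image at the displaced coordinates predicts the reference image.
    Illustrative sketch only; not the paper's actual implementation.
    """
    h, w = deformed.shape
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    # Sample locations in the deformed image (bilinear interpolation, order=1)
    coords = np.stack([yy + v, xx + u])
    return map_coordinates(deformed, coords, order=1, mode="nearest")
```

With a perfect displacement estimate, the warped deformed image matches the reference away from the image boundary, which is exactly the photometric consistency the unsupervised loss exploits.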
For implementation and evaluation, we provide PyTorch code and datasets, which will be released at the following URL: https://github.com/fead1/DICNet-corr-unsupervised-learning-.
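The composite loss mentioned in the abstract can be sketched as follows. This is a minimal NumPy illustration, assuming an equal-weight combination of MSE and a `1 - Pearson` term via a hypothetical `alpha` parameter; the exact weighting used in the paper is given by its released PyTorch code, not here.

```python
import numpy as np


def composite_loss(pred_ref, ref, alpha=0.5):
    """Composite loss between predicted and original reference images.

    Combines Mean Squared Error with a Pearson-correlation penalty:
    identical images give MSE = 0 and correlation = 1, hence loss = 0.
    `alpha` is an assumed weighting, not a value from the paper.
    """
    p = pred_ref.ravel().astype(float)
    r = ref.ravel().astype(float)
    mse = np.mean((p - r) ** 2)
    # Pearson correlation coefficient between the two images
    pc = np.corrcoef(p, r)[0, 1]
    return alpha * mse + (1.0 - alpha) * (1.0 - pc)
```

Using both terms is complementary: MSE penalizes pointwise intensity error, while the correlation term rewards structural agreement of the speckle pattern even under small global intensity shifts.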
