Abstract

End-to-end roughness measurement can be realized by letting a deep network extract grinding surface features automatically. However, the texture of a ground surface is random and its features are weak, the features extracted from the same surface differ under different lighting conditions, and recognition and measurement accuracy drops when the training data and test data are acquired under inconsistent illumination. To address these problems, this paper proposes an adversarial domain adaptation (NMDANN) based visual measurement method for grinding surface roughness under variable illumination. An improved residual network serves as the generator to extract more effective transferable features, and multi-head attention is introduced into the domain discriminator to strengthen its domain adaptation capability. Experimental results show that the method achieves an average recognition precision of 96.9112% for different roughness grades of the grinding surface under changing illumination, which is 40.1360% higher than the ordinary classification model ResNet50 and 10.1626% higher than the DANN model with transfer capability. This lays the foundation for online visual measurement of grinding surface roughness in variable lighting environments.
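
As a rough illustration of the architecture described above, the sketch below shows a DANN-style model in PyTorch: a ResNet50 backbone as the feature extractor (generator), a label predictor for roughness grades, and a domain discriminator preceded by gradient reversal and multi-head attention. The class name `NMDANNSketch`, the layer sizes, the number of roughness grades, and the attention configuration are illustrative assumptions, not the paper's exact NMDANN implementation.

```python
# Minimal sketch of a DANN-style model with a multi-head-attention domain
# discriminator, assuming PyTorch and a ResNet50 backbone. All hyperparameters
# here are assumptions for illustration, not the paper's reported settings.
import torch
import torch.nn as nn
from torchvision.models import resnet50


class GradientReversal(torch.autograd.Function):
    """Identity on the forward pass; reverses (and scales) gradients backward."""
    @staticmethod
    def forward(ctx, x, lambda_):
        ctx.lambda_ = lambda_
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambda_ * grad_output, None


class NMDANNSketch(nn.Module):
    def __init__(self, num_classes=5, feat_dim=2048, num_heads=8):
        super().__init__()
        backbone = resnet50(weights=None)
        # Feature extractor ("generator"): ResNet50 without its final FC layer.
        self.features = nn.Sequential(*list(backbone.children())[:-1])
        # Label predictor for the roughness grades.
        self.classifier = nn.Linear(feat_dim, num_classes)
        # Domain discriminator with multi-head attention over the feature vector.
        self.attn = nn.MultiheadAttention(feat_dim, num_heads, batch_first=True)
        self.domain_head = nn.Sequential(
            nn.Linear(feat_dim, 256), nn.ReLU(), nn.Linear(256, 2)
        )

    def forward(self, x, lambda_=1.0):
        f = self.features(x).flatten(1)            # (B, feat_dim)
        class_logits = self.classifier(f)
        # Gradient reversal makes the extractor fight the domain discriminator,
        # pushing it toward illumination-invariant (transferable) features.
        rev = GradientReversal.apply(f, lambda_)
        attn_out, _ = self.attn(rev.unsqueeze(1), rev.unsqueeze(1), rev.unsqueeze(1))
        domain_logits = self.domain_head(attn_out.squeeze(1))
        return class_logits, domain_logits


# Usage: classification loss on labeled source images, domain loss on
# source + target images drawn from different lighting environments.
model = NMDANNSketch()
images = torch.randn(4, 3, 224, 224)
class_logits, domain_logits = model(images, lambda_=0.5)
```

In this kind of setup, the classification head is trained only on the source-domain (one lighting condition) labels, while the domain discriminator sees images from both lighting conditions; the reversed gradient drives the shared feature extractor toward features that the discriminator cannot tell apart.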
