Abstract

Interpretation of data acquired from guided-wave-based measurements often relies on machine learning. However, building effective machine learning models generally requires a large amount of data, which in the case of guided waves is costly and time-consuming to acquire. This limitation significantly restricts the applicability of many advanced machine learning algorithms, most notably deep learning. The problem of data scarcity has been partially addressed in the field of computer vision through generative adversarial networks, which generate synthetic data samples matching the real data distribution. Beyond images, generative adversarial networks have also been applied to synthesize audio data, with recent advances going as far as successfully synthesizing human speech. These developments suggest that they may also be applicable to generating guided-wave data, as the problem is in many ways fundamentally similar to that posed by audio waves. This work explores the capabilities of generative adversarial networks for guided-wave signal synthesis. The database used was acquired in a series of pitch-catch experiments with various sensor locations, and is significantly extended both in terms of sensor locations and of the data available from each sensor pair. Finally, the synthesized data are evaluated by qualitative signal comparison.

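To make the core idea concrete, the sketch below shows a minimal generative adversarial network for fixed-length waveform snippets, written in PyTorch. The signal length, latent dimension, network architecture, and training hyperparameters are illustrative assumptions only and do not reflect the configuration used in the work described above.

```python
# Minimal 1D GAN sketch for synthesizing fixed-length waveform snippets.
# All sizes and the architecture are hypothetical, chosen for illustration.
import torch
import torch.nn as nn

SIGNAL_LEN = 1024   # assumed number of samples per guided-wave snippet
LATENT_DIM = 64     # assumed latent-vector size

class Generator(nn.Module):
    """Maps a random latent vector to a synthetic waveform snippet."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM, 256), nn.ReLU(),
            nn.Linear(256, 512), nn.ReLU(),
            nn.Linear(512, SIGNAL_LEN), nn.Tanh(),  # signals assumed scaled to [-1, 1]
        )

    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    """Scores a snippet as real (measured) or fake (generated)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(SIGNAL_LEN, 512), nn.LeakyReLU(0.2),
            nn.Linear(512, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1),  # raw logit; BCEWithLogitsLoss applies the sigmoid
        )

    def forward(self, x):
        return self.net(x)

def train_step(G, D, real_batch, opt_g, opt_d, loss_fn):
    """One adversarial update: D learns to separate real from fake, G learns to fool D."""
    batch = real_batch.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # Discriminator update.
    z = torch.randn(batch, LATENT_DIM)
    fake = G(z).detach()
    d_loss = loss_fn(D(real_batch), real_labels) + loss_fn(D(fake), fake_labels)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update.
    z = torch.randn(batch, LATENT_DIM)
    g_loss = loss_fn(D(G(z)), real_labels)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

# Example usage with a random placeholder batch standing in for measured signals.
if __name__ == "__main__":
    G, D = Generator(), Discriminator()
    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
    loss_fn = nn.BCEWithLogitsLoss()
    real_batch = torch.randn(32, SIGNAL_LEN).clamp(-1, 1)  # placeholder data
    print(train_step(G, D, real_batch, opt_g, opt_d, loss_fn))
```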