Abstract

Because all of their information comes from the training data, machine learning algorithms typically employ no external knowledge or prior experience during learning. Machine learning methods have been rigorously tested against novel varieties of highly technical "black box" and "white box" adversarial attacks, through which adversaries can manipulate systems toward harmful ends. In waveform applications, for instance, secure beamforming is difficult when authorized users and eavesdroppers are geographically close, leading to erroneous beam patterns and, consequently, severe beam leakage. A prospective black-box attack is therefore likely to begin from the waveform features of the learning signal. To address these difficulties, this work proposes the Waveforms Eavesdropping Prevention Framework (WEPF), which strengthens machine-learning security by introducing a non-orthogonality concept into the physical-layer signal waveform. The implementation scenario is the classification of Electrical Penetration Graph (EPG) waveforms for insects, a crucial tool for studying the feeding behavior of piercing-sucking insects and the transmission mechanism between viruses and insects. The proposed framework was tested with a six-dimensional attribute vector consisting of the low-frequency wavelet energy (LFWE) of the second and third decomposition layers of the Wavelet Kernel Extreme Learning Machine, the fractal box dimension (FBD), the Hurst exponent (HE), and the spectral centroid (SC) of the first two layers of the Hilbert-Huang transform (HHT). Two adversarial scenarios were explored; in both, the proposed architecture secured all waveform signals, demonstrating the method's effectiveness in lowering the risk of eavesdropping on or tampering with the waveforms used in advanced machine-learning methods.
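To make the feature description concrete, the sketch below assembles the six-dimensional vector [LFWE2, LFWE3, FBD, HE, SC(IMF1), SC(IMF2)] for a single EPG trace. It is a minimal illustration under stated assumptions, not the authors' implementation: the db4 wavelet, the rescaled-range (R/S) Hurst estimator, the box-counting scheme, the 100 Hz sampling rate, and the use of the PyWavelets and PyEMD packages are all choices made here for illustration.

```python
# Illustrative sketch of the six-dimensional EPG feature vector described
# in the abstract. Wavelet choice, estimators, and sampling rate are
# assumptions, not the paper's reference implementation.
import numpy as np
import pywt                 # PyWavelets: wavelet decomposition
from PyEMD import EMD       # PyEMD: empirical mode decomposition (HHT step)

def lfwe(signal, level, wavelet="db4"):
    """Low-frequency wavelet energy: energy of the approximation
    coefficients at the given decomposition level (assumed definition)."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return float(np.sum(coeffs[0] ** 2))

def fractal_box_dimension(signal, n_scales=8):
    """Box-counting estimate of the fractal dimension of the signal curve."""
    x = (signal - signal.min()) / (signal.max() - signal.min() + 1e-12)
    sizes = 2 ** np.arange(1, n_scales + 1)
    counts = []
    for s in sizes:
        # Partition the time axis into s segments and count, per segment,
        # how many boxes of height 1/s the curve passes through.
        segments = np.array_split(x, s)
        box_h = 1.0 / s
        counts.append(sum(int(np.ceil((seg.max() - seg.min()) / box_h)) + 1
                          for seg in segments if seg.size))
    # Box count N(s) scales as s**D; the log-log slope estimates D.
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return float(slope)

def hurst_exponent(signal):
    """Rescaled-range (R/S) estimate of the Hurst exponent."""
    n = len(signal)
    window_sizes, rs_means = [], []
    for size in (n // k for k in (1, 2, 4, 8, 16) if n // k >= 8):
        rs_vals = []
        for start in range(0, n - size + 1, size):
            seg = signal[start:start + size]
            dev = np.cumsum(seg - seg.mean())   # cumulative deviation
            s = seg.std()
            if s > 0:
                rs_vals.append((dev.max() - dev.min()) / s)
        if rs_vals:
            window_sizes.append(size)
            rs_means.append(np.mean(rs_vals))
    # R/S grows as size**H; the log-log slope estimates H.
    slope, _ = np.polyfit(np.log(window_sizes), np.log(rs_means), 1)
    return float(slope)

def spectral_centroid(signal, fs):
    """Amplitude-weighted mean frequency of the signal spectrum."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return float(np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12))

def epg_feature_vector(signal, fs=100.0):
    """Assemble [LFWE2, LFWE3, FBD, HE, SC(IMF1), SC(IMF2)]."""
    imfs = EMD()(signal)    # intrinsic mode functions (first HHT stage)
    sc1 = spectral_centroid(imfs[0], fs)
    sc2 = spectral_centroid(imfs[1], fs) if len(imfs) > 1 else 0.0
    return np.array([lfwe(signal, level=2), lfwe(signal, level=3),
                     fractal_box_dimension(signal), hurst_exponent(signal),
                     sc1, sc2])

if __name__ == "__main__":
    fs = 100.0                              # assumed EPG sampling rate (Hz)
    t = np.arange(0, 10, 1 / fs)
    sig = np.sin(2 * np.pi * 5 * t) + 0.3 * np.random.randn(t.size)
    print(epg_feature_vector(sig, fs))      # synthetic stand-in for a trace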
