Abstract

We propose a membership inference attack against the neural-network classification algorithm from the draft national standard developed by Omsk State Technical University under the auspices of the Technical Committee on Standardization «Artificial Intelligence» (TC 164). The attack determines whether given data were used for neural network training, and is aimed at violating the confidentiality of the training set. The results show that the protection mechanism for neural-network classifiers described in the draft national standard does not provide the declared properties. The results were previously announced at the Ruscrypto’2023 conference.
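The abstract does not specify the attack's internals, but the general idea behind membership inference can be illustrated with a minimal, hypothetical sketch: models tend to be more confident on samples they were trained on, so a simple threshold on the top predicted probability already separates members from non-members better than chance. Everything below (the synthetic confidence data, the threshold value, the function names) is an illustrative assumption, not the method from the paper.

```python
import numpy as np


def softmax(z):
    """Row-wise softmax over a logits matrix."""
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)


def infer_membership(probabilities, threshold=0.8):
    """Confidence-threshold membership inference (illustrative):
    flag a sample as a training-set member when the model's top
    predicted probability exceeds `threshold`."""
    probabilities = np.asarray(probabilities)
    return probabilities.max(axis=1) > threshold


rng = np.random.default_rng(0)

# Hypothetical synthetic outputs of a 10-class classifier:
# "members" get sharper (more peaked) softmax distributions than
# "non-members", mimicking the overfitting gap the attack exploits.
members = softmax(rng.normal(0.0, 1.0, (500, 10)) * 5.0)
non_members = softmax(rng.normal(0.0, 1.0, (500, 10)) * 1.0)

tpr = infer_membership(members).mean()          # members correctly flagged
fpr = infer_membership(non_members).mean()      # non-members wrongly flagged
accuracy = (tpr + (1.0 - fpr)) / 2.0            # balanced attack accuracy
```

An attack accuracy noticeably above 0.5 on this synthetic data indicates that the model's confidence leaks membership information; real attacks refine this idea with shadow models or per-class thresholds.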
