Abstract

Growing AI technologies pose a threat to system safety and security because of their opacity and uncertainty. This study examines the Convolutional Neural Network (CNN), a prevailing deep learning model, and exposes its inherent weaknesses through a simple case study of a Keras-based CNN for handwriting recognition. The study reveals that CNN algorithms do not adapt well to changes. Adding new cases to the training data may improve accuracy, but not to the level achieved before the change. Synthetic training data may improve accuracy only superficially, because the distribution of the generated data closely mirrors that of the original data. Prevailing ML models such as Generative Adversarial Networks (GANs) have limitations of their own, such as similarity-addiction and mode collapse, and without domain expertise they can be toxic to safety engineering. The study proposes four test strategies: 1) AI systems should be tested by third parties, not by the developers; 2) test datasets should be categorically different from training datasets — test data should not be part of the training data and should be collected from independent sources to increase the "diversity" of data modality; 3) avoid fake or simulated data; and 4) do not collect data merely because it is conveniently available, but actively collect data on disastrous events, unexpected cases, or worst-case scenarios that may break the model. The study also introduces a multidimensional checklist for AI safety analysis, covering sensors, data and environments, default and recovery modes, system architectures, and human-system interaction.
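The abstract's second test strategy — evaluating a model on data from an independent source rather than on a held-out slice of the training distribution — can be illustrated with a minimal sketch. The example below is hypothetical and not from the paper: it trains a simple nearest-centroid classifier on one two-class Gaussian distribution, then evaluates it both on held-out data from the same distribution and on data from a shifted distribution standing in for an "independent source." The `shift` parameter and all function names are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, shift=0.0):
    # Two classes drawn from 2-D Gaussians; `shift` moves both class means,
    # mimicking data collected from a different (independent) source.
    x0 = rng.normal(loc=0.0 + shift, scale=1.0, size=(n, 2))
    x1 = rng.normal(loc=3.0 + shift, scale=1.0, size=(n, 2))
    X = np.vstack([x0, x1])
    y = np.array([0] * n + [1] * n)
    return X, y

def fit_centroids(X, y):
    # "Training": store the mean of each class.
    return np.array([X[y == c].mean(axis=0) for c in (0, 1)])

def accuracy(centroids, X, y):
    # Predict the class of the nearest centroid and score against labels.
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return float((d.argmin(axis=1) == y).mean())

X_train, y_train = make_data(500)
centroids = fit_centroids(X_train, y_train)

# Held-out test data from the SAME distribution vs. a SHIFTED source.
X_same, y_same = make_data(500)
X_shift, y_shift = make_data(500, shift=2.0)

acc_same = accuracy(centroids, X_same, y_same)
acc_shift = accuracy(centroids, X_shift, y_shift)
```

On the same distribution the classifier scores near-perfectly, while on the shifted source its accuracy drops sharply — a toy version of the paper's point that in-distribution test accuracy can badly overstate real-world robustness.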
