Abstract

Artificial intelligence (AI) operates only within the domain of passive cognition, so its operating process is not transparent and the technology depends heavily on its learning data. Because raw data for AI learning are processed and inspected manually to assure the high quality required for sophisticated learning, human errors are inevitable; damaged or incomplete data, and deviations from the original data, can cause unexpected outputs from AI models trained on the processed data. In this context, this research examines, from a cybersecurity perspective, cases where AI learning data were inaccurate, and establishes the need for learning-data management before machine learning through an analysis of cyberattack techniques. We then propose the direction of establishing a data-preserving AI system: a blockchain-based learning-data environment model for verifying the integrity of learning data. The data-preserving AI learning environment model is expected to prevent the cyberattacks and data deterioration that can occur when data are provided and utilized over an open network for the collection and processing of raw data.

Highlights

  • With machine learning and deep learning technologies, artificial intelligence (AI) has been developed at a fast pace to the extent that it can be used commercially, and it has been leading innovation in various fields, including the medical, finance, robot, and culture sectors

  • Kim (2019) made blocks directly collect data in parallel in the blockchain structure, compared the data collected by each block with the data of other blocks to sort out high-quality data only, and established a learning dataset with the data selected through comparison [7]

  • AI learning data productivity improvement system and method, based on labeled data management using blockchain

Summary

Introduction

With machine learning and deep learning technologies, artificial intelligence (AI) has developed rapidly to the point of commercial use and has been driving innovation in various fields, including the medical, finance, robotics, and culture sectors. Deep learning, however, does not guarantee transparency of its operating process because of the black-box nature of artificial neural networks. Solving this reliability issue requires both policy and technology: it is necessary to develop the AI system itself to minimize errors, while adopting a structure in which malicious attacks can be defended against. In this context, to improve the reliability of AI, this research examines the need to manage learning data before machine learning by analyzing, from a cybersecurity perspective, cases where inaccurate AI learning data were used as well as cyberattack methods. We then propose the direction of establishing a data-preserving AI system, a blockchain-based learning-data environment model for verifying the integrity of learning data.
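The integrity verification the paper proposes can be illustrated with a minimal hash chain over learning-data records. This is only a sketch of the general blockchain idea, not the authors' actual system; the `DataChain` class, `block_hash` function, and the record fields are all illustrative assumptions. Each block stores a record together with the previous block's hash, so any tampering with an earlier record invalidates every subsequent hash.

```python
import hashlib
import json


def block_hash(record: dict, prev_hash: str) -> str:
    """Hash a learning-data record together with the previous block's hash."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()


class DataChain:
    """Minimal hash chain over learning-data records (illustrative only)."""

    GENESIS = "0" * 64  # placeholder hash for the first block's predecessor

    def __init__(self):
        # Each entry is (record, prev_hash, own_hash).
        self.blocks = []

    def append(self, record: dict) -> None:
        prev = self.blocks[-1][2] if self.blocks else self.GENESIS
        self.blocks.append((record, prev, block_hash(record, prev)))

    def verify(self) -> bool:
        """Recompute every hash; a tampered record breaks the chain."""
        prev = self.GENESIS
        for record, stored_prev, stored_hash in self.blocks:
            if stored_prev != prev or block_hash(record, prev) != stored_hash:
                return False
            prev = stored_hash
        return True
```

For example, appending labeled records `{"image": "img001.png", "label": "cat"}` and then calling `verify()` returns `True`; silently relabeling an earlier record makes the recomputed hash disagree with the stored one, so `verify()` returns `False`.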

AI Cyberthreats
Trends in Related Research
Method
Requirements for AI Learning Data
Attack Method
Comparative Analysis with the Existing Research
AI Learning Environment Module Case Study
Findings
Conclusions
