Abstract

The development of big data analysis and deep learning technologies driven by the Fourth Industrial Revolution has expanded the scope in which artificial intelligence is applied to our lives. Beyond the stage at which conventional artificial intelligence merely assisted human decision-making or action, it has now reached a stage where it analyzes vast amounts of data to identify problems on its own, generate optimal answers, and even examine its own errors. A representative example is the ChatGPT service. Recently, however, copyright infringement disputes have arisen in this regard. For example, in January 2023, Getty Images filed a copyright infringement lawsuit against Stability AI, the developer of 'Stable Diffusion', and in April 2023, Twitter CEO Elon Musk announced his intention to file a copyright infringement lawsuit against Microsoft (MS). Against this background, this paper first examines issues arising from data use: because ChatGPT does not disclose how its training data are collected, it is difficult to confirm whether information for which crawling and scraping are prohibited has been collected; there are elements of the ChatGPT service that are difficult to regard as fair use; the scope of the text and data mining (TDM) exemption differs from country to country; and a TDM exemption has not yet been introduced in Korea. The paper then proposes ways to determine and guard against copyright infringement by ChatGPT outputs. First, regarding authorship and the attribution of rights, the contributions of artificial intelligence and of humans to a creation should be distinguished and indicated by stage (by year), and the attribution of rights should differ accordingly. Second, regarding the judgment of substantial similarity, the criteria for functional works may be referred to, but because there is a limit to determining copyright infringement of AI outputs by human perception alone, plagiarism-verification or infringement-prevention programs should be developed and applied. Third, regarding the term of copyright protection, it is appropriate to set the protection period for AI outputs at five years, while taking a dual approach so as not to reduce the incentive for human creation. Fourth, regarding civil and criminal liability, referring to the provisions of the Product Liability Act and the Civil Act, it is proposed that AI developers and users bear joint and several liability in principle, but that liability be apportioned differentially depending on whether there was a breach of the duty of care or negligence. Based on the issues examined in this paper, rational and systematic ways to regulate generative artificial intelligence services should be actively sought.
