Abstract

The core of the current discussion is how artificial intelligence (AI), as a rapidly developing technology, can address the most difficult problems facing humanity. The true value of AI as a cutting-edge technology lies in solving severe social problems such as gun violence, food shortages, incurable diseases, and global warming caused by carbon emissions, and thereby contributing to human development. The concern, however, is that no one knows exactly how current AI actually works. It is akin to building a bomb without a safety device, with no way of knowing when it will explode.
AI is already causing harm by providing false information. If we do not know how to control more capable AI when it appears, dangerous consequences may follow. International norms on AI will therefore be needed, and these can be established through international treaties. The European Union has already enacted the AI Act, the first comprehensive regulatory law on AI, which imposes stronger regulations on the AI industry than exist in the United States or Asia. In particular, the Act stipulates an obligation to label AI-generated content. The EU's AI Act can thus be evaluated as legislation designed to ensure the safety of AI in the same way that product safety law ensures the safety of products such as cars and toys. In addition, in September 2022, the European Union (EU) issued a directive on AI liability, which focuses on the legal principles governing compensation for damage caused by AI.
Legislation on punitive damages for deepfake harms is required. In civil liability, a victim's damages should be recognized not only as mental (non-pecuniary) damages but also, where monetary losses exist, as aggravated damages. As such a legislative measure, so-called punitive damages should be introduced against perpetrators who use AI maliciously and deliberately. Under the so-called risk liability of AI, the operator of an AI system, who earns substantial profits through the operation of the risk source, should bear risk liability as a form of no-fault liability. Korea's Product Liability Act has embraced this principle of risk liability. For product liability law to apply to AI, there must be a defect in the AI installed in the product.
