As AI exerts a significant impact on daily life, the economy, and industry, various problems have emerged. Among them, special attention should be paid to the problem of unreasonable discrimination. If big data continues to expand, AI technology develops further, and AI comes to be used in all areas of life, discrimination may become a very serious social problem. Because there is a high possibility that unreasonable discrimination arising from the use of AI will surface, legal countermeasures must be carefully considered. This paper therefore examines legal measures to prevent and respond to unfair discrimination caused by the use of AI. We briefly summarize the meaning and types of discrimination; examine the causes of discrimination arising from the use of AI, the risks of such discrimination, and the necessity and difficulty of legal regulation in this area; and, based on this discussion, present legal measures to prevent and regulate discrimination.
There is a high risk that AI will not only reflect the biases inherent in our society but also reinforce and perpetuate them. At the same time, the opacity and complexity peculiar to AI make it difficult to recognize and correct discrimination and to resolve disputes. Accordingly, we must recognize the seriousness of the risk that AI will further entrench prejudice and discrimination in our society, and prepare normative responses suited to the characteristics of that risk.
First of all, standards of equality and fairness to be reflected in legal regulation must be established. Reasonable conclusions must be reached on the meaning and content of discrimination, the values of non-discrimination and fairness to be observed, the standards for evaluating the legitimacy of discriminatory treatment, and appropriate fairness indicators. Next, to prevent unreasonable discrimination, it is important to conduct impact assessments, inspect the risks of AI systems in advance, and ensure transparency. In addition, the entity responsible for unreasonable discrimination occurring in various contexts must be clearly identified, and procedural rules for dispute resolution must be clarified. As a baseline, the user who adopts an AI system should be held accountable, and a person responsible for preventing bias should be appointed. Beyond this, the person being evaluated must be notified that AI is being used, and the algorithm's input and output data must be disclosed afterwards. The expansion of public data disclosure and the relaxation or shifting of the burden of proof for unreasonable discrimination should also be considered.