Objective: to identify the legal problems arising from the use of artificial intelligence in hiring and the main directions for solving them.

Methods: formal-legal analysis, comparative-legal analysis, legal forecasting, legal modeling, synthesis, induction, and deduction.

Results: a number of legal problems arising from the use of artificial intelligence in hiring were identified, among them: protection of the applicant's personal data obtained with the use of artificial intelligence; discrimination and unjustified refusal to hire due to the bias of artificial intelligence algorithms; and legal responsibility for decisions made by a generative algorithm during hiring. The author argues that the optimal solution to these problems requires drawing on the best practices of foreign countries, primarily those that have adopted special laws regulating the use of artificial intelligence in hiring and have developed guidelines for employers using generative algorithms for such purposes. The legislative work of the European Union and the USA on managing the risks arising from the use of artificial intelligence should also be taken into account.

Scientific novelty: the article contains a comprehensive study of the legal problems arising from the use of artificial intelligence in hiring and of foreign experience in solving these problems, which allowed the author to develop recommendations for improving Russian legislation in this area. To address the problem of protecting applicants' personal data when artificial intelligence is used for hiring, the author proposes supplementing labor legislation with norms enshrining requirements for transparency and consistency in the collection, processing, and storage of information by generative algorithms. The list and scope of personal data allowed for collection should be set out in a special state standard. The solution to the problem of discrimination caused by biased algorithms is seen in mandatory certification and annual monitoring of artificial intelligence hiring software, as well as in prohibiting scoring tools for evaluating applicants. The author adheres to the position that artificial intelligence cannot "decide the fate" of a job seeker: responsibility for decisions made by the algorithm rests solely with the employer, including in cases where third parties are engaged to select employees.

Practical significance: the obtained results can be used to accelerate the development and adoption of legal norms, rules, tools, and standards governing the use of artificial intelligence in hiring. The lack of adequate legal regulation in this area creates significant risks both for human rights and for the development of industries that use generative algorithms to hire employees.