Employer as an AI System Operator and Tortious Liability for Damage Caused by AI Systems: European and US Perspectives

Abstract

The article examines whether the standard of protection for parties injured by artificial intelligence (AI) systems used by professional operators is higher in the European Union (EU) than in the USA—that is, whether the liability model of an operator, as applicable in the EU, ensures that injured parties have effective protection. For the purposes of this article, the term ‘effective protection’ should be understood as referring to a liability model that guarantees injured parties a real possibility of obtaining due compensation without imposing a burden of proof that a claimant cannot satisfy. The EU legislature has proposed amendments in the area of non-contractual civil liability of AI system users, a category which also includes professional operators of such systems. An employer using an AI system for professional or commercial purposes and deriving profits from such use qualifies, in a legal sense, as a professional operator of such a system. It is therefore worth considering whether the EU, compared with the USA, is really in the vanguard when it comes to drafting legislation tailored to the challenges of AI technology, and whether the currently proposed legislative amendments ensure effective legal protection for persons injured by AI systems used by a professional operator.

Similar Papers
  • Discussion
  • Citations: 11
  • 10.1016/s2589-7500(22)00094-2
Artificial intelligence to complement rather than replace radiologists in breast screening
  • Jun 21, 2022
  • The Lancet Digital Health
  • Sian Taylor-Phillips + 1 more

  • Research Article
  • Citations: 40
  • 10.2139/ssrn.2957722
Generating Rembrandt: Artificial Intelligence, Accountability and Copyright - The Human-Like Workers Are Already Here - A New Model
  • Apr 25, 2017
  • SSRN Electronic Journal
  • Shlomit Yanisky-Ravid + 1 more

Artificial intelligence (AI) systems are creative, unpredictable, independent, autonomous, rational, evolving, capable of data collection, communicative, efficient, accurate, and have free choice among alternatives. Similar to humans, AI systems can autonomously create and generate creative works. The use of AI systems in the production of works, either for personal or manufacturing purposes, has become common in the 3A era of automated, autonomous, and advanced technology. Despite this progress, there is a deep and common concern in modern society that AI technology will become uncontrollable. There is therefore a call for social and legal tools for controlling AI systems’ functions and outcomes. This Article addresses the questions of the copyrightability of artworks generated by AI systems: ownership and accountability. The Article debates who should enjoy the benefits of copyright protection and who should be responsible for the infringement of rights and damages caused by AI systems that independently produce creative works. Subsequently, this Article presents the AI "Multi-Player" paradigm, arguing against the imposition of these rights and responsibilities on the AI systems themselves or on the different stakeholders, mainly the programmers who develop such systems. Most importantly, this Article proposes the adoption of a new model of accountability for works generated by AI systems: the AI Work Made for Hire (WMFH) model, which views the AI system as a creative employee or independent contractor of the user. Under this proposed model, ownership, control, and responsibility would be imposed on the humans or legal entities that use AI systems and enjoy its benefits. This model accurately reflects the human-like features of AI systems; it is justified by the theories behind copyright protection; and it serves as a practical solution to assuage the fears behind AI systems. In addition, this model unveils the powers behind the operation of AI systems; hence, it efficiently imposes accountability on clearly identifiable persons or legal entities. Since AI systems are copyrightable algorithms, this Article reflects on the accountability for AI systems in other legal regimes, such as tort or criminal law and in various industries using these systems.

  • Research Article
  • Citations: 4
  • 10.46610/rtaia.2024.v03i01.001
Human-Computer Interaction Techniques for Explainable Artificial Intelligence Systems
  • Mar 26, 2024
  • Research & Review: Machine Learning and Cloud Computing
  • S Tharun Anand Reddy

As Artificial Intelligence (AI) systems become more common in our daily lives, the need for transparency in these systems is becoming increasingly important. Ensuring that humans clearly understand how AI systems work and can oversee their functioning is crucial. This is where the concept of Explainable AI (XAI) comes in to make AI systems more transparent and interpretable. However, developing adequate explanations for AI systems is still an open research problem. In this context, Human-Computer Interaction (HCI) is significant in designing interfaces for explainable AI. By integrating HCI principles, we can create systems humans understand and operate more efficiently. This article reviews the HCI techniques that can be used for explainable AI systems. The literature was explored with a focus on papers at the intersection of HCI and XAI. The essential methods identified include interactive visualizations, natural language explanations, conversational agents, mixed-initiative systems, and model introspection methods. Each of these techniques has unique advantages and can be used to provide explanations for different types of AI systems. While Explainable AI presents opportunities to improve system transparency, it also comes with risks, especially if the explanations are not designed carefully. There is a risk of oversimplification, leading to misunderstanding or mistrust of the AI system. It is essential to employ HCI principles and participatory design approaches to ensure that explanations are tailored for diverse users, contexts, and AI applications. By developing human-centred XAI systems, we can ensure that AI systems are transparent, interpretable, and trustworthy. This can be achieved through interdisciplinary collaboration between HCI and AI. The recommendations in this article provide a starting point for designing such systems. In essence, XAI presents a significant opportunity to improve the transparency of AI systems, but it requires careful design and implementation to be effective.
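
One of the techniques named above, model introspection, can be illustrated with a short sketch. The example below uses permutation feature importance on a public dataset as a stand-in; the dataset, model, and parameters are illustrative choices and are not drawn from the article.

```python
# Hedged sketch of one technique named above, model introspection, using
# permutation feature importance. The dataset, model and parameters are
# illustrative choices and are not drawn from the article.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)

# The ranked importances could feed an interactive visualization or a
# natural-language explanation, two of the other HCI techniques listed above.
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[i]}: mean importance drop {result.importances_mean[i]:.3f}")
```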

  • News Article
  • Citations: 20
  • 10.1016/s2589-7500(19)30011-1
Is the future of medical diagnosis in computer algorithms?
  • May 1, 2019
  • The Lancet Digital Health
  • Karl Gruber

  • Research Article
  • Citations: 26
  • 10.1365/s43439-023-00107-9
Blockchain for Artificial Intelligence (AI): enhancing compliance with the EU AI Act through distributed ledger technology. A cybersecurity perspective
  • Jan 25, 2024
  • International Cybersecurity Law Review
  • Simona Ramos + 1 more

The article aims to investigate the potential of blockchain technology in mitigating certain cybersecurity risks associated with artificial intelligence (AI) systems. Aligned with ongoing regulatory deliberations within the European Union (EU) and the escalating demand for more resilient cybersecurity measures within the realm of AI, our analysis focuses on specific requirements outlined in the proposed AI Act. We argue that by leveraging blockchain technology, AI systems can align with some of the requirements in the AI Act, specifically relating to data governance, record-keeping, transparency and access control. The study shows how blockchain can successfully address certain attack vectors related to AI systems, such as data poisoning in trained AI models and data sets. Likewise, the article explores how specific parameters can be incorporated to restrict access to critical AI systems, with private keys enforcing these conditions through tamper-proof infrastructure. Additionally, the article analyses how blockchain can facilitate independent audits and verification of AI system behaviour. Overall, this article sheds light on the potential of blockchain technology in fortifying high-risk AI systems against cyber risks, contributing to the advancement of secure and trustworthy AI deployments. By providing an interdisciplinary perspective of cybersecurity in the AI domain, we aim to bridge the gap that exists between legal and technical research, supporting policy makers in their regulatory decisions concerning AI cyber risk management.
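
The record-keeping and data-poisoning points above lend themselves to a small illustration. The following is a minimal sketch, not taken from the cited article, of a hash-chained audit log, a simplified stand-in for the blockchain mechanisms it discusses: every entry commits to the previous one, so a later edit to any recorded training-data event is detectable. All record names and fields are invented for the example.

```python
# Minimal sketch (not from the cited article): a hash-chained audit log, a
# simplified stand-in for the blockchain record-keeping discussed above. Editing
# any stored training-data event afterwards makes verification fail. All record
# names and fields are invented for this example.
import hashlib
import json


def record_hash(prev_hash: str, record: dict) -> str:
    """Hash a record together with the previous entry's hash, forming a chain."""
    payload = json.dumps({"prev": prev_hash, "record": record}, sort_keys=True)
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()


def append(log: list, record: dict) -> None:
    prev = log[-1]["hash"] if log else "genesis"
    log.append({"record": record, "hash": record_hash(prev, record)})


def verify(log: list) -> bool:
    """Recompute every hash along the chain; any tampering breaks the check."""
    prev = "genesis"
    for entry in log:
        if entry["hash"] != record_hash(prev, entry["record"]):
            return False
        prev = entry["hash"]
    return True


audit_log = []
append(audit_log, {"event": "dataset_registered", "digest": "sha256-of-training-set", "source": "hospital_A"})
append(audit_log, {"event": "model_trained", "dataset_ref": 0, "version": "1.0"})
assert verify(audit_log)

audit_log[0]["record"]["source"] = "hospital_B"   # simulated tampering with the record
assert not verify(audit_log)                      # the chain no longer verifies
```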

  • Research Article
  • Citations: 5
  • 10.17072/1995-4190-2022-58-683-708
Civil Law Liability in the Development and Application of Artificial Intelligence and Robotics Systems: Main Approaches
  • Jan 1, 2022
  • Вестник Пермского университета. Юридические науки
  • Yu S Kharitonova + 2 more

Introduction: when studying legal issues related to safety and adequacy in the application of artificial intelligence systems (AIS), it is impossible not to raise the subject of liability accompanying the use of AIS. In this paper we focus on the civil law aspects of liability for harm caused by artificial intelligence and robotic systems. Technological progress necessitates revision of many legislative mechanisms in such a way as to maintain and encourage further development of innovative industries while ensuring safety in the application of artificial intelligence. It is essential not only to respond to the challenges of the moment but also to look forward and develop new rules based on short-term forecasts. Contrary to earlier belief, there is no longer any reason to claim categorically that the rules governing the institution of legal liability will not require fundamental changes. This is due to the growing autonomy of AIS and the expansion of the range of their possible applications. Artificial intelligence is routinely employed in creative industries, decision-making in different fields of human activity, unmanned transportation, etc. However, major issues remain unresolved concerning the parties liable in the case of infliction of harm by AIS, the viability of applying no-fault liability mechanisms, and the appropriate levels of regulation of such relations; discussions over these issues are far from over. Purpose: based on an analysis of theoretical concepts and legislation in Russia and other countries, to develop a vision of civil law regulation and tort liability in cases where artificial intelligence is used. Methods: empirical methods of comparison, description, and interpretation; theoretical methods of formal and dialectical logic; special scientific methods: legal-dogmatic analysis and interpretation of legal norms. Results: there is considerable debate over the responsibilities of AIS owners and users. In many countries, codes of ethics for artificial intelligence have been adopted. However, what is required is legal regulation, for instance, treating an AIS as a source of increased danger; in the absence of relevant legal standards, it is reasonable to use a tort liability mechanism based on analogy of the law. Standardization in this area (standardization of databases, software, infrastructure, etc.) is also important for identifying the AIS developers and operators to be held accountable; violation of standardization requirements may also be a ground for holding them liable under civil law. New dimensions are being added to classic legal notions such as the subject of harm, the object of harm, and the party that has inflicted the harm, used with regard to both contractual and non-contractual liability. Conclusions: the research has shown that the legislation of different countries currently provides soft regulation with regard to liability for harm caused by AIS. However, it is time to move gradually from developing strategies to taking practical steps toward creating effective mechanisms aimed at minimizing the risk that harm is caused without any person being held liable. Since the process of developing AIS involves many participants with an independent legal status (data supplier, developer, manufacturer, programmer, designer, user), it is rather difficult to establish the liable party when something goes wrong, and many factors must be taken into account.
Regarding infliction of harm to third parties, it seems logical and reasonable to treat an AIS as a source of increased danger; and in the absence of relevant legal regulations, it would be reasonable to use a tort liability mechanism by analogy of the law. The model of contractual liability requires the development of common approaches to defining the product and the consequences of violation of the terms of the contract.

  • Research Article
  • Citations: 48
  • 10.1016/j.fertnstert.2020.10.040
Predictive modeling in reproductive medicine: Where will the future of artificial intelligence research take us?
  • Nov 1, 2020
  • Fertility and Sterility
  • Carol Lynn Curchoe + 18 more

  • Research Article
  • 10.51799/2763-8685v2n2013
Law and policies related to works generated by artificial intelligence in Brazil and the European Union
  • Jan 1, 2022
  • Latin American Journal of European Studies
  • Sofia Sulzbach

Investments in sophisticated technologies have enabled the creation of Artificial Intelligence (AI) systems capable of reproducing the behaviour of the human brain, with the ability to learn, decide and even create intellectual works. This technological reality drew the attention of the European Parliament and the Council of the European Union (EU), which began to press for the development of research on AI and Intellectual Property (IP) in order to identify the best legal solutions to the factual situation of works generated by non-human agents. Responding to this call from the Parliament, this monograph investigates the feasibility of protecting intellectual works generated by AI systems in Brazil and the EU, based on current legislation, jurisprudence, and doctrine. To this end, the monograph follows the hypothetical-deductive method, adopting a comparative approach, and is divided into three main parts. The first part presents the fundamental notions of AI and copyright. After this introduction, the second part assesses whether works generated by AI systems qualify as intellectual property subject to copyright protection, based on the case study of the painting The Next Rembrandt, identifying the issues related to the attribution of rights to the human and non-human agents involved in the creation process. Finally, the third part examines the legislative proposals and governmental solutions suggested in Brazil and in the EU on the matter. Based on the acknowledgement of the insufficiency of traditional provisions to protect works generated by AI systems, it is concluded that the European Intellectual Property Office's proposal for the elaboration of a sui generis system seems to be the most adequate solution to protect works generated by AI in Brazil and in the EU.

  • Research Article
  • Citations: 134
  • 10.1016/j.isci.2020.101515
Who Gets Credit for AI-Generated Art?
  • Aug 29, 2020
  • iScience
  • Ziv Epstein + 3 more

Summary: The recent sale of an artificial intelligence (AI)-generated portrait for $432,000 at Christie's art auction has raised questions about how credit and responsibility should be allocated to individuals involved and how the anthropomorphic perception of the AI system contributed to the artwork's success. Here, we identify natural heterogeneity in the extent to which different people perceive AI as anthropomorphic. We find that differences in the perception of AI anthropomorphicity are associated with different allocations of responsibility to the AI system and credit to different stakeholders involved in art production. We then show that perceptions of AI anthropomorphicity can be manipulated by changing the language used to talk about AI—as a tool versus agent—with consequences for artists and AI practitioners. Our findings shed light on what is at stake when we anthropomorphize AI systems and offer an empirical lens to reason about how to allocate credit and responsibility to human stakeholders.

  • Research Article
  • 10.1200/jco.2025.43.16_suppl.e13650
Comparative analysis of deep learning model artificial intelligence and radiologists in breast tumor classification: A study in Uzbekistan.
  • Jun 1, 2025
  • Journal of Clinical Oncology
  • Umid Tokhtamuratov + 5 more

e13650 Background: To evaluate and compare the diagnostic performance of a deep learning-based artificial intelligence (AI) system versus three radiologists in the detection of breast cancer using digital mammography, specifically within the context of Uzbekistan, and to determine whether AI can serve as a reliable tool in this setting. Methods: This retrospective study utilized a dataset of mammograms, sourced from Uzbekistan, which were independently assessed by three radiologists and an AI system. The AI model, based on deep neural networks, was designed for automated breast cancer detection. The radiologists' interpretations and the AI predictions were compared against a reference standard of biopsy results. The primary outcome measures included the area under the receiver operating characteristic curve (AUC), accuracy, and specificity for both the AI system and the radiologists. The data underwent rigorous statistical analysis to establish the significance of the observed differences. The model was trained using data from multiple institutions in multiple countries. Results: The AI system demonstrated a significantly higher area under the curve (AUC of 0.89) compared to the average of the three radiologists (AUC of 0.82). The AI also showed higher specificity (93.0% versus 77.6%), and the recall rate for AI was three times lower than that of the radiologists. The AI was more sensitive in detecting cancers with mass, distortion, or asymmetry and better at detecting T1 or node-negative cancers. This result underscores AI's potential to reduce false positives while also detecting cancers missed by radiologists. The AI system's performance aligns with other studies showing AI sensitivity to be non-inferior to, or surpassing, that of radiologists. The statistical analysis showed that the AI system achieved robust accuracy and demonstrated potential as a reliable tool to enhance breast cancer screening outcomes. A separate study has also shown that AI can reduce the number of reads in a screening program by 41.4%. Conclusions: In this study the AI system outperformed the group of radiologists in terms of AUC, specificity, recall rate, and positive predictive value. These findings suggest that deep learning-based AI can significantly improve the detection of breast cancer in mammography and may serve as a valuable tool in the Uzbekistan healthcare setting. Additional studies that include larger, more heterogeneous datasets are warranted, and it is important to continue researching AI integration, including risk management and real-world follow-up of performance. Future studies should examine the impact of AI on screening performance when used by radiologists and assess the value of different models for various conditions.
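
For readers unfamiliar with the headline metric, the sketch below shows how an AUC comparison of the kind reported above can be computed, together with a paired bootstrap interval for the difference; the study's own statistical test may differ, and all labels, scores, and the sample size here are synthetic.

```python
# Hedged sketch of an AUC comparison like the one reported above, with a paired
# bootstrap interval for the difference (the study's own statistical test may
# differ). All labels, scores and the sample size are synthetic.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 500
y = rng.integers(0, 2, size=n)                       # stand-in for biopsy-confirmed labels
ai_score = y * 0.6 + rng.normal(0.3, 0.25, size=n)   # synthetic AI malignancy scores
rad_score = y * 0.4 + rng.normal(0.3, 0.30, size=n)  # synthetic averaged radiologist scores

auc_ai = roc_auc_score(y, ai_score)
auc_rad = roc_auc_score(y, rad_score)

diffs = []
for _ in range(2000):                                # paired bootstrap over cases
    idx = rng.integers(0, n, size=n)
    if len(set(y[idx])) < 2:                         # resample must contain both classes
        continue
    diffs.append(roc_auc_score(y[idx], ai_score[idx]) - roc_auc_score(y[idx], rad_score[idx]))
lo, hi = np.percentile(diffs, [2.5, 97.5])
print(f"AUC AI={auc_ai:.3f}, radiologists={auc_rad:.3f}, difference 95% CI [{lo:.3f}, {hi:.3f}]")
```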

  • Research Article
  • Citations: 7
  • 10.1016/j.bone.2024.117321
Utilizing artificial intelligence to determine bone mineral density using spectral CT
  • Nov 6, 2024
  • Bone
  • Yali Li + 7 more

Background: Dual-energy computed tomography (DECT) has demonstrated the feasibility of using HAP-water to respond to BMD changes without requiring dedicated software or calibration. Artificial intelligence (AI) has been utilized for diagnosing osteoporosis in routine CT scans but has rarely been used in DECT. This study investigated the diagnostic performance of an AI system for osteoporosis screening using DECT images, with quantitative CT (QCT) as the reference. Methods: This prospective study included 120 patients who underwent DECT and QCT scans from August to December 2023. Two convolutional neural networks, 3D RetinaNet and U-Net, were employed for automated vertebral body segmentation. The accuracy of the bone mineral density (BMD) measurement was assessed with the relative measurement error (RME%). Linear regression and Bland–Altman analyses were performed to compare the BMD values from the AI and manual systems with those of QCT. The diagnostic performance of the AI and manual systems for osteoporosis and low BMD was evaluated using receiver operating characteristic curve analysis. Results: The overall mean RME% for the AI and manual systems was −15.93 ± 12.05% and −25.47 ± 14.83%, respectively. BMD measurements using the AI system achieved greater agreement with the QCT results than those using the manual system (R² = 0.973 and 0.948, p < 0.001; mean errors, 23.27 and 35.71 mg/cm³; 95% LoA, −9.72 to 56.26 and −11.45 to 82.87 mg/cm³). The areas under the curve for the AI and manual systems were 0.979 and 0.933 for detecting osteoporosis and 0.980 and 0.991 for low BMD. Conclusion: This AI system could achieve relatively high accuracy for automated BMD measurement on DECT scans, providing great potential for the follow-up of BMD in osteoporosis screening.
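
As a quick illustration of the two agreement measures quoted above, the following sketch computes the relative measurement error (RME%) and the Bland–Altman bias with 95% limits of agreement on synthetic numbers; none of the values are taken from the study.

```python
# Hedged sketch of the two agreement measures quoted above: relative measurement
# error (RME%) and Bland–Altman bias with 95% limits of agreement. The numbers
# are synthetic and do not reproduce the study's data.
import numpy as np

rng = np.random.default_rng(1)
qct = rng.uniform(60, 160, size=120)        # reference QCT BMD values, mg/cm^3
ai = qct + rng.normal(23, 17, size=120)     # synthetic AI-derived BMD measurements

rme_percent = 100 * (ai - qct) / qct        # relative measurement error per scan
print(f"RME% mean ± SD: {rme_percent.mean():.2f} ± {rme_percent.std(ddof=1):.2f}")

diff = ai - qct                             # Bland–Altman: bias and limits of agreement
bias = diff.mean()
half_width = 1.96 * diff.std(ddof=1)
print(f"bias {bias:.2f} mg/cm^3, 95% LoA [{bias - half_width:.2f}, {bias + half_width:.2f}]")
```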

  • Research Article
  • Citations: 21
  • 10.1038/s41598-022-10739-2
Utility of an artificial intelligence system for classification of esophageal lesions when simulating its clinical use
  • Apr 23, 2022
  • Scientific Reports
  • Ayaka Tajiri + 16 more

Previous reports have shown favorable performance of artificial intelligence (AI) systems for diagnosing esophageal squamous cell carcinoma (ESCC) compared with endoscopists. However, these findings don’t reflect performance in clinical situations, as endoscopists classify lesions based on both magnified and non-magnified videos, while AI systems often use only a few magnified narrow band imaging (NBI) still images. We evaluated the performance of the AI system in simulated clinical situations. We used 25,048 images from 1433 superficial ESCC and 4746 images from 410 noncancerous esophagi to construct our AI system. For the validation dataset, we took NBI videos of suspected superficial ESCCs. The AI system diagnosis used one magnified still image taken from each video, while 19 endoscopists used whole videos. We used 147 videos and still images including 83 superficial ESCC and 64 non-ESCC lesions. The accuracy, sensitivity and specificity for the classification of ESCC were, respectively, 80.9% [95% CI 73.6–87.0], 85.5% [76.1–92.3], and 75.0% [62.6–85.0] for the AI system and 69.2% [66.4–72.1], 67.5% [61.4–73.6], and 71.5% [61.9–81.0] for the endoscopists. The AI system correctly classified all ESCCs invading the muscularis mucosa or submucosa and 96.8% of lesions ≥ 20 mm, whereas even the experts diagnosed some of them as non-ESCCs. Our AI system showed higher accuracy for classifying ESCC and non-ESCC than endoscopists. It may provide valuable diagnostic support to endoscopists.
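
The accuracy, sensitivity, and specificity figures above are proportions reported with exact confidence intervals. The sketch below shows one common way such figures can be computed (Clopper–Pearson intervals via statsmodels); the confusion-matrix counts are hypothetical, not the study's data.

```python
# Hedged sketch showing how accuracy, sensitivity and specificity with exact
# (Clopper–Pearson) 95% confidence intervals, as quoted above, can be computed.
# The confusion-matrix counts below are hypothetical, not the study's data.
from statsmodels.stats.proportion import proportion_confint


def rate_with_ci(successes: int, total: int) -> str:
    lo, hi = proportion_confint(successes, total, alpha=0.05, method="beta")
    return f"{100 * successes / total:.1f}% [{100 * lo:.1f}-{100 * hi:.1f}]"


tp, fn = 71, 12   # hypothetical: lesions correctly classified as ESCC / missed
tn, fp = 48, 16   # hypothetical: non-ESCC correctly classified / over-called

print("sensitivity:", rate_with_ci(tp, tp + fn))
print("specificity:", rate_with_ci(tn, tn + fp))
print("accuracy:   ", rate_with_ci(tp + tn, tp + fn + tn + fp))
```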

  • Research Article
  • Citations: 10
  • 10.1016/j.euf.2024.07.003
A Novel Deep Learning–based Artificial Intelligence System for Interpreting Urolithiasis in Computed Tomography
  • Dec 1, 2024
  • European Urology Focus
  • Jin Kim + 8 more

  • Research Article
  • Citations: 221
  • 10.1016/s1470-2045(24)00220-1
Artificial intelligence and radiologists in prostate cancer detection on MRI (PI-CAI): an international, paired, non-inferiority, confirmatory study
  • Jun 11, 2024
  • The Lancet Oncology
  • Anindo Saha + 99 more

  • Preprint Article
  • Citations: 2
  • 10.2196/preprints.64791
Comparative Analysis of AI Systems and Human Nutrition Knowledge: Evaluating ChatGPT and Other AI Systems Against Dietetics Students and the General Population (Preprint)
  • Jul 26, 2024
  • Nicola Luigi Bragazzi + 4 more

BACKGROUND Understanding the core principles of nutrition is essential in today’s world of abundant and often contradictory dietary advice, as it empowers individuals to make informed dietary choices, which are crucial for maintaining a proper diet and managing diet-related Non-Communicable Diseases (NCDs). The role of Artificial Intelligence (AI) systems in providing nutritional information is increasingly prominent, but their reliability in this domain is not yet well established. OBJECTIVE This study compares the nutrition knowledge of state-of-the-art AI systems (ChatGPT-4, Bard, Copilot, and ChatGPT-3.5) with human subjects having different levels of nutrition knowledge. METHODS The “General Nutrition Knowledge Questionnaire–Revised” (GNKQ-R) was administered to four AI systems and human subjects. The AI systems were tested using zero-shot prompts. Responses were scored according to the GNKQ-R guidelines across four sections: “Dietary Recommendations”; “Food Groups”; “Healthy Food Choices”; “Diet, Disease and Weight Management”. Human subjects were grouped based on their academic background (dietetics vs English students), age, sex/gender, education level, and health status. RESULTS The average performance of the AI systems across all four LLMs was 77.3 ± 5.1 out of 88, which was comparable to that of the dietetics students and significantly higher than that of the English students. ChatGPT-4 scored highest among the AI systems (82/88), surpassing both groups of students (dietetics: 79.3/88, English: 67.7/88) as well as all other demographic groups. In “Dietary Recommendations”, ChatGPT-4 and ChatGPT-3.5 nearly matched dietetics students. ChatGPT-4 excelled in “Food Groups”, outperforming all human groups. In “Healthy Food Choices”, ChatGPT-4 achieved a perfect score, indicating a deep understanding. ChatGPT-3.5 excelled in “Diet, Disease and Weight Management”. Variations in the performances of the AI systems across different sections were observed, suggesting knowledge gaps in certain areas. AI systems, particularly ChatGPT-4 and ChatGPT-3.5, showed proficiency in nutrition knowledge, rivaling or surpassing dietetics students in certain sections. This indicates their potential utility in nutritional guidance. However, there are nuances and specific details where AI systems fall short of specialized human education. The study highlights the potential of AI in public health and educational settings but also underscores the value of expert human judgment. CONCLUSIONS AI systems show promise in understanding complex subjects like nutrition and can be a valuable adjunct educational tool. However, specialized human education and expertise remain irreplaceable, emphasizing the need to combine AI system insights with expert human judgment in nutrition and dietetics.
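
As an illustration of the scoring step described above, the sketch below tallies multiple-choice answers per questionnaire section against an answer key; the question IDs, sections, and key are invented placeholders, not actual GNKQ-R items.

```python
# Hedged sketch of the scoring step described above: tallying multiple-choice
# answers per questionnaire section against an answer key. Question IDs,
# sections and the key are invented placeholders, not actual GNKQ-R items.
from collections import defaultdict

ANSWER_KEY = {  # question id -> (section, correct option)
    "q1": ("Dietary Recommendations", "B"),
    "q2": ("Food Groups", "A"),
    "q3": ("Healthy Food Choices", "C"),
    "q4": ("Diet, Disease and Weight Management", "A"),
}


def score(responses: dict) -> dict:
    """Return per-section scores plus a total for one respondent or AI system."""
    per_section = defaultdict(int)
    for qid, (section, correct) in ANSWER_KEY.items():
        per_section[section] += int(responses.get(qid) == correct)
    result = dict(per_section)
    result["total"] = sum(per_section.values())
    return result


print(score({"q1": "B", "q2": "D", "q3": "C", "q4": "A"}))  # one wrong answer -> total of 3
```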
