Tears and rain – AI and authorship: Challenging concepts of the unique through unauthored rarity
This article explores the evolving nature of authorship and meaning-making in illustration against the backdrop of artificial intelligence (AI)-generative art technologies and image saturation. Moving beyond surface debates around AI obsolescence, it aims to reframe the discourse through an interdisciplinary lens of theory, visual culture and futurism. The central hypothesis posits that, in our eagerness to declare the ‘death’ of authorial intent, we may have failed to recognize how meaning has migrated rather than disappeared. By synthesizing postmodern theory with an altermodernist framework, specifically through the combination of Fredric Jameson’s ‘schizophrenic’ visual culture with Nicolas Bourriaud’s concept of the ‘Semionaut’, the article claims that authorial significance has shifted into uncharted territories we remain blind to – with critical repercussions for illustrators’ roles as cultural curators in an era where users of AI can infinitely generate ‘unique’ images of high quality without any illustrative ability. Perhaps the true value of the illustrative arts lies in cultivating ‘rareness’ – contextually embedded artefacts imbued with intention that cut through visual noise. This pivot has profound implications for professional practice, ethics and training, and the article aims to initiate new dialogues examining these future-facing considerations. This scholarly inquiry emerges from over a decade of critically investigating the impact of computational advances on image production and reception, identifying the hidden influence of postmodernism as it accelerates through the new technical abilities provided by AI and suggests new authorial mutations.
- Research Article
- 10.1051/e3sconf/202562203005
- Jan 1, 2025
- E3S Web of Conferences
The development of Artificial Intelligence (AI) in digital transformation has a significant impact on people's lives. AI is able to solve complex problems with high accuracy and produce creations, which raises problems and negative impacts related to copyright. The absence of explicit rules governing AI-generated creations leads to legal uncertainty, and the 2014 Copyright Law does not fully cover AI-generated works. Legal analysis shows that AI is merely a computer program that performs tasks based on human commands, using algorithms and computational training for recognition, prediction, and decision-making. This research aims to analyze the legal position of AI and the legal status of creations generated by AI. As for legal consequences, the creator or user of AI is legally responsible when AI infringes the copyrights of others. This study uses qualitative research methods with a legislative and conceptual approach. It concludes that AI has a role as a producer of creation and innovation; however, AI is not a subject of law, and the legal consequences of works produced by AI depend on the legal responsibilities of the creators or users of the relevant AI.
- Research Article
114
- 10.1287/stsc.2021.0148
- Oct 11, 2021
- Strategy Science
We analyze the sectoral and national systems of firms and institutions that collectively engage in artificial intelligence (AI). Moving beyond the analysis of AI as a general-purpose technology or its particular areas of application, we draw on the evolutionary analysis of sectoral systems and ask, “Who does what?” in AI. We provide a granular view of the complex interdependency patterns that connect developers, manufacturers, and users of AI. We distinguish between AI enablement, AI production, and AI consumption and analyze the emerging patterns of cospecialization between firms and communities. We find that AI provision is characterized by the dominance of a small number of Big Tech firms, whose downstream use of AI (e.g., search, payments, social media) has underpinned much of the recent progress in AI and who also provide the necessary upstream computing power provision (Cloud and Edge). These firms dominate top academic institutions in AI research, further strengthening their position. We find that AI is adopted by and benefits the small percentage of firms that can both digitize and access high-quality data. We consider how the AI sector has evolved differently in the three key geographies—China, the United States, and the European Union—and note that a handful of firms are building global AI ecosystems. Our contribution is to showcase the evolution of evolutionary thinking with AI as a case study: we show the shift from national/sectoral systems to triple-helix/innovation ecosystems and digital platforms. We conclude with the implications of such a broad evolutionary account for theory and practice.
- Discussion
- 10.1097/acm.0000000000004872
- Sep 23, 2022
- Academic Medicine
We would like to thank Webster for his insight on our article, “Artificial intelligence in undergraduate medical education: A scoping review.” Our article highlights the differing views on the impact of artificial intelligence (AI) in medicine. As noted by Webster, studies, including the one by Masters, 1 argue for the disruptive potential of AI tools, postulating the (almost) complete replacement of physicians by AI systems. This perspective contrasts with most studies that discuss the diagnostic and predictive role of AI tools and their capacity to process large data to aid physicians’ medical decision making. Therefore, while we agree that the complete replacement of physicians by AI is unlikely as of now, the expanding role of AI in health care is evident in the growing literature on collaboration between clinician experts and AI. 2 Furthermore, we agree that the transparency of AI tools is critical to becoming mindful users of AI in the health care setting. However, we cannot completely understand AI’s decision-making process. This has led to pitfalls, including those highlighted by Webster, where an AI system assessed patients with pneumonia and asthma to be at a lower risk of complications than patients with pneumonia alone, as it failed to account for the variable of intensive care unit admission. 3 However, this limitation of AI tools is not a reason to discount AI. Instead, it highlights the importance of AI training in medicine and the involvement of physicians during the development of AI tools. As users of AI, physicians must understand its strengths and limitations, identifying the variables involved in its decision making to ensure the validity of its algorithms. Hence, we believe that the transparency of AI tools is not a binary outcome but a goal that we must continuously strive to achieve. This effort is critical to avoiding the phenomenon of black box AI, especially as the impact of AI on health care is continuing and imminent.
- Research Article
57
- 10.3390/educsci13060609
- Jun 15, 2023
- Education Sciences
Artificial Intelligence (AI) is a disruptive technology that nowadays has countless applications in many day-to-day and professional domains. Higher education institutions need to adapt both to changes in their processes and to changes in curricula brought on by AI. Studying students’ attitudes toward AI can be useful for analyzing what changes in AI teaching need to be implemented. This article uses an electronic survey to study the attitudes of Spanish students in the fields of economics and business management and education. A learning experience was also implemented with a small subset of students as a hands-on introduction to AI, where students were prompted to reflect on their experiences as users of AI. The results show that students are aware of AI’s impact and are willing to further their education in AI, although their current knowledge is limited due to a lack of training. We believe that AI education should be expanded and improved, especially by presenting realistic use cases and the real limitations of the technology, so that students are able to use AI confidently and responsibly in their professional future.
- Research Article
7
- 10.3390/jpm13060962
- Jun 7, 2023
- Journal of Personalized Medicine
In the past vicennium, several artificial intelligence (AI) and machine learning (ML) models have been developed to assist in medical diagnosis, decision making, and design of treatment protocols. The number of active pathologists in Poland is low, prolonging tumor patients' diagnosis and treatment journey; hence, applying AI and ML may aid in this process. Therefore, our study aims to investigate the knowledge of AI and ML methods in the clinical field among pathologists in Poland. To our knowledge, no similar study has been conducted. We conducted a cross-sectional study targeting pathologists in Poland from June to July 2022. The questionnaire included self-reported information on AI or ML knowledge, experience, specialization, personal thoughts, and level of agreement with different aspects of AI and ML in medical diagnosis. Data were analyzed using IBM® SPSS® Statistics v.26, PQStat Software v.1.8.2.238, and RStudio Build 351. Overall, 68 pathologists in Poland participated in our study. Their average age and years of experience were 38.92 ± 8.88 and 12.78 ± 9.48 years, respectively. Approximately 42% used AI or ML methods, revealing a significant knowledge gap relative to those who had never used them (OR = 17.9, 95% CI = 3.57-89.79, p < 0.001). Additionally, users of AI had higher odds of reporting satisfaction with the speed of AI in the medical diagnosis process (OR = 4.66, 95% CI = 1.05-20.78, p = 0.043). Finally, significant differences (p = 0.003) were observed in views on liability for legal issues arising from AI and ML methods. Most pathologists in this study did not use AI or ML models, highlighting the importance of increasing awareness and educational programs regarding applying AI and ML in medical diagnosis.
- Research Article
- 10.26623/julr.v7i2.9026
- Jul 7, 2024
- JURNAL USM LAW REVIEW
The purpose of the research is to study and analyze the issue of the use of artificial intelligence (AI) by banks that results in losses to customers. In response, this paper argues that banks as AI users should be held criminally liable despite the lack of mens rea in banks as corporations, which are legal entities. For this reason, this paper uses identification theory as the analytical tool for bank criminal liability. The issue is examined through normative legal research with a statute approach and a conceptual approach. The results of the study show that, based on identification theory, the bank as an AI user can be held criminally liable, with the public prosecutor required to identify the person who committed the criminal act (actus reus), namely the management as the controlling personnel (directing mind or controlling mind). The locus of AI's mens rea lies in the corporate controller's approval to use AI, meaning that this approval is interpreted as the inner attitude of the controller accepting the risks arising from the use of AI.
- Research Article
8
- 10.1287/ijds.2023.0007
- Apr 1, 2023
- INFORMS Journal on Data Science
How Can IJDS Authors, Reviewers, and Editors Use (and Misuse) Generative AI?
- Research Article
26
- 10.3389/fgene.2022.902542
- Aug 15, 2022
- Frontiers in Genetics
Introduction: “Democratizing” artificial intelligence (AI) in medicine and healthcare is a vague term that encompasses various meanings, issues, and visions. This article maps the ways this term is used in discourses on AI in medicine and healthcare and uses this map for a normative reflection on how to direct AI in medicine and healthcare towards desirable futures. Methods: We searched peer-reviewed articles from Scopus, Google Scholar, and PubMed along with grey literature using search terms “democrat*”, “artificial intelligence” and “machine learning”. We approached both as documents and analyzed them qualitatively, asking: What is the object of democratization? What should be democratized, and why? Who is the demos who is said to benefit from democratization? And what kind of theories of democracy are (tacitly) tied to specific uses of the term? Results: We identified four clusters of visions of democratizing AI in healthcare and medicine: 1) democratizing medicine and healthcare through AI, 2) multiplying the producers and users of AI, 3) enabling access to and oversight of data, and 4) making AI an object of democratic governance. Discussion: The envisioned democratization in most visions mainly focuses on patients as consumers and relies on or limits itself to free market-solutions. Democratization in this context requires defining and envisioning a set of social goods, and deliberative processes and modes of participation to ensure that those affected by AI in healthcare have a say on its development and use.
- Research Article
1
- 10.1177/23779608251330866
- Jan 1, 2025
- SAGE Open Nursing
The use of artificial intelligence (AI) in healthcare in general and scientific research in particular has become increasingly prevalent, as it holds great promise for optimizing research processes and outcomes. This study described predictors of and differences in students' perceptions of the risks and benefits of using AI in nursing research. A quantitative cross-sectional study was implemented utilizing a convenience sample of 434 nursing students from a governmental university. Data were analyzed using descriptive and inferential statistics. Nursing students perceived AI in nursing research positively, with an overall mean score of 3.24/5 (SE = .024). Their feelings about AI were generally positive (Mean = 3.54/5; SE = .049; 95% CI = 3.45-3.64). Perceived risks of using AI in research were high (Mean = 1.59/2, SE = .016), especially concerning liability issues (Mean = 3.50/5, SE = .031), communication barriers (Mean = 3.48, SE = .035), unregulated standards (Mean = 3.37, SE = .034), privacy concerns (Mean = 3.37, SE = .034), social biases (Mean = 3.33, SE = .033), performance anxiety (Mean = 3.31, SE = .034), and mistrust in AI mechanisms (Mean = 3.28, SE = .032). The perceived benefits were also high (Mean = 3.46, SE = .030), with a strong intention to use AI-based tools (Mean = 3.52, SE = .033). Key predictors were a high GPA and training in public hospitals. AI in nursing research has many benefits; however, it comes with risks that need immediate management. Nursing students' GPAs and the hospitals where they received their training were often the key factors that shaped how well they understood the use of AI in nursing research. High-achieving students trained in public and teaching hospitals tend to be better users of AI in nursing research.
- Research Article
1
- 10.2139/ssrn.3052154
- Oct 13, 2017
- SSRN Electronic Journal
This paper explores issues relating to two fundamental questions: how to mitigate the risks associated with the constant expansion in the autonomy of artificial intelligence (AI), and how to make people liable for their AI creations. The aim of this paper is only to highlight issues fundamental to the constituents of a legal regulatory structure for autonomous AI systems at the following levels: (i) desirable attributes of AI, (ii) legal approach and principles to determine liability, and (iii) contractual arrangements governing the relationship between developers and users of AI.
- Research Article
17
- 10.31992/0869-3617-2022-31-7-79-95
- Jul 21, 2022
- Vysshee Obrazovanie v Rossii = Higher Education in Russia
Due to the growing interest in artificial intelligence in recent years, teaching this discipline to students of applied technical specialties is becoming relevant. Although this scientific field has been developing for almost 70 years, there is still no clear understanding of its terminology, its tasks at the present stage, or its application in engineering education. Moreover, artificial intelligence terminology often misleads students. The article examines the current state of ideas related to artificial intelligence and the possibilities of using it in engineering education. Based on an analysis of the real capabilities of artificial intelligence, the actual content of education in the discipline “Artificial intelligence in transport construction” is determined. The article focuses on users of artificial intelligence, not developers. The authors consider the competencies of a specialist that can be formed during the study of the above discipline, as well as new relevant competencies that specialists need given the wide dissemination of artificial intelligence in their professional activity. The functional model of artificial intelligence used in teaching students how to interact with it is considered. The article gives examples of tasks that students currently solve with the help of artificial intelligence technology during trial training.
- Research Article
1
- 10.61404/jimi.v1i1.4
- Jul 18, 2023
- Mutiara : Jurnal Ilmiah Multidisiplin Indonesia
This article discusses issues related to the existence of artificial intelligence as a legal subject, and criminal liability when artificial intelligence commits criminal acts. The purpose of this study is to determine whether artificial intelligence should be categorized as a legal object or legal subject, and to whom criminal responsibility is assigned when artificial intelligence commits a crime. The research method used in writing this article is normative legal research with a conceptual approach. The results of this study are that artificial intelligence is not a legal subject, because the actions carried out by artificial intelligence are only orders from its users, and for criminal acts committed by artificial intelligence, responsibility falls on the creators or users of the artificial intelligence.
- Research Article
2
- 10.1088/1742-6596/1399/3/033098
- Dec 1, 2019
- Journal of Physics: Conference Series
It is assumed that artificial intelligence systems will increasingly invade social space and rebuild the system of social ties, and that this restructuring will affect all areas of human activity. Already, our lives depend in many respects on decisions made by artificial intelligence. The article discusses artificial intelligence algorithms that can be used in business. As artificial intelligence becomes more popular, most of a company’s employees will need to undergo training to become users of artificial intelligence. They will learn how to use corporate applications based on artificial intelligence, manage data properly, and seek help from experts if necessary. The task is to find solutions that will help, in the near future, to automate routine, template processes – that is, tasks that do not require special skills but take up the time of qualified employees.
- Research Article
2
- 10.1097/jte.0000000000000381
- Oct 21, 2024
- Journal of Physical Therapy Education
Generative artificial intelligence (AI) is rapidly gaining popularity across health care, education, and society. The purpose of this study was to assess perceptions and use of generative AI in academic physical therapy (PT). Generative AI became one of the fastest-growing technologies ever after the public release of ChatGPT in November 2022. Early data indicate that attitudes toward generative AI in higher education are mixed and rapidly evolving, with significant ethical concerns around use and potential misuse. There are no published studies investigating perceptions and use of generative AI in PT education. A total of 175 surveys were completed and analyzed. Respondents included PT educators, administrators, and students. An anonymous, online survey on use and perception of AI was distributed through email and social media. Descriptive statistics and cross-tabulations were performed to analyze respondent characteristics and responses to survey questions. Most respondents (61.1%) reported they did not use generative AI during the 2022-2023 academic year, whereas 35.4% were generative AI users. More than 40% of respondents were optimistic or very optimistic toward generative AI. Users of AI were more likely to report an optimistic or very optimistic disposition toward AI compared with nonusers. AI users were more likely to agree or completely agree that generative AI has more benefits than drawbacks compared with nonusers. Results of this survey suggest that, despite the rapid uptake of generative AI in society, many PT educators and students harbor reservations and uncertainties toward its use. Artificial intelligence users were less likely to hold negative perceptions toward it and were more likely to find it useful. Understanding use and perceptions of generative AI in PT education may inform strategies to promote innovation, policy-making, and ethical integration of this new and rapidly evolving technology.
- Research Article
170
- 10.1108/jices-12-2019-0138
- Jun 9, 2020
- Journal of Information, Communication and Ethics in Society
Purpose There is a significant amount of research into the ethical consequences of artificial intelligence (AI), reflected in many outputs across academia, policy and the media. Many of these outputs aim to provide guidance to particular stakeholder groups, and it has recently been shown that there is a large degree of convergence in the principles upon which these guidance documents are based. Despite this convergence, it is not always clear how these principles are to be translated into practice. The purpose of this paper is to clearly illustrate this convergence and the prescriptive recommendations that such documents entail. Design/methodology/approach In this paper, the authors move beyond the high-level ethical principles that are common across the AI ethics guidance literature and provide a description of the normative content that is covered by these principles. The outcome is a comprehensive compilation of normative requirements arising from existing guidance documents. This is required not only for a deeper theoretical understanding of AI ethics discussions but also for the creation of practical and implementable guidance for developers and users of AI. Findings The authors provide a detailed explanation of the normative implications of existing AI ethics guidelines, directed towards developers and organisational users of AI. The authors believe that the paper provides the most comprehensive account of ethical requirements in AI currently available, which is of interest not only to the research and policy communities engaged in the topic but also to the user communities that require guidance when developing or deploying AI systems. Originality/value The authors believe that they have compiled the most comprehensive collection of existing guidance, which can guide practical action and will hopefully also support the consolidation of the guidelines landscape.
The authors’ findings should also be of academic interest and inspire philosophical research on the consistency and justification of the various normative statements that can be found in the literature.