Students’ Statistical Thinking when Using Generative AI: A Descriptive Case Study
Generative artificial intelligence (AI) technologies are transforming the world of education, but their impact on students’ thinking and learning remains unclear. In this study, we investigated AI’s potential role in supporting statistical thinking by interviewing undergraduate students (N = 5) as they completed a graphing task using Rtutor.AI, an AI-powered tool that integrates ChatGPT with R. The analysis yielded five key themes that describe students’ statistical thinking while using Rtutor.AI. The first theme demonstrated how the iterative and intuitive nature of prompting within Rtutor.AI shaped participants’ approaches to problem-solving. The next three themes—“Building statistical understanding through a step-by-step process,” “Identifying key elements of a problem to create specific prompts,” and “Lowering barriers to completion of statistical tasks”—illustrated how Rtutor.AI facilitated statistical thinking in various ways. The fifth theme showed that students’ prior statistical knowledge influenced their ability to interpret and contextualize Rtutor.AI’s output, and that Rtutor.AI did not fully absolve students of the need to think statistically. Overall, these results highlight both the potential benefits and risks of incorporating AI into statistics classrooms and can serve as an empirical basis for future scholarship aimed at creating scaffolding around AI technologies to support statistics instruction.
- Discussion
6
- 10.1016/j.ebiom.2023.104672
- Jul 1, 2023
- eBioMedicine
Response to M. Trengove & coll regarding "Attention is not all you need: the complicated case of ethically using large language models in healthcare and medicine".
- Research Article
34
- 10.5204/mcj.3004
- Oct 2, 2023
- M/C Journal
Introduction Author Arthur C. Clarke famously argued that in science fiction literature “any sufficiently advanced technology is indistinguishable from magic” (Clarke). On 30 November 2022, technology company OpenAI publicly released their Large Language Model (LLM)-based chatbot ChatGPT (Chat Generative Pre-Trained Transformer), and instantly it was hailed as world-changing. Initial media stories about ChatGPT highlighted the speed with which it generated new material as evidence that this tool might be both genuinely creative and actually intelligent, in both exciting and disturbing ways. Indeed, ChatGPT is part of a larger pool of Generative Artificial Intelligence (AI) tools that can very quickly generate seemingly novel outputs in a variety of media formats based on text prompts written by users. Yet, claims that AI has become sentient, or has even reached a recognisable level of general intelligence, remain in the realm of science fiction, for now at least (Leaver). That has not stopped technology companies, scientists, and others from suggesting that super-smart AI is just around the corner. Exemplifying this, the same people creating generative AI are also vocal signatories of public letters that ostensibly call for a temporary halt in AI development, but these letters are simultaneously feeding the myth that these tools are so powerful that they are the early form of imminent super-intelligent machines. For many people, the combination of AI technologies and media hype means generative AIs are basically magical insomuch as their workings seem impenetrable, and their existence could ostensibly change the world. This article explores how the hype around ChatGPT and generative AI was deployed across the first six months of 2023, and how these technologies were positioned as either utopian or dystopian, always seemingly magical, but never banal. We look at some initial responses to generative AI, ranging from schools in Australia to picket lines in Hollywood. 
We offer a critique of the utopian/dystopian binary positioning of generative AI, aligning with critics who rightly argue that focussing on these extremes displaces the more grounded and immediate challenges generative AI brings that need urgent answers. Finally, we loop back to the role of schools and educators in repositioning generative AI as something to be tested, examined, scrutinised, and played with, both to ground understandings of generative AI and to prepare today’s students for a future where these tools will be part of their work and cultural landscapes. Hype, Schools, and Hollywood In December 2022, one month after OpenAI launched ChatGPT, Elon Musk tweeted: “ChatGPT is scary good. We are not far from dangerously strong AI”. Musk’s post was retweeted 9,400 times, liked 73,000 times, and presumably seen by most of his 150 million Twitter followers. This type of engagement typified the early hype and language that surrounded the launch of ChatGPT, with reports that “crypto” had been replaced by generative AI as the “hot tech topic” and hopes that it would be “‘transformative’ for business” (Browne). By March 2023, global economic analysts at Goldman Sachs had released a report on the potentially transformative effects of generative AI, saying that it marked the “brink of a rapid acceleration in task automation that will drive labor cost savings and raise productivity” (Hatzius et al.). Further, they concluded that “its ability to generate content that is indistinguishable from human-created output and to break down communication barriers between humans and machines reflects a major advancement with potentially large macroeconomic effects” (Hatzius et al.).
Speculation about the potentially transformative power and reach of generative AI technology was reinforced by warnings that it could also lead to “significant disruption” of the labour market, and the potential automation of up to 300 million jobs, with associated job losses for humans (Hatzius et al.). In addition, there was widespread buzz that ChatGPT’s “rationalization process may evidence human-like cognition” (Browne), claims bolstered by the emergent language surrounding ChatGPT. The technology was explained as being “trained” on a “corpus” of datasets, using a “neural network” capable of producing “natural language” (Dsouza), positioning the technology as human-like, and more than ‘artificial’ intelligence. Incorrect responses or errors produced by the tech were termed “hallucinations”, akin to magical thinking, which OpenAI founder Sam Altman insisted wasn’t a word that he associated with sentience (Intelligencer staff). Indeed, Altman asserts that he rejects moves to “anthropomorphize” (Intelligencer staff) the technology; however, arguably the language, hype, and Altman’s well-publicised misgivings about ChatGPT have had the combined effect of shaping our understanding of this generative AI as alive, vast, fast-moving, and potentially lethal to humanity. Unsurprisingly, the hype around the transformative effects of ChatGPT and its ability to generate ‘human-like’ answers and sophisticated essay-style responses was matched by a concomitant panic throughout educational institutions. The beginning of the 2023 Australian school year was marked by schools and state education ministers meeting to discuss the emerging problem of ChatGPT in the education system (Hiatt). Every state in Australia, bar South Australia, banned the use of the technology in public schools, with a “national expert task force” formed to “guide” schools on how to navigate ChatGPT in the classroom (Hiatt).
Globally, schools banned the technology amid fears that students could use it to generate convincing essay responses whose plagiarism would be undetectable with current software (Clarence-Smith). Some schools banned the technology citing concerns that it would have a “negative impact on student learning”, while others cited its “lack of reliable safeguards preventing these tools exposing students to potentially explicit and harmful content” (Cassidy). ChatGPT investor Musk famously tweeted, “It’s a new world. Goodbye homework!”, further fuelling the growing alarm about the freely available technology that could “churn out convincing essays which can't be detected by their existing anti-plagiarism software” (Clarence-Smith). Universities were reported to be moving towards more “in-person supervision and increased paper assessments” (SBS), rather than essay-style assessments, in a bid to out-manoeuvre ChatGPT’s plagiarism potential. Seven months on, concerns about the technology seem to have been dialled back, with educators more curious about the ways the technology can be integrated into the classroom to good effect (Liu et al.); however, the full implications and impacts of generative AI are still emerging. In May 2023, the Writers Guild of America (WGA), the union representing screenwriters across the US creative industries, went on strike, and one of their core issues was “regulations on the use of artificial intelligence in writing” (Porter). Early in the negotiations, Chris Keyser, co-chair of the WGA’s negotiating committee, lamented that “no one knows exactly what AI’s going to be, but the fact that the companies won’t talk about it is the best indication we’ve had that we have a reason to fear it” (Grobar).
At the same time, the Screen Actors’ Guild (SAG) warned that members were being asked to agree to contracts that stipulated that an actor’s voice could be re-used in future scenarios without that actor’s additional consent, potentially reducing actors to a dataset to be animated by generative AI technologies (Scheiber and Koblin). In a statement issued by SAG, they made their position clear that the creation or (re)animation of any digital likeness of any part of an actor must be recognised as labour and properly paid, also warning that any attempt to legislate around these rights should be strongly resisted (Screen Actors Guild). Unlike the more sensationalised hype, the WGA and SAG responses to generative AI are grounded in labour relations. These unions quite rightly fear the immediate future where human labour could be augmented, reclassified, and exploited by, and in the name of, algorithmic systems. Screenwriters, for example, might be hired at much lower pay rates to edit scripts first generated by ChatGPT, even if those editors would really be doing most of the creative work to turn something clichéd and predictable into something more appealing. Rather than a dystopian world where machines do all the work, the WGA and SAG protests railed against a world where workers would be paid less because executives could pretend generative AI was doing most of the work (Bender). The Open Letter and Promotion of AI Panic In an open letter that received enormous press and media uptake, many of the leading figures in AI called for a pause in AI development since “advanced AI could represent a profound change in the history of life on Earth”; they warned early 2023 had already seen “an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control” (Future of Life Institute). 
Further, the open letter signatories called on “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4”, arguing that “labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts” (Future of Life Institute). Notably, many of the signatories work for the very companies involved in the “out-of-control race”. Indeed, while this letter could be read as a moment of ethical clarity for the AI industry, a more cynical reading might just be that, in warning that their AIs could effectively destroy the world, the companies behind these tools were simultaneously feeding the myth of just how powerful they are.
- Research Article
162
- 10.3390/cancers12123532
- Nov 26, 2020
- Cancers
Simple Summary: Artificial intelligence (AI) technology has been advancing rapidly in recent years and is being implemented in society. The medical field is no exception, and the clinical implementation of AI-equipped medical devices is steadily progressing. In particular, AI is expected to play an important role in realizing the current global trend of precision medicine. In this review, we introduce the history of AI as well as the state of the art of medical AI, focusing on the field of oncology. We also describe the current status of the use of AI for drug discovery in the oncology field. Furthermore, while AI has great potential, there are still many issues that need to be resolved; therefore, we provide details on current medical AI problems and potential solutions. In recent years, advances in artificial intelligence (AI) technology have led to the rapid clinical implementation of devices with AI technology in the medical field. More than 60 AI-equipped medical devices have already been approved by the Food and Drug Administration (FDA) in the United States, and the active introduction of AI technology is considered to be an inevitable trend in the future of medicine. In the field of oncology, clinical applications of medical devices using AI technology are already underway, mainly in radiology, and AI technology is expected to be positioned as an important core technology. In particular, “precision medicine,” a medical treatment that selects the most appropriate treatment for each patient based on a vast amount of medical data such as genome information, has become a worldwide trend; AI technology is expected to be utilized in the process of extracting truly useful information from a large amount of medical data and applying it to diagnosis and treatment.
In this review, we would like to introduce the history of AI technology and the current state of medical AI, especially in the oncology field, as well as discuss the possibilities and challenges of AI technology in the medical field.
- Research Article
- 10.52554/kjcl.2024.107.225
- Jun 30, 2024
- The Korean Association of Civil Law
The recent development of artificial intelligence (AI) technology is bringing about changes at a faster pace and on a larger scale than in any other period in human history. With technological advancements overcoming the limitations of medical AI through training with databases, AI technology has made remarkable progress since the inception of deep learning for image processing with convolutional neural networks (CNN) in 2012. Recent advancements in natural language processing (NLP) have further accelerated the utilization of AI, enabling machines to identify and understand data regardless of the complexity of the language. This has laid the foundation for the rapid and precise development of generative AI. In an era where generative AI is being utilized even as its development continues apace, we considered the civil liability of AI under our civil law principles, taking into account the inherent characteristics of AI such as unpredictability, opacity, and the black box effect. To do this, we first examined the legal liability considering the stages of AI technology development in discussing the tort liability caused by AI. Even “Weak AI,” created by AI developers, may fall under “Gefahr” (danger), and while not all types do, some may be subject to strict liability in terms of risk liability. Furthermore, while reviewing civil liability applicable to AI under fault-based and no-fault liability, we also looked at the trends in the EU comparatively. In discussing no-fault liability, particularly under the Product Liability Act, we examined the possibility and implications of applying risk liability to pharmaceutical manufacturing using generative AI technology as a representative example to overcome the limitations of the existing Product Liability Act. Humanity currently lives in an era of rapid technological development and exploding big data, enjoying numerous benefits due to these advancements.
As user convenience improves and massive added value is created through technological progress, the meaning of risk liability in the realm of civil liability can gain more significance. Generative AI has already drastically reduced the costs and time required for new drug development, providing substantial profits to pharmaceutical companies. However, even if the existing Product Liability Act is applied, it may be difficult to adequately remedy the harm to victims due to the reasonable alternative possibility defense regarding design defects. In the era of generative AI, we examined the possibility of applying enhanced risk liability by assuming the case of pharmaceutical manufacturing.
- Research Article
137
- 10.3389/fpsyg.2022.971044
- Jan 17, 2023
- Frontiers in psychology
Advances in artificial intelligence (AI) technologies, together with the availability of big data in society, create uncertainties about how these developments will affect healthcare systems worldwide. Compassion is essential for high-quality healthcare, and research shows how prosocial caring behaviors benefit human health and societies. However, the possible association between AI technologies and compassion is under-conceptualized and underexplored. The aim of this scoping review is to provide comprehensive depth and a balanced perspective on the emerging topic of AI technologies and compassion, to inform future research and practice. The review questions were: How is compassion discussed in relation to AI technologies in healthcare? How are AI technologies being used to enhance compassion in healthcare? What are the gaps in current knowledge and unexplored potential? What are the key areas where AI technologies could support compassion in healthcare? A systematic scoping review was conducted following the five steps of the Joanna Briggs Institute methodology. Presentation of the scoping review conforms with PRISMA-ScR (Preferred Reporting Items for Systematic reviews and Meta-Analyses extension for Scoping Reviews). Eligibility criteria were defined according to three concept constructs (AI technologies, compassion, healthcare) developed from the literature and informed by medical subject headings (MeSH) and key words for the electronic searches. Sources of evidence were the Web of Science and PubMed databases, articles published in English between 2011 and 2022. Articles were screened by title/abstract using inclusion/exclusion criteria. Data extracted (author, date of publication, type of article, aim/context of healthcare, key relevant findings, country) was charted using data tables. Thematic analysis used an inductive-deductive approach to generate code categories from the review questions and the data.
A multidisciplinary team assessed themes for resonance and relevance to research and practice. Searches identified 3,124 articles. A total of 197 were included after screening. The number of articles has increased over 10 years (from n = 1 in 2011 to n = 47 in 2021, with n = 35 from Jan–Aug 2022). Overarching themes related to the review questions were: (1) Developments and debates (7 themes): Concerns about AI ethics, healthcare jobs, and loss of empathy; Human-centered design of AI technologies for healthcare; Optimistic speculation AI technologies will address care gaps; Interrogation of what it means to be human and to care; Recognition of future potential for patient monitoring, virtual proximity, and access to healthcare; Calls for curricula development and healthcare professional education; Implementation of AI applications to enhance health and wellbeing of the healthcare workforce. (2) How AI technologies enhance compassion (10 themes): Empathetic awareness; Empathetic response and relational behavior; Communication skills; Health coaching; Therapeutic interventions; Moral development learning; Clinical knowledge and clinical assessment; Healthcare quality assessment; Therapeutic bond and therapeutic alliance; Providing health information and advice. (3) Gaps in knowledge (4 themes): Educational effectiveness of AI-assisted learning; Patient diversity and AI technologies; Implementation of AI technologies in education and practice settings; Safety and clinical effectiveness of AI technologies. (4) Key areas for development (3 themes): Enriching education, learning and clinical practice; Extending healing spaces; Enhancing healing relationships. There is an association between AI technologies and compassion in healthcare, and interest in this association has grown internationally over the last decade.
In a range of healthcare contexts, AI technologies are being used to enhance empathetic awareness; empathetic response and relational behavior; communication skills; health coaching; therapeutic interventions; moral development learning; clinical knowledge and clinical assessment; healthcare quality assessment; therapeutic bond and therapeutic alliance; and to provide health information and advice. The findings inform a reconceptualization of compassion as a human-AI system of intelligent caring comprising six elements: (1) Awareness of suffering (e.g., pain, distress, risk, disadvantage); (2) Understanding the suffering (significance, context, rights, responsibilities etc.); (3) Connecting with the suffering (e.g., verbal, physical, signs and symbols); (4) Making a judgment about the suffering (the need to act); (5) Responding with an intention to alleviate the suffering; (6) Attention to the effect and outcomes of the response. These elements can operate at an individual (human or machine) and collective systems level (healthcare organizations or systems) as a cyclical system to alleviate different types of suffering. New and novel approaches to human-AI intelligent caring could enrich education, learning, and clinical practice; extend healing spaces; and enhance healing relationships. In a complex adaptive system such as healthcare, human-AI intelligent caring will need to be implemented, not as an ideology, but through strategic choices, incentives, regulation, professional education, and training, as well as through joined up thinking about human-AI intelligent caring. Research funders can encourage research and development into the topic of AI technologies and compassion as a system of human-AI intelligent caring. Educators, technologists, and health professionals can inform themselves about the system of human-AI intelligent caring.
- Research Article
2
- 10.3390/rel15010079
- Jan 9, 2024
- Religions
Humanistic Buddhism is one of the mainstreams of modern Buddhism, with special emphasis on the humanistic dimension. With the development of artificial intelligence (AI) technology, Humanistic Buddhism is also at an important stage of modernization and transformation, thus facing a continuous negotiation between religious values and technological innovations. This paper first argues that AI is technically beneficial to the propagation of Buddhism by citing several cases in which AI technology has been used in Buddhism. Then, by comparing Master Hsing Yun’s Buddhist ethics to “Posthuman” ethics, it points out that the theories of Humanistic Buddhism share similarities with AI and Posthuman ethics. Among them, Master Hsing Yun’s theory of “the nature of insentient beings” provides an important theoretical reference for the question of “whether AI can become a Buddha”. From the technical and ethical dimensions, it points out that the interaction between Humanistic Buddhism and AI can promote original uses or implementations of AI technology. However, it should also be noted that compared to the cases of “Artificial Narrow Intelligence” discussed in the paper, “Strong AI” could lead to far more serious ethical crises. It is also likely to foster a cult of science and technology, and thus subvert the humanistic tradition of Buddhism with a new instrumental rationality. In addition, there are some potential pitfalls that Humanistic Buddhism may encounter when using AI. Hence, while it is necessary to encourage the use of technologies such as AI in contemporary Buddhism, it is also important for Buddhism to keep a critical distance from digital technologies.
- Preprint Article
- 10.5194/egusphere-egu25-19301
- Mar 15, 2025
The rapid advancement of artificial intelligence (AI) technology is driving transformative changes across society. However, this progress also entails significant resource and energy demand, posing substantial new challenges to the Earth’s ecosystems. Specifically, the environmental impacts arising from AI model training and inference, data center operations, and the manufacturing and disposal of electronic devices threaten the balance of ecosystem material cycles and could exacerbate climate change. Therefore, there is an urgent need to understand the effects of generative AI technology growth on ecosystem material cycles and to identify sustainable AI technology development and application strategies. This study aims to quantitatively assess the resource consumption (including metals, plastics, and water), exergy use (primarily through electricity demand and fossil fuels), and greenhouse gas emissions associated with the anticipated growth of generative AI technology and its consequent impacts on ecosystem material cycles. First, we analyze resource and exergy use within the generative AI industry, encompassing AI model training and inference, data center operations, and the production of AI chips and devices. We quantify the consumption of key elements and water, alongside the exergy demand for electricity and fossil fuels. We employ a Life Cycle Assessment (LCA) methodology to evaluate the comprehensive environmental footprint of AI technology. Second, we examine the environmental impact of AI-related waste by evaluating the generation, treatment processes, and ecosystem effects of electronic waste (including AI chips, devices, and data center equipment). This analysis focuses on the environmental leakage pathways of hazardous and plastic waste and the patterns of material movement within the ecosystem, particularly with regards to soil and water pollution and biodiversity loss.
Third, we model the impact of generative AI technology on key ecosystem material cycles, such as carbon, nitrogen, and phosphorus. We estimate changes in resource use, exergy consumption, and waste generation under multiple AI technology growth scenarios. Finally, we propose strategies for the sustainable development and application of AI technologies. Based on our findings, we will formulate concrete policy and technical recommendations for developing and implementing resource-efficient and low-exergy-consuming AI technologies.
- Front Matter
2
- 10.1016/j.jaip.2023.04.034
- Jul 1, 2023
- The Journal of Allergy and Clinical Immunology: In Practice
Can an Artificial Intelligence (AI) Be an Author on a Medical Paper?
- Research Article
1
- 10.59562/semnasdies.v1i1.811
- Jul 29, 2023
- SEMINAR NASIONAL DIES NATALIS 62
Art education has undergone significant transformation in the era of digitalization. With the advancement of Artificial Intelligence (AI) technology, there is tremendous potential to enhance the art learning experience. This research aims to elucidate how AI technology can be effectively utilized to improve art education in the context of digitalization. The study aims to introduce AI technology and explore its potential applications in enhancing the art learning experience. In the context of art learning experience, AI technology can provide broader access to digital art collections, online galleries, and virtual museums. Students can explore various artworks and cultures through digital platforms, thereby enriching their understanding and appreciation of art. The research findings reveal that AI technology can offer personalized guidance to students in developing their artistic skills. Intelligent AI systems can analyze students' artworks, provide tailored suggestions, and assist in enhancing their technical and creative abilities. In conclusion, the integration of AI technology in art education holds immense potential to elevate students' learning experiences. By expanding access, providing personalized guidance, and creating interactive experiences, AI technology enriches art learning in the era of digitalization. This is expected to provide greater insights and understanding of how AI technology can be harnessed to enhance the art learning experience in the context of digitalization. By optimizing the potential of AI, art education can become more inclusive, personalized, and engaging.
- Research Article
- 10.28945/5354
- Jan 1, 2024
- Interdisciplinary Journal of Information, Knowledge, and Management
Aim/Purpose: The rise of modern artificial intelligence (AI), in particular, machine learning (ML), has provided new opportunities and directions for knowledge management (KM). A central question for the future of KM is whether it will be dominated by an automation strategy that replaces knowledge work or whether it will support a knowledge-enablement strategy that enhances knowledge work and uplifts knowledge workers. This paper addresses this question by re-examining and updating a critical argument against KM by the sociologist of science Steve Fuller (2002), who held that KM was extractive and exploitative from its origins. Background: This paper re-examines Fuller’s argument in light of current developments in artificial intelligence and knowledge management technologies. It reviews Fuller’s arguments in their original context, wherein expert systems and knowledge engineering were influential paradigms in KM, and it then considers how the arguments put forward are given new life in light of current developments in AI and efforts to incorporate AI in the KM technical stack. The paper shows that conceptions of tacit knowledge play a key role in answering the question of whether an automating or enabling strategy will dominate. It shows that a better understanding of tacit knowledge, as reflected in more recent literature, supports an enabling vision. Methodology: The paper uses a conceptual analysis methodology grounded in epistemology and knowledge studies. It reviews a set of historically important works in the field of knowledge management and identifies and analyzes their core concepts and conceptual structure. Contribution: The paper shows that KM has had a faulty conception of tacit knowledge from its origins and that this conception lends credibility to an extractive vision supportive of replacement automation strategies.
The paper then shows that recent scholarship on tacit knowledge and related forms of reasoning, in particular, abduction, provide a more theoretically robust conception of tacit knowledge that supports the centrality of human knowledge and knowledge workers against replacement automation strategies. The paper provides new insights into tacit knowledge and human reasoning vis-à-vis knowledge work. It lays the foundation for KM as a field with an independent, ethically defensible approach to technology-based business strategies that can leverage AI without becoming a merely supporting field for AI. Findings: Fuller’s argument is forceful when updated with examples from current AI technologies such as deep learning (DL) (e.g., image recognition algorithms) and large language models (LLMs) such as ChatGPT. Fuller’s view that KM presupposed a specific epistemology in which knowledge can be extracted into embodied (computerized) but disembedded (decontextualized) information applies to current forms of AI, such as machine learning, as much as it does to expert systems. Fuller’s concept of expertise is narrower than necessary for the context of KM but can be expanded to other forms of knowledge work. His account of the social dynamics of expertise as professionalism can be expanded as well and fits more plausibly in corporate contexts. The concept of tacit knowledge that has dominated the KM literature from its origins is overly simplistic and outdated. As such, it supports an extractive view of KM. More recent scholarship on tacit knowledge shows it is a complex and variegated concept. In particular, current work on tacit knowledge is developing a more theoretically robust and detailed conception of human knowledge that shows its centrality in organizations as a driver of innovation and higher-order thinking. These new understandings of tacit knowledge support a non-extractive, human enabling view of KM in relation to AI. 
Recommendations for Practitioners: Practitioners can use the findings of this paper to implement KM technologies in ways that do not neglect the importance of tacit knowledge in automation projects (a neglect that often leads to failure). They should also consider how AI technologies can enhance and fully leverage tacit knowledge and augment human knowledge. Recommendations for Researchers: Researchers can use these findings as a conceptual framework in research concerning the impact of AI on knowledge work. In particular, the distinction between replacement and enabling technologies, and the analysis of tacit knowledge as a structural concept, can be used to categorize and analyze AI technologies relative to KM research objectives. Impact on Society: The potential impact of AI on employment in the knowledge economy is a major issue in the ethics-of-AI literature and is widely recognized in the popular press as one of the pressing societal risks created by AI, particularly generative AI. This paper shows that KM, as a field of research and practice, does not need to and should not add to the risks created by automation-replacement strategies. Rather, KM has the conceptual resources to pursue a (human) knowledge-enablement approach that can stand as a viable alternative to the automation-replacement vision. Future Research: The findings of the paper suggest a number of research trajectories, including: further study of tacit knowledge and its underlying cognitive mechanisms and structures in relation to knowledge work and KM objectives; research into different types of knowledge work and knowledge processes and the roles that tacit and explicit knowledge play; research into the relation between KM and automation in terms of KM’s history and current technical developments; and research into how AI augments knowledge work and how KM can provide an enabling framework.
- Research Article
75
- 10.1177/14604582211011215
- Apr 1, 2021
- Health Informatics Journal
Results of radiology imaging studies are not typically comprehensible to patients. With the advances in artificial intelligence (AI) technology in recent years, it is expected that AI can aid patients' understanding of radiology imaging data. The aim of this study is to understand patients' perceptions and acceptance of using AI technology to interpret their radiology reports. We conducted semi-structured interviews with 13 participants to elicit reflections pertaining to the use of AI technology in radiology report interpretation. A thematic analysis approach was employed to analyze the interview data. Participants had a generally positive attitude toward using AI-based systems to comprehend their radiology reports. AI was perceived to be particularly useful in seeking actionable information, confirming the doctor's opinions, and preparing for the consultation. However, we also found various concerns related to the use of AI in this context, such as cyber-security, accuracy, and lack of empathy. Our results highlight the necessity of providing AI explanations to promote people's trust and acceptance of AI. Designers of patient-centered AI systems should employ user-centered design approaches to address patients' concerns. Such systems should also be designed to promote trust and deliver concerning health results in an empathetic manner to optimize the user experience.
- Research Article
9
- 10.1515/ijdlg-2024-0015
- Oct 28, 2024
- International Journal of Digital Law and Governance
The explosive advancement of contemporary artificial intelligence (AI) technologies, typified by ChatGPT, is steering humanity along an uncontrollable trajectory toward artificial general intelligence (AGI). Against the backdrop of a series of transformative breakthroughs, big tech companies such as OpenAI and Google have initiated an “AGI race” on a supranational level. As technological power becomes increasingly absolute, structural challenges may erupt with unprecedented velocity, potentially resulting in the disorderly expansion and even malignant development of AI technologies. To preserve the dignity and safety of human beings in a brand-new AGI epoch, it is imperative to implement regulatory guidelines that limit the applications of AGI within the confines of human ethics and rules, so as to counteract the potential downsides. To promote the benevolent evolution of AGI, the principles of Humanism should be underscored and the connotation of Digital Humanism should be further enriched. Correspondingly, the current regulatory paradigm for generative AI may also be overhauled under the tenet of Digital Humanism to adapt to the quantum leaps and subversive shifts produced by AGI in the future. Positioned at the nexus of legal studies, computer science, and moral philosophy, this study therefore charts a course for a synthetic regulatory framework for AGI under Digital Humanism.
- Research Article
- 10.15226/2474-9257/5/1/00147
- Jan 1, 2020
- Journal of Computer Science Applications and Information Technology
Technology based on artificial intelligence (AI) is a revolutionary force that is changing economies, societies, and industries all over the world. AI, which has its roots in computer science and cognitive psychology, comprises a wide range of tools and methods designed to make machines capable of performing activities that have historically required human intellect. This abstract examines the many facets of AI technology, including its fundamentals, uses, difficulties, and ramifications. AI technology comprises several subfields, such as robotics, computer vision, natural language processing, machine learning, and expert systems. Machine learning techniques in particular have propelled incredible progress by allowing computers to learn from data and make judgments or predictions without the need for explicit programming. Natural language processing allows machines to comprehend, interpret, and produce human language, hence facilitating human-computer interaction. Computer vision technology enables machines to see, analyze, and interpret visual data from the real world. Applications of AI technology may be found in a wide range of industries, including manufacturing, healthcare, finance, transportation, agriculture, education, and entertainment. AI-powered solutions help in drug discovery, medical imaging analysis, diagnosis, and customized therapy in the healthcare industry. AI algorithms are used in finance to power automated trading, fraud detection, risk assessment, and customer support. In transportation, AI enables predictive maintenance, traffic management, and driverless cars. In manufacturing, AI enhances supply chain management, quality assurance, and production processes. AI technology has the potential to revolutionize many industries, but it also comes with dangers and problems. 
These include privacy concerns, security hazards, ethical dilemmas, issues with prejudice and fairness, and effects on society and employment. Responsible AI methods, legal frameworks, multidisciplinary cooperation, and ethical standards are all necessary to meet these challenges. Future prospects for AI technology development include the ability to solve challenging problems, spur creativity, increase productivity, and improve quality of life. But to fully utilize AI, one must take a comprehensive strategy that strikes a balance between technological advancement and ethical issues, human values, and social well-being. In summary, artificial intelligence (AI) technology is at the vanguard of innovation, presenting unprecedented possibilities to transform whole sectors, spur economic expansion, and tackle global issues. Through the promotion of collaboration, transparency, and ethical stewardship, AI has the ability to usher in a future of greater human-machine collaboration, innovation, and prosperity. The study also ranks artificial intelligence approaches using the TOPSIS method: Interpretable Models received the first rank, whereas Ethical AI received the lowest rank. Keywords: Explainable AI (XAI), Interpretable Models, Ethical AI, Responsible AI, Robustness and Adversarial Defense, Continual Learning, Federated Learning, Human-Centric AI, AI Governance and Policy
- Research Article
1
- 10.1108/tg-08-2025-0240
- Dec 4, 2025
- Transforming Government: People, Process and Policy
Purpose This study aims to critically examine the socio-technical, economic and governance challenges emerging at the intersection of Generative artificial intelligence (AI) and Urban AI. By foregrounding the metaphor of “the moon and the ghetto” (Nelson, 1977, 2011), the issue invites contributions that interrogate the gap between technological capability and institutional justice. The purpose is to foster a multidisciplinary dialogue–spanning applied economics, public policy, AI ethics and urban governance – that can inform trustworthy, inclusive and democratically grounded AI practices. Contributors are encouraged to explore not just what GenAI can do, but for whom, how and with what consequences. Design/methodology/approach This study draws upon interdisciplinary literature from public policy, innovation studies, digital governance and urban sociology to frame the emerging governance challenges of Generative AI and Urban AI. It builds a conceptual foundation by synthesizing insights from comparative city case studies, innovation systems theory and normative policy frameworks. The approach is interpretive and exploratory, aiming to situate AI technologies within broader institutional, geopolitical and socio-economic contexts. The study invites contributions that adopt empirical, theoretical or practice-based methodologies addressing the governance of GenAI in cities and regions. Findings This study identifies a critical gap between the rapid technological advancements in Generative AI and the institutional readiness of public governance systems – particularly in urban contexts. It finds that current policy frameworks often prioritize efficiency and innovationism over democratic legitimacy, civic trust and inclusive design. Drawing on comparative global city experiences, it highlights the risk of reinforcing power asymmetries without robust accountability mechanisms. 
The analysis suggests that trustworthy AI is not a purely technical attribute but a political and institutional achievement, requiring participatory governance architectures and innovation systems grounded in public value and civic engagement. Research limitations/implications As an editorial introduction, this study does not present original empirical data but synthesizes key theoretical frameworks, case studies and policy debates to guide future research. Its analytical scope is conceptual and comparative, offering a foundation for submissions that further investigate Generative and Urban AI through empirical, normative and practice-based lenses. The limitations lie in its broad coverage and reliance on secondary sources. Nonetheless, it provides an agenda-setting contribution by highlighting the urgent need for interdisciplinary research into how AI reshapes public governance, institutional legitimacy and urban democratic futures. Practical implications This editorial offers a structured framework for policymakers, urban planners, technologists and public administrators to critically assess the governance of Generative and Urban AI systems. By highlighting international case studies and conceptual tools – such as public algorithmic infrastructures, civic trust frameworks and anticipatory governance – the article underscores the importance of institutional design, regulatory foresight and civic engagement. It invites practitioners to shift from techno-solutionist approaches toward inclusive, democratic and place-based AI governance. The reflections aim to support the development of trustworthy AI policies that are grounded in legitimacy, accountability and societal needs, particularly in urban and regional contexts. Social implications The editorial underscores that Generative and Urban AI systems are not socially neutral but carry significant implications for equity, representation and democratic legitimacy. 
These technologies risk reinforcing existing social hierarchies and systemic biases if not governed inclusively. This study calls for reimagining trust not as a technical feature but as a relational, contested dynamic between institutions and citizens. It encourages submissions that examine how AI reshapes the urban social contract, affects marginalized communities and challenges existing civic infrastructures. The goal is to promote AI governance frameworks that are pluralistic, just and reflective of diverse societal values and lived experiences. Originality/value This editorial offers a timely and conceptually grounded intervention into the emerging field of Urban AI and Generative AI governance. By framing the challenges through Richard R. Nelson’s metaphor of The Moon and the Ghetto, this study foregrounds the gap between technical capabilities and enduring societal injustices. The contribution lies in its interdisciplinary synthesis – bridging innovation systems, AI ethics, public policy and urban governance. It introduces a critical framework for assessing “trustworthy AI” not as a technical goal but as a democratic achievement and encourages research that is policy-relevant, equity-oriented and attuned to the institutional realities of AI in cities.
- Front Matter
- 10.1088/1742-6596/2078/1/011001
- Nov 1, 2021
- Journal of Physics: Conference Series
We are glad to announce that the 2021 3rd International Conference on Artificial Intelligence Technologies and Applications (ICAITA 2021) was successfully held on September 10-12, 2021. In light of worldwide travel restrictions and the impact of COVID-19, ICAITA 2021 was carried out as a virtual conference to avoid in-person gatherings. Because most participants remained highly enthusiastic about participating, we chose to hold ICAITA 2021 via an online platform according to the original schedule instead of postponing it. ICAITA 2021 aims to bring together innovative academics and industrial experts in the field of Artificial Intelligence Technologies and Applications in a common forum. The primary goal of the conference is to promote research and development activities in Artificial Intelligence Technologies and Applications; another goal is to promote the exchange of scientific information among researchers, developers, engineers, students, and practitioners working all around the world. The conference will be held every year, making it an ideal platform for people to share views and experiences in Artificial Intelligence Technologies and Applications and related areas. This scientific event brought together more than 100 national and international researchers in artificial intelligence technologies and applications. The conference program was divided into three sessions: oral presentations, keynote speeches, and online Q&A discussion. In the first part, scholars whose submissions were selected as excellent papers were each given about 5-10 minutes to deliver their oral presentations. In the second part, keynote speakers were each allocated 30-45 minutes for their speeches. We were pleased to invite three distinguished experts to present insightful speeches. Our first keynote speaker was Prof. Yau Kok Lim, from Sunway University, Malaysia. 
His research interests include applied artificial intelligence, 5G networks, cognitive radio networks, routing and clustering, trust and reputation, and intelligent transportation systems. Next we had Prof. Peter Sincak, from the Technical University of Kosice, Slovakia, whose research covers Artificial Intelligence and Intelligent Systems. Lastly, we were glad to invite Chinthaka Premachandra, from the Shibaura Institute of Technology, Japan, whose research interests include artificial intelligence, image processing, and robotics. In the last part of the conference, all participants were invited to join a WeChat group to discuss and explore academic issues after the presentations. The online discussion lasted about 30-60 minutes. The first two parts were conducted via the online collaboration tool Zoom, while the online discussion was carried out through the instant messaging tool WeChat. The online platform enabled all participants to join this grand academic event from their own homes. We are glad to share that we still received many submissions during this special period. Hence, we selected a set of high-quality papers and, after rigorous review, compiled them into the proceedings. These papers cover the following topics, among others: Artificial Intelligence Applications & Technologies, Computing and the Mind, and Foundations of Artificial Intelligence. All papers went through a rigorous review process to meet international publication standards. Lastly, we would like to express our sincere gratitude to the Chairman, the distinguished keynote speakers, and all the participants. We also want to thank the publisher for publishing the proceedings. May readers enjoy the proceedings and gain some valuable knowledge from them. 
We look forward to welcoming even more experts and scholars from all over the world to this international event next year. The Committee of ICAITA 2021. The lists of the Committee Members, General Conference Chair, Technical Program Committee Chair, Academic Committee Chair, Technical Program Committee Members, and Academic Committee Members are available in this PDF.