"I look at it as the king of knowledge": How Blind People Use and Understand Generative AI Tools
The proliferation of Generative Artificial Intelligence (GenAI) tools has brought a critical shift in how people approach information retrieval and content creation in diverse contexts. Yet, we have limited understanding of how blind people use and make sense of GenAI systems. To bridge this gap, we report findings from interviews with 19 blind individuals who incorporate mainstream GenAI tools like ChatGPT and Be My AI in their everyday practices. Our findings reveal how blind users navigate accessibility issues, inaccuracies, hallucinations, and idiosyncrasies associated with GenAI and develop interesting (but often flawed) mental models of how these tools work. We discuss key considerations for rethinking access and information verification in GenAI tools, unpacking erroneous mental models among blind users, and reconciling harms and benefits of GenAI from an accessibility perspective.
- Research Article
- 10.3390/publications13020014
- Mar 25, 2025
- Publications
This study evaluates the efficiency and accuracy of Generative AI (GAI) tools, specifically ChatGPT and Gemini, in comparison with traditional academic databases for industrial engineering research. It was conducted in two phases. First, a survey was administered to 101 students to assess their familiarity with GAIs and the most commonly used tools in their academic field. Second, an assessment of the quality of the information provided by GAIs was carried out, in which 11 industrial engineering professors participated as evaluators. The study focuses on the query process, response times, and information accuracy, using a structured methodology that includes predefined prompts, expert validation, and statistical analysis. A comparative assessment was conducted through standardized search workflows developed using the Bizagi tool, ensuring consistency in the evaluation of both approaches. Results demonstrate that GAIs significantly reduce query response times compared to conventional databases, although the accuracy and completeness of responses require careful validation. A Chi-Square analysis was performed to statistically assess accuracy differences, revealing no significant disparities between the two AI tools. While GAIs offer efficiency advantages, conventional databases remain essential for in-depth literature searches requiring high levels of precision. These findings highlight the potential and limitations of GAIs in academic research, providing insights into their optimal application in industrial engineering education.
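The Chi-Square comparison described above can be illustrated with a minimal sketch. The contingency counts below are invented for demonstration only and are not the study's data; the statistic here falls well below the df = 1 critical value of 3.841, matching the paper's finding of no significant disparity between the two tools.

```python
# Hypothetical illustration: a Chi-Square test of independence on a 2x2
# contingency table, as used to compare the accuracy of two GAI tools.
# All counts are invented for demonstration purposes.

def chi_square_2x2(table):
    """Return the Chi-Square statistic for a 2x2 contingency table.

    table = [[a, b], [c, d]] where rows are tools and columns are
    correct / incorrect response counts.
    """
    (a, b), (c, d) = table
    n = a + b + c + d
    row_totals = [a + b, c + d]
    col_totals = [a + c, b + d]
    observed = [[a, b], [c, d]]
    chi2 = 0.0
    for i in range(2):
        for j in range(2):
            # Expected count under the independence hypothesis
            expected = row_totals[i] * col_totals[j] / n
            chi2 += (observed[i][j] - expected) ** 2 / expected
    return chi2

# Invented counts: 40/10 correct-incorrect for tool A, 37/13 for tool B
stat = chi_square_2x2([[40, 10], [37, 13]])
print(round(stat, 3))  # → 0.508, below the 3.841 critical value at df = 1
```

In practice one would use `scipy.stats.chi2_contingency`, which also applies Yates' continuity correction for 2x2 tables and returns a p-value directly.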
- Research Article
- 10.3126/eltp.v9i1-2.68716
- Aug 13, 2024
- English Language Teaching Perspectives
Generative AI (GenAI) tools such as ChatGPT, Gemini, and Copilot have created concerns in academia, particularly since the launch of ChatGPT. GenAI and AI have become buzzwords, and academics are discussing their possible positive and negative impacts on education and research. Recently, studies have been conducted on the influence of GenAI tools in education and research. Against this background, and grounded in Vygotsky's Zone of Proximal Development (ZPD) as a theoretical lens, this study explores how English language teachers integrate GenAI tools to enhance teaching and learning. In particular, it examines the integration of GenAI tools in English language teaching and learning, focusing on teaching efficiency, student engagement, personalized learning, and writing skills, using an exploratory research method grounded in semi-structured interviews. The findings affirmed the positive impact of GenAI tools on teaching efficiency, students' engagement, and writing skills. The results indicated that GenAI positively influences teaching efficiency and student engagement in learning. The implications of this research highlight the potential of GenAI tools to create a more intelligent and personalized learning environment for English language teaching that benefits both educators and learners.
- Research Article
- 10.1111/bjet.13613
- Jul 29, 2025
- British Journal of Educational Technology
There is a heightened concern over undergraduate students being over‐reliant on Generative AI and using it recklessly. Reliance behaviours describe the frequencies and ways that people use AI tools for tasks such as problem‐solving, influenced by individual factors such as trust and AI literacy. One way to conceptualise reliance is that reliance behaviours are affected by the extent to which learners consciously evaluate the relative performance of AI and humans, suggesting the potential impacts of critical thinking on reliance. This study, thus, empirically investigates the relationship between critical thinking and reliance behaviours. Critical thinking includes disposition and skills. However, limited empirical studies have investigated how critical thinking influences learners' reliance behaviours when solving problems with Generative AI. Hence, the current study conducted path analyses to investigate how critical thinking is associated with reliance behaviours and how it mediates the effect of individual factors on reliance behaviours. We collected 808 survey responses on critical thinking disposition and skills, reliance behaviours (a self‐developed and validated scale, including reflective use, cautious use, thoughtless use, and collaborative use), trust towards AI, and AI literacy from undergraduates after a problem‐solving task with Generative AI. The results indicate that (1) critical thinking is positively associated with the collaborative, reflective, and cautious use of Generative AI, suggesting that these three types of use of Generative AI could be considered desirable behaviours in human–AI problem‐solving; (2) trust positively predicts thoughtless use; (3) critical thinking can offset the influence of trust on collaborative, reflective and cautious use; and (4) critical thinking can amplify the influence of AI literacy on reflective, cautious and collaborative use. 
This study contributes new insights into understanding the role of critical thinking in fostering desirable reliance behaviours, including reflective, cautious, and collaborative use, and provides implications for future interventions when applying Generative AI for problem-solving.
Practitioner notes
What is already known about this topic
- Generative AI tools can potentially enhance problem-based learning (PBL) by supporting brainstorming and solution refinement.
- Reliance behaviours in human-AI collaboration are influenced by factors such as trust in AI and AI literacy.
- Strategy-graded reliance emphasizes the reasoning process leading to reliance behaviours, focusing on thoughtful engagement with AI tools, and this cognitive process can be captured by critical thinking.
What this paper adds
- Critical thinking is positively associated with the reflective, collaborative, and cautious use of Generative AI.
- Critical thinking mediates the effects of trust and AI literacy on reliance behaviours, amplifying reflective, cautious, and collaborative use while mitigating the thoughtless use of Generative AI.
- The study introduces a nuanced understanding of reliance behaviours by applying a strategy-graded framework, emphasising cognitive engagement rather than a purely outcome-based understanding of reliance behaviours.
Implications for practice and/or policy
- Educational interventions could consider critical thinking when integrating AI tools in problem-solving contexts.
- Students' trust in AI needs to be balanced with critical thinking skills to reduce overreliance and enhance thoughtful engagement with AI tools.
- Research Article
- 10.1002/jls.70014
- Aug 28, 2025
- Journal of Leadership Studies
As generative AI (GenAI) tools rapidly evolve and become more accessible, their application in leadership education and research demands critical reflection and experimentation. The current practitioner-focused study presents two use cases exploring how GenAI tools, including retrieval-augmented generation (RAG) platforms like NotebookLM and large language models like ChatGPT and Claude, can support qualitative data analysis in leadership contexts. The first case analyzes open-ended responses from 237 participants about their "best" and "worst" bosses, while the second examines semi-structured interviews from a phenomenological study of leadership educators. These methods were piloted with graduate students through a three-way comparison methodology. Students conducted AI-assisted analysis, compared findings with expert human coding, and examined peer variations in analytical approaches. The comparative analysis reveals key differences across AI tools regarding transparency, analytic depth, usability, and ethical implications, highlighting both affordances and limitations, including variable output quality, learning curves, and the need for methodological rigor. Student outcomes demonstrate that AI tools can effectively support various phases of qualitative methodology while requiring human oversight for interpretive depth, bias detection, and validation of outputs. GenAI can be a helpful analytical partner in leadership research when integrated thoughtfully through pedagogical frameworks emphasizing human-AI collaboration rather than replacement, preparing emerging researchers to leverage technological capabilities while maintaining, and at times enhancing, the interpretive richness essential to qualitative inquiry in leadership studies.
- Conference Article
- 10.54941/ahfe1005930
- Jan 1, 2025
- AHFE international
Generative AI (GAI) is reshaping the future of work in architecture by introducing innovative ways for humans to interact with technology, transforming the design process. In education, GAI offers students immersive environments for iterative exploration, enabling them to visualize, refine, and present design concepts more effectively. This paper investigates how GAI, through a structured framework, can enhance the learning of design tasks in elaborating interior design proposals and preparing students for the evolving professional landscape. Drawing on the platform Midjourney, students explored concepts, material moodboards, and spatial compositions, simulating professional scenarios. Each student was assigned a real client and tasked with developing tailored design solutions, guided by client and tutor feedback. This approach demonstrates how GAI supports the development of future-oriented skills, directly linking education to the technological shifts in professional practice (Araya, 2019). The study adopts a practice-based methodology, documenting the outcomes of an interior design workshop where students employed GAI tools to develop client-specific proposals. Students engaged in role-playing, meeting their assigned clients face-to-face to gather requirements, acting as junior architects. They analyzed client feedback to inform the design phase, after which they applied a structured framework for using GAI more effectively to iteratively refine their proposals. By generating AI-assisted visualizations of spatial configurations and materials, students developed final design solutions that aligned with client expectations. Data from GAI iterations, client feedback, and tutor evaluations were used to assess how effectively AI tools contributed to producing professional-quality designs (Schwartz et al., 2022).
Two research questions frame this investigation: (1) How does Generative AI enhance students' ability to create client-specific interior design solutions, from concept generation to final visualization, within a structured educational framework? (2) How does the integration of GAI tools impact the teaching of iterative design processes in architecture, particularly in preparing students for the future of work in the profession? The findings reveal that GAI significantly improved students' design outcomes by enabling them to visualize and refine their proposals based on real-world scenarios. GAI facilitated the exploration of current trends and supported the creation of material moodboards and space visualizations. The iterative nature of AI tools allowed students to better grasp the relationships between spatial configurations, design choices, and client needs. Their final proposals, incorporating AI-generated outputs, were praised for their conceptual clarity and technical precision, reflecting how AI-driven processes can transform traditional workflows (Burry, 2016). This study illustrates the transformative potential of GAI in architectural education, particularly in fostering dynamic human-technology interactions. By leveraging AI, students maintained control over outputs while transforming abstract concepts into client-ready designs. Moreover, the iterative feedback loop enabled by GAI promoted a more adaptive and responsive learning process, giving students real-time insights into their design decisions. These insights reflect broader changes in the future of work, where AI-driven tools will become integral to professional practice. Future research could explore expanding GAI’s role in more complex design stages, such as schematic design and development, building on the benefits observed in this study.
- Research Article
- 10.3126/kjmr.v3i3.87215
- Dec 12, 2025
- Kalika Journal of Multidisciplinary Research
This systematic review investigates the ethical challenges and strategic responses surrounding the use of Generative AI (GenAI) and related tools in academic writing within global higher education. Following the PRISMA 2020 framework, a rigorous search and screening process across academic databases identified 18 peer-reviewed articles published between 2020 and 2025, which were subjected to in-depth thematic analysis. The findings reveal four major ethical concerns: threats to academic integrity through plagiarism, authorship misrepresentation, and diminished originality; issues of bias and fairness arising from algorithmic limitations and unequal access to technology; limited transparency due to nondisclosure of AI use and the absence of clear citation standards; and risks to data privacy linked to the use of student and proprietary information. In response, the literature highlights strategies that include the development of institutional ethical guidelines and policies, enhanced digital literacy and training for faculty and students, improved design and regulation of AI tools with embedded ethical safeguards, and the promotion of transparent human–AI collaboration guided by human oversight. This review demonstrates the significance of adopting a comprehensive, multi-layered approach rather than relying on isolated interventions. For educators, it underscores the need to cultivate critical digital literacy skills; for policymakers, it emphasizes the importance of enforceable and context-sensitive frameworks; and for researchers, it points to future inquiry on the ethical–technological nexus. Collectively, the findings provide actionable insights to ensure that GenAI’s integration into academic writing supports integrity, fairness, and trust in higher education.
- Research Article
- 10.47760/cognizance.2024.v04i10.001
- Oct 30, 2024
- Cognizance Journal of Multidisciplinary Studies
The emergence of Generative Artificial Intelligence has marked a major turning point in many areas, including education and the creative industries. This paper seeks to understand the deep impact that Generative AI is likely to have on the learning and creative processes of Generation Z (Gen Z) students, who were born into digital culture. The work examines the possibilities and challenges that Gen Z's collaboration with Generative AI brings to the future of learning and creativity. The paper is timely because it offers insight into the ongoing changes in education and creativity amid the accelerating growth of technology. Educational stakeholders, policymakers, and business executives need to understand the relationship between members of Generation Z and Generative AI in order to derive value from existing and upcoming technologies while addressing possible negative impacts. The purpose of this study was to explore the nature and uses of Generative AI and its effects on the learning and creativity of Gen Z, and to identify the advantages, disadvantages, opportunities, risks, and stakeholder concerns associated with integrating this technology into teaching, learning, and creative processes. To achieve these objectives, the research combined a literature review with documentary research. The materials included academic publications, industry reports, books, and other credible internet sources on Generative AI and its impact on the education and creativity of Gen Z. The document analysis covered policy papers, educational technology reports, case studies, and white papers from academic and professional bodies, as well as industries that use Generative AI. Several insights show that using Generative AI can positively impact learners' experiences, engagement, and creativity.
However, there was controversy over excessive use of AI, with claims that it may weaken people's critical thinking. Major concerns included ethical issues such as bias in algorithms and the right to data privacy. The findings of this research therefore point to the need for a balanced approach to the use of Generative AI in education and the creative industries, underlining that human creativity and critical thinking ought to be sustained while AI tools are used. Proposals include teaching critical thinking alongside AI use, fostering ethical AI awareness, expanding AI education, curating appropriate and unbiased AI datasets, establishing strong AI policies, and promoting positive, constructive, and creative uses of AI. Implications for future studies include examining changes in learning outcomes over time where Generative AI has been incorporated; understanding how the technology influences different learning styles and needs; addressing ethical and privacy concerns; meeting educators' professional development requirements in relation to Generative AI; and comparing the use of information and communication technology for learning across cultures. Relatedly, further studies on the effectiveness of AI in approaches such as collaborative learning, its potential for preparing learners for employment, and its effects on student psychology would help inform the future advancement of Generative AI in schools and in particular creative areas.
- Research Article
- 10.65106/apubs.2025.2774
- Nov 28, 2025
- ASCILITE Publications
The rapid rise of Generative AI (GenAI) tools is reshaping conversations about assessment and feedback in higher education. While much institutional attention focuses on detection, compliance, and academic integrity (Cotton et al., 2024), this presentation shifts the lens to educators and how they are actually using GenAI in assessment practice. We present findings from a grant-funded initiative at UNSW that explores educator-led innovation through a Postcards of Practice approach. The Postcards of Practice are one-page, practice-based narratives where educators document their use of GenAI tools. These postcards highlight applications including formative feedback generation, student prompting literacy, assessment redesign, and co-creation with AI. They reveal how educators are experimenting with GenAI to support student learning while navigating ethical concerns, transparency, and pedagogical alignment. Our study uses a qualitative interpretive methodology, combining thematic analysis of the postcards with follow-up interviews. The analysis draws on theoretical frameworks including feedback literacy (Carless & Boud, 2018), dialogic assessment (Nicol, 2010), and new paradigm feedback design (Winstone & Carless, 2020). We also apply institutional and national GenAI guidelines (Liu & Bridgeman, 2023; Perkins, 2023) to surface shared values such as authenticity, inclusivity, and responsible innovation that guide educators’ decisions. The aim of this study is to explore how educators are experimenting with GenAI in assessment and feedback, and to capture their emerging practices and reflections through the Postcards of Practice initiative. The central research question guiding this work is: How are educators integrating GenAI into assessment and feedback, and what opportunities, challenges, and support needs arise from these practices? 
This work advances Technology Enhanced Learning (TEL) by providing empirical insights into how GenAI is actually integrated at the coalface of teaching. Educators describe how GenAI supports more frequent, personalised feedback and builds student agency in learning. At the same time, they raise concerns about over-reliance, AI hallucination, and the need for clear pedagogical scaffolding. These reflections point to the need for professional development that is discipline-sensitive, responsive, and grounded in practice. The postcard approach also functions as a professional learning intervention. It prompts reflection, encourages cross-disciplinary dialogue, and helps build a local community of practice around GenAI use. Through this model, we demonstrate an innovative and scalable method of capturing and supporting TEL innovation in real time. The findings suggest GenAI is prompting a rethinking of assessment: from summative, compliance-driven models to more transparent, formative, and student-centred designs. Educators begin to embed feedback literacy, ethical AI use, and critical prompting into their teaching, with clear implications for program-level assessment and graduate capability development. To strengthen clarity, we propose a concise diagram mapping the emerging practices captured in the postcards against the theoretical frameworks of feedback literacy, dialogic assessment, and new paradigm feedback design. This visual representation illustrates how practical insights align with, extend, or challenge these frameworks, making the study’s contribution accessible across diverse tertiary contexts. This proposal offers exemplary innovation in TEL by foregrounding bottom-up, practice-led experimentation with GenAI. It is grounded in strong theoretical frameworks and applicable across diverse tertiary contexts. The Pecha Kucha format will present key insights through rich visual storytelling, including excerpts from the postcards themselves. 
We conclude by proposing future directions for research and institutional strategy, including how to embed GenAI into assessment ecosystems in ways that enhance learning, uphold integrity, and empower educators to lead digital transformation from within.
- Research Article
- 10.3390/technologies13020077
- Feb 12, 2025
- Technologies
This study introduces an Artificial Intelligence framework based on the Bidirectional Encoder Representations from Transformers (BERT) deep learning model, trained on a dataset from 2000–2023. The AI tool categorizes articles into six classes: Contactology, Low Vision, Refractive Surgery, Pediatrics, Myopia, and Dry Eye, with supervised learning enhancing classification accuracy, achieving F1-Scores averaging 86.4%, AUC at 0.98, Precision at 87%, and Accuracy at 86.8% via one-shot training, while epoch training showed 85.9% Accuracy and 92.8% Precision. Utilizing the AI model outputs, the Autoregressive Integrated Moving Average (ARIMA) model provides forecasts for all classes through 2030, predicting decreases in research interest for Contactology, Low Vision, and Refractive Surgery but increases for Myopia and Dry Eye due to rising prevalence and lifestyle changes. Stability is expected in pediatric research, highlighting its focus on early detection and intervention. This study demonstrates the effectiveness of AI in enhancing diagnostic precision and strategic planning in optometry, with potential implications for broader clinical applications and improved accessibility to eye care.
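The scores reported above (Precision, F1-Score, Accuracy) follow from standard confusion-matrix arithmetic. A minimal sketch with invented per-class counts, not the paper's data:

```python
# Illustrative sketch (not the study's code): computing precision, recall,
# and F1 for one class from hypothetical confusion-matrix counts.

def precision_recall_f1(tp, fp, fn):
    """Precision, recall, and F1 from true positives, false positives,
    and false negatives for a single class."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    # F1 is the harmonic mean of precision and recall
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Invented counts for a single hypothetical class
p, r, f1 = precision_recall_f1(tp=87, fp=13, fn=14)
print(f"precision={p:.3f} recall={r:.3f} f1={f1:.3f}")
# → precision=0.870 recall=0.861 f1=0.866
```

For multi-class results like the six categories above, these per-class values are typically macro-averaged, which is presumably how the reported averages were obtained.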
- Research Article
- 10.1515/omgc-2023-0023
- Jun 19, 2023
- Online Media and Global Communication
Study purpose: This study explores the usage of generative AI tools by journalists in sub-Saharan Africa, with a focus on issues of misinformation, plagiarism, stereotypes, and the unrepresentative nature of online databases. The research places this inquiry within broader debates over whether the Global South can effectively and fairly use AI tools.
Design/methodology/approach: This study involved interviews with journalists from five sub-Saharan African countries, namely the Democratic Republic of the Congo (DRC), Kenya, Tanzania, Uganda, and Zambia. The objective was to ascertain how journalists in sub-Saharan Africa are utilizing ChatGPT. This study is a component of an ongoing project on AI that commenced on September 19, 2022, shortly after receiving IRB approval. The ChatGPT project was initiated in January 2023 after discovering that our participants were already employing the chatbot.
Findings: The study highlights that generative AI like ChatGPT operates on a limited and non-representative African corpus, making it selective about what is considered civil and uncivil language, thus limiting its effectiveness in the region. However, the study also suggests that, in the absence of representative corpora, generative AI tools like ChatGPT present an opportunity for effective journalism practice in that journalists cannot completely rely on the tools.
Practical implications: The study emphasizes the need for human agency to provide relevant information to the tool, thus contributing to a global database, and to consider diverse data sources when designing AI tools to minimize biases and stereotypes.
Social implications: AI tools have both positive and negative effects on journalism in developing countries, and there is a need to promote the responsible and ethical use of AI tools in journalism and beyond.
Originality/value: The study's original value lies in shedding light on the challenges and opportunities associated with AI in journalism, promoting postcolonial thinking, and emphasizing the importance of diverse data sources and human agency in the development and use of AI tools.
- Research Article
- 10.35631/ijemp.725017
- Jun 30, 2024
- International Journal of Entrepreneurship and Management Practices
L2 learners in higher education often face difficulties in writing business emails in English, which hinders effective workplace communication and academic success. Higher education institutions should therefore educate their learners in business email literacy. This study analysed how thirty-one Malay business degree students at a public university in Malaysia utilized generative AI tools for composing business emails. The participants presented their reflections on the application of generative AI in business email writing as part of a group class assignment for English for Business Communication. The research approach comprised a qualitative document analysis of the participants' PowerPoint presentation slides, and thematic analysis was used to analyse the data. The findings reveal three themes: Theme 1, preferred generative AI; Theme 2, optimising generative AI prompts for business email writing; and Theme 3, ethical usage of AI. Theme 1 has three sub-themes: user friendliness, relevant business contexts, and quality of generated texts. The results showed that the groups chose different AI tools based on personal preference. Theme 2 has two sub-themes: prompts for external email and prompts for internal email. The participants used generative AI tools for idea expansion and paraphrasing, and these L2 learners also wrote specific prompts for different types of email. The two sub-themes of Theme 3 are writing assistance and best practices for the ethical usage of generative AI. The participants stressed the significance of understanding plagiarism and of using generative AI tools effectively. Learners should be educated on intellectual property and ethical AI tool usage. Higher education institutions should integrate these tools into their courses to enhance business email writing skills and prepare students for AI-driven workplaces, fostering ethical and effective usage.
- Conference Article
- 10.28945/5536
- Jan 1, 2025
Aim/Purpose: To address the gap in students' effective use of generative AI tools, this paper presents a framework to introduce university students to the principles and practices of prompt engineering, the art and science of crafting precise and purposeful inputs to guide LLMs in generating accurate and useful outputs. This paper aims to equip students with strategies to interact meaningfully with AI chatbots for academic success.
Background: Generative AI tools, like ChatGPT, are widely adopted in educational settings, yet many students lack the skills to harness their full potential. This paper introduces prompt engineering as a critical competency for students to develop both technical proficiency and critical thinking.
Methodology: The paper provides a structured framework for teaching prompt engineering in university courses. It draws on existing literature, practical applications, and pedagogical strategies to guide educators in integrating generative AI effectively into their university courses.
Contribution: This paper contributes to the body of knowledge by presenting a comprehensive framework for teaching prompt engineering. It highlights prompt engineering's role in enhancing AI literacy and preparing students for technology-driven academic and professional environments.
Findings: Prompt engineering enhances students' ability to generate precise and relevant outputs from AI tools by supporting student development of communication strategies tailored to large language models. This guide introduces essential concepts and skills that facilitate effective interaction with AI chatbots. Structured instruction in prompt engineering helps to foster critical thinking, problem-solving, and reflective interaction, key competencies for navigating an AI-driven environment. Additionally, integrating prompt engineering into education improves AI literacy, enabling students to tackle complex tasks and apply AI tools effectively across various disciplines.
Recommendations for Practitioners: Educators should integrate structured prompt engineering instruction into their courses, emphasizing its interdisciplinary applications. Scaffolded learning will help students develop competency in applying prompt engineering techniques and strategies.
Recommendations for Researchers: Future studies should explore the long-term impact of prompt engineering instruction on academic performance and professional readiness. Additionally, research should examine its effectiveness across diverse disciplines.
Impact on Society: Teaching prompt engineering equips students with essential AI literacy skills, fostering responsible and innovative use of AI in academic, professional, and societal contexts. This contributes to a workforce better prepared for the challenges of the AI era.
Future Research: Further research should examine the integration of multimodal AI tools alongside prompt engineering to assess how combined approaches can enhance learning outcomes. In addition, studies should investigate the effectiveness of various instructional designs to identify best practices for promoting student engagement and skill development. Exploring discipline-specific and pedagogically meaningful student use cases will also be essential to guiding the thoughtful integration of AI tools across diverse educational contexts.
- Research Article
- 10.1353/lib.2025.a961200
- Feb 1, 2025
- Library Trends
Abstract: This study examines how librarians are using third-party generative AI (GAI) tools such as ChatGPT to aid their daily professional tasks. An online survey of 272 librarians found that text-generating AI tools were the most popular. The majority of respondents felt that GAI tools were effective in improving productivity. Key challenges included ensuring content accuracy and designing effective prompts. Top suggestions for better preparing librarians to use GAI include practical training on using GAI, establishing AI policies and guidelines, fostering collaboration and communities of practice, and providing access to useful GAI resources. The study highlights popular use cases that can inform professional development, while underscoring the need for hands-on training, institutional policies, opportunities to experiment with GAI, and access to enhanced tools. As GAI evolves, supporting librarians’ adoption will be crucial for harnessing its potential benefits.
- Research Article
27
- 10.1007/s11930-024-00397-y
- Dec 4, 2024
- Current Sexual Health Reports
Purpose of Review: Millions of people now use generative artificial intelligence (GenAI) tools in their daily lives for a variety of purposes, including sexual ones. This narrative literature review provides the first scoping overview of current research on generative AI use in the context of sexual health and behaviors.
Recent Findings: The review includes 88 peer-reviewed English language publications from 2020 to 2024 that report on 106 studies and address four main areas of AI use in sexual health and behaviors among the general population: (1) People use AI tools such as ChatGPT to obtain sexual information and education. We identified k = 14 publications that evaluated the quality of AI-generated sexual health information. They found high accuracy and completeness. (2) People use AI tools such as ChatGPT and dedicated counseling/therapy chatbots to solve their sexual and relationship problems. We identified k = 16 publications providing empirical results on therapists’ and clients’ perspectives and AI tools’ therapeutic capabilities with mixed but overall promising results. (3) People use AI tools such as companion and adult chatbots (e.g., Replika) to experience sexual and romantic intimacy. We identified k = 22 publications in this area that confirm sexual and romantic gratifications of AI conversational agents, but also point to risks such as emotional dependence. (4) People use image- and video-generating AI tools to produce pornography with different sexual and non-sexual motivations. We found k = 36 studies on AI pornography that primarily address the production, uses, and consequences of – as well as the countermeasures against – non-consensual deepfake pornography. This sort of content predominantly victimizes women and girls whose faces are swapped into pornographic material and circulated without their consent. Research on ethical AI pornography is largely missing.
Summary: Generative AI tools present new risks and opportunities for human sexuality and sexual health. More research is needed to better understand the intersection of GenAI and sexuality in order to a) help people navigate their sexual GenAI experiences, b) guide sex educators, counselors, and therapists on how to address and incorporate AI tools into their professional work, c) advise AI developers on how to design tools that avoid harm, d) enlighten policymakers on how to regulate AI for the sake of sexual health, and e) inform journalists and knowledge workers on how to report about AI and sexuality in an evidence-based manner.
- Research Article
3
- 10.14742/apubs.2023.514
- Nov 28, 2023
- ASCILITE Publications
Educators are wrestling with the changes wrought by generative AI (GenAI), particularly the widespread adoption of ChatGPT. This paper introduces creative and collaborative sensemaking with GenAI as an alternative form of academic and professional development to spark reflection on the implications of this technology for educators and to increase GenAI literacy. By combining human and AI-generated text in iterative loops, we created a text and a creative process to collectively investigate the use of GenAI in education. Collaborative poetic inquiry, an arts-based research method, was used in tandem with generative experiments using AI tools, culminating in an ode to collaborative sensemaking. Drawing on the authors’ collective experience as a group of educational professionals and academics, we then critically analysed how GenAI may impact educators and augment creative practices to generate new insights. Further implications for practice from this sensemaking with GenAI in education are discussed.