Gender and the Algorithmic Future: Post-Conventional Perspectives on Generative AI in Higher Education
ABSTRACT This article contributes to the Public and Post-Conventional Anthropologies special issue by examining how generative artificial intelligence (GenAI) technologies are being integrated into higher education, using the California State University (CSU) system – and particularly San Diego State University (SDSU) – as a case study. Drawing on our 2024 student survey (n = 10,162), 48 brief interviews, and the authors’ combined 34 years of institutional experience at SDSU, this paper explores how gender may influence attitudes toward GenAI, including perceptions of bias, trust, and educational impact. The analysis highlights differences in how students interpret GenAI’s risks and benefits, and how institutional tools such as surveys can shape what perspectives are made visible or overlooked. Notably, responses from nonbinary participants suggest forms of epistemic vigilance shaped by lived marginalization, offering a critical lens through which to understand how institutional technologies risk reinforcing existing inequities. Framed within the context of public and post-conventional anthropology, this article approaches GenAI implementation not just as a technical or administrative shift, but as an ethical and political project. Through interdisciplinary collaboration, it offers concrete recommendations for improving GenAI-related policy, training and survey design to support more effective and inclusive implementation across university settings.
- Research Article
- 10.1353/pcg.2022.0000
- Jan 1, 2022
- Yearbook of the Association of Pacific Coast Geographers
Report of the Eighty-Third Annual Meeting, San Diego, California, October 14–16, 2021. Liz Ridder, Atsushi Nara, and Yolonda Youngs. After a one-year meeting hiatus due to Covid-19, APCG members gleefully gathered, in person and online, for the Eighty-Third Annual Meeting at San Diego State University (SDSU). Much to the organizers' relief, 149 people pre-registered for the meeting (66 online and 83 in person), plus several folks who registered on site. This year's meeting was unique for many reasons and was aptly themed "Geographies of Transition." Organizers Atsushi Nara, Liz Ridder, and Yolonda Youngs, each from a different California State University (CSU) campus (SDSU, CSU San Marcos, and CSU San Bernardino), banded together to bring a hybrid meeting format to APCG without losing long-standing APCG meeting traditions. This "experimental" year was part of a larger initiative organized by the AAG and its Climate Action Task Force, Regions Connect: A Joint Climate Forward Initiative, which aims to reduce the carbon emissions of AAG events and may lead to larger regional meetings in the future. As part of meeting registration, attendees could virtually attend the streaming and recorded sessions of the Applied Geography Conference and the AAG Regional Division meetings of the Southwest (SWAAG), East Lakes, West Lakes, New England-St. Lawrence Valley (NESTVAL), and Great Plains/Rocky Mountains. Concurrent paper and poster sessions took place Thursday and Friday in the Conrad Prebys Aztec Student Union. Also known as the Union, the building opened in 2013, replacing the first CSU student union, the Aztec Center. The construction of the Union reused and recycled approximately eighty percent of the Aztec Center's materials, and the building is LEED Double Platinum Certified.
The recent renaming of one of the meeting rooms created momentary confusion until a clever geographer updated the door sign to match the new name printed in the meeting program and campus maps. Once meeting room locations were sorted, thirty-one in-person and seventeen virtual papers and eight posters were presented by authors from Arizona, California, Michigan, Washington, Oregon, Nevada, Florida, Alabama, and Germany. On Thursday afternoon, Keynote Speaker Dr. Marilyn Raphael, Professor at UCLA, Director of the UCLA Institute of the Environment and Sustainability, and Vice President of the AAG, presented "Antarctic Sea Ice—How Important Is It?", highlighting regional patterns of spatial, spectral, and temporal variability, including the timing of advance and retreat, and positive and negative growth trends of sea ice throughout the Antarctic. These patterns are likely related to Antarctica's geography and the influence of the ocean and atmosphere, and they are expected to shift as the atmosphere continues to change. Friday's Keynote Lecture by Dr. Park Williams continued discussions on "Geographies of Transition" and the impacts of climate change in his talk, "The effect of climate change on water, wildfire, and life across North America." This year's conference also featured two special sessions on Friday afternoon. Dr. Atsushi Nara organized and moderated a hybrid-mode interactive workshop to identify perceptions of the skills and knowledge needed to succeed in geocomputation-related careers. The project is an NSF-funded collaboration through an Encoding Geography Researcher-Practitioner Partnership (RPP) led by the AAG, SDSU, San Diego Mesa Community College, the Sweetwater Union High School District, Texas State University, and UC Riverside. Dr. Dan Arreola organized and moderated a discussion panel titled "Donald W. Meinig's Southwest at Half-Century, A Reflection and Appreciation."
Former Meinig students and scholars of his work, Bill Wyckoff, Craig Colten, Paul Starrs, and Richard Nostrand, shared their perspectives on Meinig's influence on their interpretations of the Southwest and the formation of their geographical perspectives. Social events such as the Women's Network and Graduate Student lunches provided opportunities to celebrate students and connect with new colleagues and old friends. The Thursday-night reception on the Union's 3rd Floor Terrace included wonderful food and drink, supplemented by a spectacular view and live music from the courtyard below. The highlights of Friday night's awards banquet were the numerous student awards for outstanding papers and posters, the student travel scholarships, and the presentation to Chris Lukinbeal of the Distinguished Service...
- Dataset
- 10.1377/forefront.20141020.042043
- Oct 20, 2014
Editor's note: As we approach the beginning of the second open enrollment period under the Affordable Care Act, Walter Zelman describes an effort he led during last year's initial open enrollment period to enroll students in the California State University system in coverage. Part 1 below provides background on the CSU system and the enrollment effort, the CSU Health Insurance Education Project, as well as a discussion of what went well. Part 2, which will appear tomorrow, addresses what did not go so well, as well as project results, lessons and policy implications, and next steps. In addition to Zelman, authors of this post include Wendy Lee, now in a Masters of Public Health program at Johns Hopkins; Natasha Buransombati, now in a graduate program in Nursing and Public Health at the University of Seattle in Washington; and Carla Bracamonte, now in an MPH program at California State University, Fullerton. As CSU students, Lee and Buransombati served as regional coordinators for HIEP, and Bracamonte served as a coordinator at CSU Los Angeles. The California State University (CSU) system is the largest public university system in the nation, as well as one of the most diverse. The CSU Health Insurance Education Project (HIEP) received a $1.25 million grant to educate students in the CSU system about the Affordable Care Act and health coverage options through California's new marketplace, Covered California. A pre-open enrollment, multi-campus poll found that approximately 25-30 percent of CSU students were uninsured, primarily because they could not afford insurance. The project placed student educators on the CSU's 15 largest campuses. Over a seven-month period they gave approximately 1,500 classroom presentations and conducted 70 forums and 300 enrollment events. University administrators sent out over 1 million emails to CSU students.
Project strategy emphasized a focus on affordability, the need for insurance (accidents happen), and the simplicity of the enrollment process.
- Research Article
- 10.1108/03074800910941347
- Jan 1, 2009
- New Library World
Purpose: The purpose of this paper is to present a virtual library plan created by library directors of the 23 California State University (CSU) system campuses. The information literacy portion of the project offers a repository of high-quality interactive digital learning objects (DLOs) in the MERLOT repository. Therefore, DLOs created locally at the Dr Martin Luther King, Jr Library at San José State University (SJSU) focus on topics that supplement the "core" DLO collection.
Design/methodology/approach: This case study presents planning assumptions for developing local content that complements a CSU system collection of high-quality interactive information literacy DLOs. The authors also offer suggestions from the professional literature that guide their application of such Web 2.0 tools as wikis, podcasts, and tagging to create supplemental learning modules for their local information literacy instruction program.
Findings: Web 2.0 digital learning objects are essential components of an efficient academic information literacy program comprising face-to-face and "on demand" virtual approaches. The CSU system has identified a core set of DLOs, which are easily available in the MERLOT open access repository. Local development efforts, then, focus on the design and creation of DLOs of local significance.
Practical implications: Librarians at the Dr Martin Luther King, Jr Library in San José, California, USA, are developing local content for Web 2.0-enabled information literacy instruction. These developments occur within the context of a 23-campus initiative, originating at the Chancellor's Office, which has identified high-quality information literacy DLOs. This core open access collection is intended to fulfill academic libraries' core instructional needs and is freely available to any library through the open access MERLOT repository.
Originality/value: This paper recommends an approach for local production of virtual information literacy content that benefits from harvesting the "best of the best" currently available on the internet.
- Research Article
- 10.47760/cognizance.2026.v06i01.001
- Jan 30, 2026
- Cognizance Journal of Multidisciplinary Studies
The rapid advancement of generative artificial intelligence (GenAI) has reshaped the landscape of research supervision in higher education, raising critical ethical, pedagogical, and methodological questions. This mixed-methods study, titled Generative AI as the “Third Eye” of Academic Advising: Ethical, Pedagogical, and Methodological Perspectives in Research Supervision, investigated how higher education research advisers integrate GenAI tools into the supervision of student research. Employing a convergent parallel mixed-methods design, the study gathered data from 30 research advisers across various academic disciplines in a private higher education institution in Laguna, Philippines. The quantitative component utilized a validated 41-item, four-point Likert scale questionnaire designed to measure the extent to which GenAI influences ethical standards, pedagogical practices, methodological decisions, and perceptions of authenticity and originality. The qualitative phase, on the other hand, involved semi-structured interviews with purposively selected participants to explore their lived experiences, ethical reflections, and policy perspectives regarding GenAI use. Quantitative results revealed that advisers uphold ethical principles, adopt transformative pedagogical practices, and integrate GenAI tools into methodological decision-making to a great extent (M = 3.54). Qualitative findings supported these results, generating themes such as enhanced efficiency and precision, heightened ethical vigilance, challenges in verifying originality and authorship, and the continued importance of human judgment in AI-assisted supervision. Despite these advancements, issues related to student overreliance, citation transparency, and unverifiable references were identified as persistent challenges. 
The study culminates in the development of a Proposed Institutional Policy Framework on the Ethical Use of Generative AI in Research Supervision, advocating disclosure protocols, AI literacy training, and verification mechanisms. This framework positions GenAI as a “third eye” that enhances rather than replaces human discernment. Overall, the study contributes actionable insights for policymakers, educators, and institutions, ensuring that AI-driven innovation aligns with ethical integrity, academic rigor, and responsible research mentorship.
- Dataset
- 10.1377/forefront.20141021.042087
- Oct 21, 2014
Editor's note: As we approach the beginning of the second open enrollment period under the Affordable Care Act, Walter Zelman describes an effort he led during last year's initial open enrollment period to enroll students in the California State University (CSU) system in coverage. Part 1 of this post provided background on the CSU system and the enrollment effort, the CSU Health Insurance Education Project, as well as a discussion of what worked well. Part 2, below, addresses what worked less well, as well as project results, lessons and policy implications, and next steps. In addition to Zelman, authors of this post include Wendy Lee, now in a Masters of Public Health program at Johns Hopkins; Natasha Buransombati, now in a graduate program in Nursing and Public Health at the University of Seattle in Washington; and Carla Bracamonte, now in an MPH program at California State University, Fullerton. As CSU students, Lee and Buransombati served as regional coordinators for HIEP, and Bracamonte served as a coordinator at CSU Los Angeles. IV. What Worked Less Well. Assessments as to what did not work must be rendered with caution. In most cases, lack of success may have been due to lack of emphasis or time, to the relative inexperience of student educators, or to the failure of project leaders to follow up aggressively with CSU or administrative personnel. Campus groups, social media, and web pages. Most striking and disappointing was the difficulty in engaging campus groups. Many seemed supportive of the mission. But, in the end, most were unable to commit time and resources to the project, even after repeated engagement by project representatives. Most campus groups had specific goals and agendas, and promoting insurance coverage to students was not one of them. More time or resources might have produced more campus organization support, but these were not available.
- Research Article
- 10.1080/03075079.2024.2327003
- Mar 9, 2024
- Studies in Higher Education
Recent emergence of generative artificial intelligence (GenAI) technology has stimulated interests as well as concerns in their potential in teaching and learning. Situated in the new and transforming context, this study provides an avenue for students to introspectively explore their use of GenAI in a postgraduate course. Seventy-four students from three Chinese universities participated in this study. By analyzing student interviews conducted pre- and post-course, alongside their chat logs with GenAI and reflective journal entries detailing their learning approaches, the research uncovers a spectrum of student perspectives on GenAI’s impact, ranging from beneficial optimism, to cautious skepticism and adaptable pragmatism. Notably, student agency is identified as a crucial element in relation to these themes. This was articulated in four types of learning activities: receptive, resistive, resourceful, and reflective. The research underscores the importance of supporting and empowering student agency in the learning approaches aided by GenAI in education, highlighting its role in optimizing its use and enhancing autonomous, lifelong learning skills amidst the evolving technologically advanced learning landscape.
- Research Article
- 10.3389/bjbs.2024.14048
- Jan 9, 2025
- British journal of biomedical science
Generative Artificial Intelligence (GenAI) is rapidly transforming the landscape of higher education, offering novel opportunities for personalised learning and innovative assessment methods. This paper explores the dual-edged nature of GenAI's integration into educational practices, focusing on both its potential to enhance student engagement and learning outcomes and the significant challenges it poses to academic integrity and equity. Through a comprehensive review of current literature, we examine the implications of GenAI on assessment practices, highlighting the need for robust ethical frameworks to guide its use. Our analysis is framed within pedagogical theories, including social constructivism and competency-based learning, highlighting the importance of balancing human expertise and AI capabilities. We also address broader ethical concerns associated with GenAI, such as the risks of bias, the digital divide, and the environmental impact of AI technologies. This paper argues that while GenAI can provide substantial benefits in terms of automation and efficiency, its integration must be managed with care to avoid undermining the authenticity of student work and exacerbating existing inequalities. Finally, we propose a set of recommendations for educational institutions, including developing GenAI literacy programmes, revising assessment designs to incorporate critical thinking and creativity, and establishing transparent policies that ensure fairness and accountability in GenAI use. By fostering a responsible approach to GenAI, higher education can harness its potential while safeguarding the core values of academic integrity and inclusive education.
- Research Article
- 10.64938/bijri.v9n4.25.jl040
- Jul 28, 2025
- BODHI International Journal of Research in Humanities, Arts and Science
The study aimed to assess the perceptions and knowledge of university students in India regarding the application of Generative Artificial Intelligence (GenAI) in higher education. The research involved 200 students from two educational institutes in the Northeast and Western regions, selected using stratified random sampling techniques. The data was collected in person and analyzed using SPSS software. The majority of respondents were middle-aged (55.5%); most were female (73.6%), with males comprising 24.4%. Most were in undergraduate programs (66.2%), with a high percentage (85.5%) from FFCSc, MSU, Gujarat, followed by CCSc, AAU, Assam (61.8%). More than half of the respondents fell into the category of average achievers (58.2%), and 59.1% used GenAI for academic purposes. The majority (64.1%) were knowledgeable about GenAI: 68% from CCSc, AAU, Assam and 60% from FFCSc, MSU, Gujarat. Overall, over half of the respondents (54.5%) had favorable perceptions of GenAI applications in higher education, with slightly more respondents from CCSc, AAU, Assam than from FFCSc, MSU, Gujarat (53.6%). Almost half (49%) perceived GenAI applications as offering more benefits in higher education. Institute-wise, more than half (51.8%) from CCSc, AAU, Assam, and 46.4% from FFCSc, MSU, Gujarat, perceived more benefits from GenAI applications in education. However, over half (56.4%) expressed greater concern about GenAI applications in higher education. The results underscore the need for policymakers to develop ethical guidelines and customized interventions on the responsible use of GenAI tools at different educational levels to prepare future human resources in line with market demands. Cooperation between educators, administrators, and policymakers is crucial to responsibly harness GenAI and address its concerns.
- Research Article
- 10.34190/icair.4.1.3026
- Dec 4, 2024
- International Conference on AI Research
In the current spring of Artificial Intelligence, the rapid development of Generative AI (GenAI) has initiated lively discussions in higher education. Opportunities as well as challenges have been identified, and to cope with this new situation there is a need for large-scale teacher professional development. With basic skills in GenAI, teachers could use the new technology as an extension of existing technology-enhanced teaching and learning. The aim of this paper is to present and discuss the project FAITH (Frontline Application of AI and Technology-enhanced Learning for Transforming Higher Education). FAITH is a higher education pedagogical development initiative for institutional development, aimed at teachers with good fundamental skills in traditional pedagogy, with the overall objective of increasing staff understanding of AI and developing new competencies in the field of GenAI and technology-enhanced learning. The research question that guided this study was: "What are the perceived opportunities, challenges and expectations of involving GenAI in higher education?" The overall research strategy for the FAITH project is design-based research, which involves iterative and cumulative development processes. The early iteration that this study was part of was carried out in a manner inspired by Collective Autoethnography, with members of the steering group behind the FAITH project and members of the project team constituting the main focus group. Data were collected through structured interviews, in which two GenAI tools were also interviewed. Findings show that expectations are high, but that the FAITH ambition of institutional development depends on teachers' motivation to take an active part in the project. Another challenge could be that many teachers see GenAI as something that threatens current course design, and regard a general ban on GenAI as the appropriate solution.
One of several identified opportunities is that a general revision of syllabi and assessment, adapted for GenAI-enhanced learning, would improve current course design.
- Research Article
- 10.5204/mcj.3004
- Oct 2, 2023
- M/C Journal
Introduction Author Arthur C. Clarke famously argued that in science fiction literature “any sufficiently advanced technology is indistinguishable from magic” (Clarke). On 30 November 2022, technology company OpenAI publicly released their Large Language Model (LLM)-based chatbot ChatGPT (Chat Generative Pre-Trained Transformer), and instantly it was hailed as world-changing. Initial media stories about ChatGPT highlighted the speed with which it generated new material as evidence that this tool might be both genuinely creative and actually intelligent, in both exciting and disturbing ways. Indeed, ChatGPT is part of a larger pool of Generative Artificial Intelligence (AI) tools that can very quickly generate seemingly novel outputs in a variety of media formats based on text prompts written by users. Yet, claims that AI has become sentient, or has even reached a recognisable level of general intelligence, remain in the realm of science fiction, for now at least (Leaver). That has not stopped technology companies, scientists, and others from suggesting that super-smart AI is just around the corner. Exemplifying this, the same people creating generative AI are also vocal signatories of public letters that ostensibly call for a temporary halt in AI development, but these letters are simultaneously feeding the myth that these tools are so powerful that they are the early form of imminent super-intelligent machines. For many people, the combination of AI technologies and media hype means generative AIs are basically magical insomuch as their workings seem impenetrable, and their existence could ostensibly change the world. This article explores how the hype around ChatGPT and generative AI was deployed across the first six months of 2023, and how these technologies were positioned as either utopian or dystopian, always seemingly magical, but never banal. We look at some initial responses to generative AI, ranging from schools in Australia to picket lines in Hollywood. 
We offer a critique of the utopian/dystopian binary positioning of generative AI, aligning with critics who rightly argue that focussing on these extremes displaces the more grounded and immediate challenges generative AI brings that need urgent answers. Finally, we loop back to the role of schools and educators in repositioning generative AI as something to be tested, examined, scrutinised, and played with, both to ground understandings of generative AI and to prepare today's students for a future where these tools will be part of their work and cultural landscapes. Hype, Schools, and Hollywood In December 2022, one month after OpenAI launched ChatGPT, Elon Musk tweeted: "ChatGPT is scary good. We are not far from dangerously strong AI". Musk's post was retweeted 9400 times, liked 73 thousand times, and presumably seen by most of his 150 million Twitter followers. This type of engagement typified the early hype and language that surrounded the launch of ChatGPT, with reports that "crypto" had been replaced by generative AI as the "hot tech topic" and hopes that it would be "'transformative' for business" (Browne). By March 2023, global economic analysts at Goldman Sachs had released a report on the potentially transformative effects of generative AI, saying that it marked the "brink of a rapid acceleration in task automation that will drive labor cost savings and raise productivity" (Hatzius et al.). Further, they concluded that "its ability to generate content that is indistinguishable from human-created output and to break down communication barriers between humans and machines reflects a major advancement with potentially large macroeconomic effects" (Hatzius et al.).
Speculation about the potentially transformative power and reach of generative AI technology was reinforced by warnings that it could also lead to "significant disruption" of the labour market, and the potential automation of up to 300 million jobs, with associated job losses for humans (Hatzius et al.). In addition, there was widespread buzz that ChatGPT's "rationalization process may evidence human-like cognition" (Browne), claims that were supported by the emergent language of ChatGPT. The technology was explained as being "trained" on a "corpus" of datasets, using a "neural network" capable of producing "natural language" (Dsouza), positioning the technology as human-like, and more than 'artificial' intelligence. Incorrect responses or errors produced by the tech were termed "hallucinations", akin to magical thinking, which OpenAI founder Sam Altman insisted wasn't a word that he associated with sentience (Intelligencer staff). Indeed, Altman asserts that he rejects moves to "anthropomorphize" (Intelligencer staff) the technology; however, arguably the language, hype, and Altman's well-publicised misgivings about ChatGPT have had the combined effect of shaping our understanding of this generative AI as alive, vast, fast-moving, and potentially lethal to humanity. Unsurprisingly, the hype around the transformative effects of ChatGPT and its ability to generate 'human-like' answers and sophisticated essay-style responses was matched by a concomitant panic throughout educational institutions. The beginning of the 2023 Australian school year was marked by schools and state education ministers meeting to discuss the emerging problem of ChatGPT in the education system (Hiatt). Every state in Australia, bar South Australia, banned the use of the technology in public schools, with a "national expert task force" formed to "guide" schools on how to navigate ChatGPT in the classroom (Hiatt).
Globally, schools banned the technology amid fears that students could use it to generate convincing essay responses whose plagiarism would be undetectable with current software (Clarence-Smith). Some schools banned the technology citing concerns that it would have a "negative impact on student learning", while others cited its "lack of reliable safeguards preventing these tools exposing students to potentially explicit and harmful content" (Cassidy). ChatGPT investor Musk famously tweeted, "It's a new world. Goodbye homework!", further fuelling the growing alarm about the freely available technology that could "churn out convincing essays which can't be detected by their existing anti-plagiarism software" (Clarence-Smith). Universities were reported to be moving towards more "in-person supervision and increased paper assessments" (SBS), rather than essay-style assessments, in a bid to out-manoeuvre ChatGPT's plagiarism potential. Seven months on, concerns about the technology seem to have been dialled back, with educators more curious about the ways the technology can be integrated into the classroom to good effect (Liu et al.); however, the full implications and impacts of generative AI are still emerging. In May 2023, the Writers Guild of America (WGA), the union representing screenwriters across the US creative industries, went on strike, and one of their core issues was "regulations on the use of artificial intelligence in writing" (Porter). Early in the negotiations, Chris Keyser, co-chair of the WGA's negotiating committee, lamented that "no one knows exactly what AI's going to be, but the fact that the companies won't talk about it is the best indication we've had that we have a reason to fear it" (Grobar).
At the same time, the Screen Actors’ Guild (SAG) warned that members were being asked to agree to contracts that stipulated that an actor’s voice could be re-used in future scenarios without that actor’s additional consent, potentially reducing actors to a dataset to be animated by generative AI technologies (Scheiber and Koblin). In a statement issued by SAG, they made their position clear that the creation or (re)animation of any digital likeness of any part of an actor must be recognised as labour and properly paid, also warning that any attempt to legislate around these rights should be strongly resisted (Screen Actors Guild). Unlike the more sensationalised hype, the WGA and SAG responses to generative AI are grounded in labour relations. These unions quite rightly fear the immediate future where human labour could be augmented, reclassified, and exploited by, and in the name of, algorithmic systems. Screenwriters, for example, might be hired at much lower pay rates to edit scripts first generated by ChatGPT, even if those editors would really be doing most of the creative work to turn something clichéd and predictable into something more appealing. Rather than a dystopian world where machines do all the work, the WGA and SAG protests railed against a world where workers would be paid less because executives could pretend generative AI was doing most of the work (Bender). The Open Letter and Promotion of AI Panic In an open letter that received enormous press and media uptake, many of the leading figures in AI called for a pause in AI development since “advanced AI could represent a profound change in the history of life on Earth”; they warned early 2023 had already seen “an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control” (Future of Life Institute). 
Further, the open letter signatories called on “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4”, arguing that “labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts” (Future of Life Institute). Notably, many of the signatories work for the very companies involved in the “out-of-control race”. Indeed, while this letter could be read as a moment of ethical clarity for the AI industry, a more cynical reading might just be that in warning that their AIs could effectively destroy the w
- Research Article
- 10.1016/j.caeai.2024.100326
- Oct 30, 2024
- Computers and Education: Artificial Intelligence
The advancements in Generative Artificial Intelligence (GenAI) can provide opportunities for enriching educational experiences, but at the same time raise concerns regarding academic integrity. Many educators have expressed anxiety and hesitation when it comes to integrating GenAI in their teaching practices. Thus, recommendations and guidance from institutions are needed to support instructors in this new and emerging GenAI era. In response to this need, this study explores different U.S. universities' academic policies and guidelines regarding the use of GenAI tools (e.g., ChatGPT) for teaching and learning, and, from there, gains an understanding of how these universities respond and adapt to the development of GenAI in their academic contexts. Data sources include academic policies, statements, guidelines, and relevant resources provided by the top 100 universities in the U.S. Results show that the majority of these universities adopt an open but cautious approach towards GenAI. Primary concerns lie in ethical usage, accuracy, and data privacy. Most universities actively respond and provide diverse types of resources, such as syllabus templates, workshops, shared articles, and one-on-one consultations, focusing on a range of topics, namely general technical introduction, ethical concerns, pedagogical applications, preventive strategies, data privacy, limitations, and detection tools. The findings provide four practical pedagogical implications for educators when considering GenAI in teaching practices: 1) accepting GenAI presence, 2) aligning GenAI use with learning objectives, 3) evolving curriculum to prevent misuse of GenAI, and 4) adopting multifaceted evaluation strategies. For recommendations toward policy making, the article suggests two possible directions for the use of GenAI tools: 1) establishing discipline-specific policies and guidelines, and 2) managing students' sensitive information in a transparent and careful manner.
- Research Article
- 10.1177/003172170208300518
- Jan 1, 2002
- Phi Delta Kappan
Smaller class sizes set off a scramble to hire new teachers in California, putting pressure on teacher training institutions to take steps to meet the demand. Cal State Long Beach teacher educators took a deep breath and said, "Count us in," but they refused to give up their focus on quality. The authors tell the tale of what happened next. IN JULY of 1996, the California State University (CSU) system appointed a task force consisting of eight CSU presidents to develop a plan to revamp teacher education throughout the system. This Presidents Commission on Teacher Preparation and K-18 Education was charged with making recommendations to improve the quality of teacher education on the 23 CSU campuses. The newly formed commission was clearly facing a formidable task when all of a sudden a new dimension was added. Then-Gov. Pete Wilson made a dramatic and unexpected announcement regarding K-12 education in the state. In a bold and widely popular move, the governor proposed an immediate reduction in the size of classes in California public schools to a 20-to-1 student/teacher ratio for grades K-3, with the promise that more grades would be included in the reduction plan later. The class-size-reduction initiative was a direct response to declining test scores in the state.1 A growing population of school-age children and the mandated smaller classes fueled a scramble to hire more teachers in California. It was estimated that 30,000 new teachers would be needed in the first year and some 300,000 over the next 10 years. To help meet the demand, desperate school districts granted an unprecedented number of emergency permits to individuals who did not hold standard teaching credentials. Emergency permits allow college graduates in any discipline to teach for up to five years. This approach is designed to allow career-changing college graduates to begin teaching while they complete a teacher preparation program.
Los Angeles County alone has had more than 7,000 teachers on emergency permits. Some $1.7 billion was spent to hire additional teachers during the first year of the class-size-reduction program, but there were some unexpected and undesirable consequences. To fill their classrooms, suburban and small-town districts were recruiting teachers from inner-city schools. In addition, many school districts had contractual seniority programs that allowed teachers in upper elementary grades and middle schools to transfer to the more attractive smaller classes in the primary grades - grades for which some were unprepared or underprepared. Colleges of education in the CSU system were under the gun, and expectations were running high for them to increase dramatically the number of qualified teachers in the state. A first charge to the education schools was to help those holding emergency permits to become fully certified, which was no small task given their different levels of preparation. Compounding the problem of the rapidly growing number of teachers on emergency permits was their very high attrition rate. Estimates are that half of the noncertified teachers quit after a single year, and, in some urban areas, the number has been as high as 70%. Nonetheless, the immediate problem for the CSU education programs was to see that all these emergency teachers were properly certified as quickly as possible. The major charge to education programs, however, was to produce more college graduates who were fully certified and knew how to teach. Colleges of education found themselves in an almost impossible position. In California, the Ryan Act of 1970 does not allow colleges of education to offer undergraduate degrees in education. Prospective teachers must earn their degrees in other academic areas and then enroll in a teacher education program for a fifth year of study. While this may not be the best way to go about educating teachers, it is the law in California.
This is the situation in which colleges of education found themselves. …
- Research Article
- 10.1186/s41239-024-00453-6
- Mar 25, 2024
- International Journal of Educational Technology in Higher Education
In recent years, higher education (HE) globally has witnessed extensive adoption of technology, particularly in teaching and research. The emergence of generative Artificial Intelligence (GenAI) further accelerates this trend. However, the increasing sophistication of GenAI tools has raised concerns about their potential to automate teaching and research processes. Despite widespread research on GenAI in various fields, there is a lack of multicultural perspectives on its impact and concerns in HE. This study addresses this gap by examining the usage, benefits, and concerns of GenAI in higher education from a multicultural standpoint. We employed an online survey that collected responses from 1217 participants across 76 countries, encompassing a broad range of gender categories, academic disciplines, geographical locations, and cultural orientations. Our findings revealed a high level of awareness and familiarity with GenAI tools among respondents. A significant portion had prior experience and expressed the intention to continue using these tools, primarily for information retrieval and text paraphrasing. The study emphasizes the importance of GenAI integration in higher education, highlighting both its potential benefits and concerns. Notably, there is a strong correlation between cultural dimensions and respondents’ views on the benefits and concerns related to GenAI, including its potential use for academic dishonesty and the need for ethical guidelines. We, therefore, argued that responsible use of GenAI tools can enhance learning processes, but addressing concerns may require robust policies that are responsive to cultural expectations. We discussed the findings and offered recommendations for researchers, educators, and policymakers, aiming to promote the ethical and effective integration of GenAI tools in higher education.
- Discussion
- 10.1016/j.ebiom.2023.104672
- Jul 1, 2023
- eBioMedicine
Response to M. Trengove & coll regarding "Attention is not all you need: the complicated case of ethically using large language models in healthcare and medicine".
- Research Article
- 10.14742/apubs.2024.1225
- Nov 11, 2024
- ASCILITE Publications
While industry practices evolve rapidly, marketing education in Australia and New Zealand faces challenges in keeping pace, particularly regarding the adoption of current marketing technologies (Harrigan et al., 2022). Generative AI, exemplified by systems like ChatGPT and DALL·E, has demonstrated benefits for learning (Baidoo-Anu & Ansah, 2023). However, despite its potential, there remains a dearth of practical guidance on effectively incorporating these technologies into marketing courses. This gap persists even as general frameworks for responsible and ethical AI use, such as the Australian Framework for Generative AI in Schools (2023), emerge. As the demand for graduates with generative AI skills grows in the job market, educators must explore innovative pedagogical approaches to bridge this gap. This academic poster presents an innovative application of generative artificial intelligence (GenAI) in the context of teaching digital marketing at the postgraduate level. Its purpose is to bridge the gap between academic theory and industry practice by encouraging educators to integrate AI tools into their curriculum through experiential learning pedagogy (Kolb, 2014), characterized by a learning process whereby knowledge is created through hands-on experiences. The poster exemplifies how various types of GenAI technologies — specifically text-based, image-based, and video-based — can enhance teaching content, tutorial exercises, and assessments within the digital marketing course. The poster showcases examples of how these GenAI tools are integrated in the course content, to guide students in generating innovative ideas for using AI in marketing to gain a competitive edge:

Text-based GenAI: Tools like ChatGPT and Gemini can automatically generate search keywords for search engine marketing. By integrating text-based GenAI tools with established marketing technology (MarTech) tools such as Google Ads and Google Ads Keyword Planner, students engage in practical exercises that combine AI-generated initial ideas (e.g., search keywords) with further analysis (e.g., search volume, click-through rates, and bidding costs) using established MarTech tools. This hands-on approach enhances their learning experience and prepares them for real-world applications.

Image-based GenAI: Platforms such as DALL·E, Midjourney, and Stable Diffusion enable the creation of custom images for display advertising, enhancing visual communication in marketing materials. Through experiential learning activities, students can explore ideas, seek unusual combinations, and inspire creativity faster with image-based GenAI tools, resulting in a greater variety of display ad materials.

Video-based GenAI: Applications like Sora and Synthesia facilitate the production of short video clips suitable for social media marketing (e.g., YouTube Shorts, TikTok). By engaging in dynamic content creation exercises, students learn to streamline content creation, reduce manual work, and save both time and budget, thereby gaining practical skills in social media marketing.

By incorporating these GenAI technologies through experiential learning pedagogy, educators can enrich the learning experience, foster critical thinking, and prepare students for the evolving landscape of digital marketing. Future research can study the use of GenAI in marketing education using theoretical frameworks such as the Unified Theory of Acceptance and Use of Technology (Venkatesh et al., 2016).