Dialogic interactions between mathematics teachers and GenAI: multi-environment task design and its contribution to TPACK

Abstract

This study observed how mathematics teachers interacted with Generative Artificial Intelligence (GenAI) while planning primary school geometry lessons centred on inquiry-based learning tasks that simultaneously utilise two dynamic technological environments: Origametria (a computerised environment for learning geometry through paper folding) and GeoGebra (dynamic mathematics software). The study's mixed-methods approach drew on pre- and post-course TPACK questionnaires, analysis of Curiosity Driven Discourse transcripts, GenAI dialogues recorded in the participants' collaborative research journal, and examination of the inquiry tasks developed as part of the lesson plans. The two research questions were: (1) What are the characteristics of the teacher-GenAI dialogue when preparing inquiry-based geometry learning tasks that simultaneously utilise the two dynamic geometry environments? (2) How does this teacher-GenAI dialogue contribute to the teachers' TPACK? The findings revealed two primary characteristics of the dialogue and showed that the dialogue contributed to teachers' TPACK development across all components. Nevertheless, although GenAI can serve as a valuable tool for enhancing teachers' professional development, its effectiveness depends on teachers' ability to critically evaluate and adapt its suggestions to specific educational contexts. The study therefore highlights the importance of maintaining a balance between AI capabilities and human expertise.

Similar Papers
  • Research Article
  • Citations: 14
  • 10.3389/bjbs.2024.14048
Generative AI in Higher Education: Balancing Innovation and Integrity.
  • Jan 9, 2025
  • British journal of biomedical science
  • Nigel J Francis + 2 more

Generative Artificial Intelligence (GenAI) is rapidly transforming the landscape of higher education, offering novel opportunities for personalised learning and innovative assessment methods. This paper explores the dual-edged nature of GenAI's integration into educational practices, focusing on both its potential to enhance student engagement and learning outcomes and the significant challenges it poses to academic integrity and equity. Through a comprehensive review of current literature, we examine the implications of GenAI on assessment practices, highlighting the need for robust ethical frameworks to guide its use. Our analysis is framed within pedagogical theories, including social constructivism and competency-based learning, highlighting the importance of balancing human expertise and AI capabilities. We also address broader ethical concerns associated with GenAI, such as the risks of bias, the digital divide, and the environmental impact of AI technologies. This paper argues that while GenAI can provide substantial benefits in terms of automation and efficiency, its integration must be managed with care to avoid undermining the authenticity of student work and exacerbating existing inequalities. Finally, we propose a set of recommendations for educational institutions, including developing GenAI literacy programmes, revising assessment designs to incorporate critical thinking and creativity, and establishing transparent policies that ensure fairness and accountability in GenAI use. By fostering a responsible approach to GenAI, higher education can harness its potential while safeguarding the core values of academic integrity and inclusive education.

  • Research Article
  • Citations: 8
  • 10.1287/ijds.2023.0007
How Can IJDS Authors, Reviewers, and Editors Use (and Misuse) Generative AI?
  • Apr 1, 2023
  • INFORMS Journal on Data Science
  • Galit Shmueli + 7 more

  • Research Article
  • 10.70777/si.v1i1.11101
Highlights of the Issue
  • Oct 15, 2024
  • SuperIntelligence - Robotics - Safety & Alignment
  • Kristen Carlson

To emphasize the journal's concern with Artificial General Intelligence (AGI) safety, we inaugurate the journal by focusing the first issue on Risks, Governance, and Safety & Alignment Methods.

Risks

The AI Risk Repository: A Comprehensive Meta-Review, Database, and Taxonomy of Risks from Artificial Intelligence
The most comprehensive AI risk taxonomy to date (777 specific risks classified into 43 categories) has been created by researchers collaborating across a half-dozen institutions. We excerpt 11 key pages from the original 79-page report. Their 'living' Repository is online and free to download and share. The authors' intention is to provide a common frame of reference for AI risks. Slattery et al.'s set of ~100 references is excellent and thorough; poring over this study for your own specific interest is thus an efficient way to get on top of the entire current AI risk literature. The highest of their three taxonomy levels, the Causal Taxonomy, classifies each risk along three axes: the cause of the risk (Human or AI), the intention (Intentional or Unintentional), and the timing (Pre-deployment or Post-deployment of the AI system); a minimal sketch of this structure appears below. The Causal Taxonomy can be used "for understanding how, when, or why risks from AI may emerge." They also call readers' attention to the AI Incident Database,[1] which publishes a monthly roundup.
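
As promised above, a minimal Python sketch of the Causal Taxonomy's three-axis structure, for readers who find a concrete rendering helpful. All class and field names are our own hypothetical choices for illustration, not the Repository's actual schema:

```python
from enum import Enum
from dataclasses import dataclass

class Cause(Enum):           # which entity gives rise to the risk
    HUMAN = "human"
    AI = "ai"

class Intent(Enum):          # whether the risky action is deliberate
    INTENTIONAL = "intentional"
    UNINTENTIONAL = "unintentional"

class Timing(Enum):          # when the risk arises relative to deployment
    PRE_DEPLOYMENT = "pre-deployment"
    POST_DEPLOYMENT = "post-deployment"

@dataclass(frozen=True)
class RiskEntry:
    """One risk tagged along the Causal Taxonomy's three axes."""
    description: str
    cause: Cause
    intent: Intent
    timing: Timing

# Hypothetical example entry: a deployed model leaking memorised data.
risk = RiskEntry(
    description="Model leaks personal data memorised during training",
    cause=Cause.AI,
    intent=Intent.UNINTENTIONAL,
    timing=Timing.POST_DEPLOYMENT,
)
```
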
AI Risk Categorization Decoded (AIR 2024)
By examining 8 government and 16 corporate AI risk policies, Zeng et al. seek to provide an AI risk taxonomy unified across public- and private-sector methodologies. They present 314 risk categories organized into a 4-level hierarchy; the highest level comprises System & Operational Risks, Content Safety Risks, Societal Risks, and Legal & Rights Risks. Their first takeaway is that more categories are advantageous, allowing finer granularity in identifying risks and unifying risk categories across methodologies; they thus indirectly argue for the Slattery et al. taxonomy, with double the categories. This emphasis on fine granularity parallels a comment made to me by Lance Fortnow, Dean of the Illinois Institute of Technology College of Computing, on the diversity and specificity of human laws: a similar diversity may be necessary to assure AGI safety, and recent governance proposals may be simplistic. Indeed, Zeng et al.'s second takeaway is that government AI regulation may need significant expansion; few regulations address foundation models, for instance. Their third takeaway is that comparing AI risk policies from diverse sources is extremely helpful for developing an overall grasp of the issues (how different organizations conceptualize risk, for instance) and for moving toward international cooperation to manage AI risk.

AIR-Bench 2024: A Safety Benchmark Based on Risk Categories from Regulations and Policies
Applying the work just described, Zeng et al. constructed an AI safety benchmark aligned with their unified view of private- and public-sector AI risk policy, specifically targeting the gap they uncovered in the regulation of foundation models. They develop and test nearly 6000 risky prompts and find inconsistent responses across foundation models, giving examples of foundation model safety failures in response to various prompts. This work seems a significant advance toward an AGI safety certification conducted by an AI industry consortium or an insurance company consortium, along the lines of, e.g., UL Solutions (formerly Underwriters Laboratories).

A Comprehensive Survey of Advanced Persistent Threat Attribution
We wanted to publish this important article but had to pull it due to a license conflict; please see their arXiv preprint. APT (Advanced Persistent Threat) attacks are attack campaigns orchestrated by highly organized and often state-sponsored threat groups that operate covertly and methodically over prolonged periods. APTs set themselves apart from conventional cyber-attacks by their stealthiness, persistence, and precision in targeting. This systematic review by Rani et al. of 137 papers focuses on the increasing development of automated, AI- and ML-based means to detect APTs early and identify the malevolent actors involved. They present the Automated Attribution Framework, which consists of 1) collecting training data on past attacks, 2) preprocessing and enriching the training data, 3) training and pattern recognition on the data, and 4) attribution: applying the trained models to identify the perpetrating actors (a schematic sketch of this pipeline follows). The open research questions summarized by Rani et al. point toward AI taking an increasing role in APT attribution.
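
Since the Automated Attribution Framework is summarised above only as a four-stage pipeline, a schematic Python sketch may help fix the structure. Every name below is a hypothetical placeholder, and a toy lookup table stands in for the ML stage; this is not the authors' implementation:

```python
def collect_past_attacks():
    """Stage 1: gather training data on past APT campaigns."""
    return [{"indicators": ["malware_hash_a"], "actor": "group_x"}]

def preprocess(records):
    """Stage 2: clean and enrich the raw records (e.g. with threat intel)."""
    return [{**r, "enriched": True} for r in records]

def train(records):
    """Stage 3: fit a pattern-recognition model on the enriched data."""
    model = {}
    for r in records:
        for indicator in r["indicators"]:
            model[indicator] = r["actor"]
    return model

def attribute(model, observed_indicators):
    """Stage 4: apply the trained model to name the likely perpetrator."""
    votes = [model[i] for i in observed_indicators if i in model]
    return max(set(votes), key=votes.count) if votes else "unknown"

model = train(preprocess(collect_past_attacks()))
print(attribute(model, ["malware_hash_a"]))  # -> group_x
```
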
Governance

Excerpts from Aschenbrenner, Situational Awareness
I was pointed to Leopold Aschenbrenner's 165-page missive by Scott Aaronson's blog, which said he knew Leopold during his sabbatical at OpenAI and recommended people give it a read and take it seriously. The essence is that if we extrapolate from recent AI progress, we will have AGI by 2030; therefore, for national security, a Manhattan Project-style national AI effort, including nationalizing leading private AGI labs, should be mounted. Here we reprint his Part IV, "The Project," advocating this controversial effort and describing his vision of how it will occur. I recommend anyone concerned about the dangers of AGI, and especially those working toward AGI, read Aschenbrenner's entire book; take a look at the Table of Contents preceding our reprint of "The Project." We also reprint his Ch. V, "Parting Thoughts," in our Commentary section.

Soft Nationalization: How the US Government Will Control AI Labs
Aschenbrenner advocates nationalizing leading AI labs into a high-security, top-secret, US federal government project. OK, how, exactly? A perfect complement to Aschenbrenner's thoughts is given by Deric Cheng and Corin Katzke of Convergence Analysis. They examine how AGI R&D nationalization could happen realistically, effectively, and efficiently. Their report outlines key issues and initial thoughts as a prelude to their own and others' detailed proposals to come; it is a beautiful piece of work, IMHO. It is not impossible for private companies to develop AGI responsibly and securely, but the main goal of this journal is to make AGI safety the central debate in the AGI community, and the nationalized, Manhattan-style project point of view must be presented. Further, I find Aschenbrenner's arguments persuasive and Cheng and Katzke's thoughtful outline of how nationalization could actually occur convincing, e.g. (pg. 8): "The US may be able to achieve its national security goals with substantially less overhead than total nationalization via effective policy levers and regulation… We argue that various combinations of the policy levers listed below will likely be sufficient to meet US national security concerns, while allowing for more minimal governmental intrusion into private frontier AI development."

Acceptable Use Policies for Foundation Models
Acceptable use policies are legally binding policies that prohibit specific uses of foundation models. Klyman surveys acceptable use policies from 30 developers, encompassing 127 specific use restrictions cited in 184 articles. Like Zeng et al. in "AI Risk Categorization Decoded (AIR 2024)," Klyman highlights the inconsistent number and type of restrictions across developers and the lack of transparency behind their motivation and enforcement, indicating the need for developers to create a unified consensus acceptable use policy. The general motivations are to reduce legal and reputational risk. However, standing in the way of developers working to create a unified policy set is the motivation to use restrictions to hinder competitors from exploiting proprietary models; enforcement can also hinder effective use of a foundation model. Acceptable use policies can be categorized into content restrictions (the top 4: misinformation, harassment, privacy, discrimination) and end-use restrictions, e.g. Anthropic's restriction on "model scraping" (someone training their own AI model on prompts and outputs from Anthropic's model) or on scaling up distribution of AI-created content, such as automated online posting. As with the Zeng et al. articles, Klyman's article points the way to creating a homogeneous acceptable use policy across a diverse AI ecosystem. Steve Omohundro comments: "…the AI labs' 'alignment work' … is all about the AIs rather than their impact on the world. For goodness sake, the Chinese People's Liberation Army has already fine-tuned Meta's Llama 3.1 to promote Chinese military goals! And Meta's response was 'that's contrary to our acceptable use policy!'" From the article: "Without information about how acceptable use policies are enforced, it is not obvious that they are actually being implemented or effective in limiting dangerous uses. Companies are moving quickly to deploy their models and may in practice invest little in establishing and maintaining the trust and safety teams required to enforce their policies to limit risky uses."

Safety Methods

Benchmark Early and Red Team Often (Executive Summary excerpt)
Two leading methods for uncovering AI safety breaches are 1) inexpensive benchmarking against a standardized test suite, such as prompts for large language models, and 2) longer, higher-cost but more informative intensive, interactive testing by human domain experts ("red-teaming"). Barrett et al., from the UC Berkeley Center for Long-Term Cybersecurity, advocate the two-pronged approach indicated by the article title. They analyze the methods' potential for eliminating LLM "dual use," i.e. corrupting LLMs into creating chemical, biological, radiological, or nuclear (CBRN), cyber, or other weaponry or attacks, but the methods apply to less dangerous risk testing as well. Essentially, Barrett et al. advocate frequent use of benchmarks until a model attains a high safety score, followed by intensive red-teaming to test the model in more depth and yield more accuracy. Their paraphrase of the article title: Benchmark Early and Often, and Red-Team Often Enough.

Against Purposeful Artificial Intelligence Failures
A paper that had to be written, and not surprisingly was, by Yampolskiy, who has sought to cover every aspect of AGI risk: it argues that intentionally triggering an AI disaster should not be entertained as an option for alerting humanity to the danger of AGI.
Models That Prove Their Own Correctness
Especially in light of Dalrymple et al.'s governance proposal, Toward Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems, "models that prove their own correctness" seem especially desirable, if not essential. Dalrymple et al. call for 1) a world model, 2) a safety specification, and 3) a means to verify the safety specification. This is a highly intriguing proposal, but it falls short of providing an example of such a model or means of verification (we hear that Dalrymple is working on an example). Paradise et al. describe two uses of interactive proof systems (IPS) combined with ML to allow a model to prove its own "correctness," as specified by the user of the model. The first method requires access to a training set of IPS transcripts (the sequence of interactions between the Verifier and the Prover) in which the Verifier accepted the Prover's probabilistic proof. The second method, Reinforcement Learning from Verifier Feedback (RLVF; note the intentional similarity to Reinforcement Learning from Human Feedback, RLHF), avoids the need for the accepted transcripts (which are in essence an external truth oracle), but only after training its 'base model' on such verified transcripts using transcript learning; from then on it can generate its own emulated verified transcripts (a generic sketch of a verifier-feedback loop follows). The paper opens the door to other innovative applications of ML to IPS. This is a rather deep paper that requires further analysis to judge the realization of its promise. We look forward to a revised version after its peer review at an unspecified journal. We thank Syed Rafi for the pointer to the paper and Quinn Dougherty for inviting Orr Paradise to his safe AGI reading group.
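
RLVF is summarised above only at a high level, so the following is a deliberately generic reinforcement-from-verifier-feedback loop written under our own assumptions; it is not Paradise et al.'s algorithm, and every name in it is hypothetical. It illustrates only the shape of the protocol: the prover proposes a transcript, a verifier accepts or rejects it, and acceptance is the only learning signal:

```python
import random

def prover_generate(policy, claim):
    """Propose a one-step proof transcript for the claim."""
    return [policy.get(claim, random.choice(["step_a", "step_b"]))]

def verifier_accepts(claim, transcript):
    """Toy verifier: only the designated correct step is accepted."""
    return transcript == ["step_a"]

def rlvf_step(policy, claim):
    """One update: keep the behaviour only if the verifier accepts it."""
    transcript = prover_generate(policy, claim)
    if verifier_accepts(claim, transcript):
        policy[claim] = transcript[0]
    return policy

policy = {}
for _ in range(20):
    policy = rlvf_step(policy, "claim_1")
print(policy)  # almost surely {"claim_1": "step_a"}
```
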
Language-Guided World Models: A Model-Based Approach to AI Control
"Model-based agents are artificial agents equipped with probabilistic 'world models' that are capable of foreseeing the future state of an environment (Deisenroth and Rasmussen, 2011; Schmidhuber, 2015). World models endow these agents with the ability to plan and learn in imagination (i.e., internal simulation)…." Citing Dalrymple et al., Zhang et al. likewise extend the capabilities of world models to increase human control over AI. By adjusting the world model, humans can affect many context-sensitive policies simultaneously. However, for the human-AI interaction to be efficient, the world model must process natural language; hence, language-guided world models (LWMs). Natural-language processing also increases the efficiency of model learning by permitting models to read text. World models increase AI transparency, which natural-language interaction furthers by allowing humans to query models verbally. As an example, in Sec. 5.3, "Application: Agents that discuss plans with humans," Zhang et al. describe an agent that uses its LWM to plan a task and then ask a human to review the plan for safety.

Commentary

Steve Omohundro, "Progress in Superhuman Theorem Proving?"
Our co-founding editor Steve Omohundro is a strong proponent of Provably Safe AI, in which automated theorem-proving will play a major role.[2] Here Steve discusses current developments in using proof to lessen LLM hallucinations, the implications of superhuman theorem-proving for safe AGI, and resources for interested readers.

On Yampolskiy, "Against Purposeful Artificial Intelligence Failures"
Topic Editor Jim Miller, Professor of Economics, Game Theory, and Sociology at Smith College, critiques Roman Yampolskiy's argument against triggering a deliberate AI failure to wake the world up to AI dangers.

Leopold Aschenbrenner, Situational Awareness, "Parting Thoughts"
Aschenbrenner dismisses his critics as unrealistic and outlines the core tenets of "AI Realism."

Rowan McGovern, "Unhobbling Is All You Need?" Commentary on Aschenbrenner's Situational Awareness
McGovern questions Aschenbrenner's fundamental assumption that "unhobbling" alone ("fixing obvious ways in which models are hobbled by default, unlocking latent capabilities and giving them tools, leading to step-changes in usefulness") justifies extrapolating recent AI progress to predict the advent of AGI by 2030. McGovern: "Unhobbling conflates computing power with intelligence."

[1] https://incidentdatabase.ai/. "Like similar databases in aviation and computer security, the AI Incident Database aims to learn from experience so we can prevent or mitigate bad outcomes."
[2] Tegmark, M., & Omohundro, S. (2023). Provably safe systems: the only path to controllable AGI. arXiv. https://arxiv.org/abs/2309.01933

  • Research Article
  • 10.1186/s41239-025-00544-y
How could GenAI work on in-service teachers’ knowledge building process? An empirical study based on epistemic network analysis
  • Aug 12, 2025
  • International Journal of Educational Technology in Higher Education
  • Hui Zhang + 1 more

In-service teacher professional development (TPD) is essential for improving teacher quality and student outcomes. Effective professional development equips teachers to actively engage in problem-solving and meaning construction. However, current online TPD often lacks tailored support, structured analysis, communication, and feedback, limiting teachers' ability to engage in deep knowledge-building. Generative Artificial Intelligence (GenAI), exemplified by models like ChatGPT, has attracted significant attention for its potential in education, particularly in offering personalized feedback and fostering deep cognitive engagement. This study examines a large language model developed in China to investigate its impact on in-service teachers' knowledge-building processes. Through frequency analysis and epistemic network analysis, this study demonstrates that GenAI significantly enhances in-service teachers' information analysis and critical thinking. It also promotes greater attention to information processing, evaluation, and knowledge transfer during the knowledge-building process, although it performs less effectively in fostering social interaction and collaboration. The study further reveals that GenAI's impact on knowledge building varies across learning tasks, with its support being particularly significant in higher-order, complex tasks. Building on these findings, the study offers recommendations for teachers' professional development.

  • Research Article
  • Citations: 10
  • 10.1016/j.jmathb.2017.08.004
Mediational activities in a dynamic geometry environment and teachers’ specialized content knowledge
  • Sep 12, 2017
  • The Journal of Mathematical Behavior
  • Muteb M Alqahtani + 1 more

  • Research Article
  • 10.1080/0020739x.2025.2490104
Generative AI in mathematics education: pre-service teachers’ knowledge and implications for their professional development
  • Apr 23, 2025
  • International Journal of Mathematical Education in Science and Technology
  • Maria Lucia Bernardi + 3 more

Incorporating technologies with Generative Artificial Intelligence (GenAI) into education requires a shift in teaching methodologies. However, little is known about how pre-service teachers perceive the relevance and challenges of this incorporation, particularly in mathematics education. This study investigates pre-service teachers' interactions with GenAI, addressing the relevance and challenges of integrating it in mathematics teaching and discussing possible implications for their knowledge and professional development. Specifically, it intends to understand: How does the pre-service teachers' interaction with GenAI during the design and implementation of teaching activities relate to their professional knowledge? And how does this relation impact the relevance they ascribe to GenAI? In this qualitative and interpretative study, involving seven pre-service mathematics teachers, we analyse the interplay between participants' knowledge and use of a GenAI tool (in this case, ChatGPT), guided by the KTMT (Knowledge for Teaching Mathematics with Technology) model. The main conclusions reveal a landscape characterised by promise and challenge, where GenAI can be a valuable educational tool when used to facilitate discussion and promote critical thinking, highlighting the relevance and development of KTMT. The ability to evaluate and reflect on AI-generated responses can promote professional development, preparing pre-service teachers for an increasing presence of technology in educational environments.

  • Research Article
  • 10.1007/s40979-025-00180-z
Secondary school teachers’ perspectives on GenAI proliferation: generating advanced insights
  • Feb 17, 2025
  • International Journal for Educational Integrity
  • Rahul Kumar + 1 more

The proliferation of generative artificial intelligence (GenAI) technologies has significantly impacted the educational sector, prompting a re-evaluation of teaching, learning, and assessment practices. This study explores the perceptions of Ontario secondary school teachers regarding the challenges and opportunities presented by GenAI. Using a qualitative research method, 17 high school teachers were interviewed to understand their views on GenAI integration and its implications for academic integrity. The findings reveal three critical areas for integrating GenAI in education: generating people through professional development and ethical training for educators, generating programs by designing transparent and purpose-driven initiatives, and generating policies through the creation of clear, adaptable governance frameworks. Together, these pillars highlight the collaborative work needed to harness GenAI's potential while ensuring ethical and equitable practices in secondary education. These themes are a subset of invitational education and highlight the need for comprehensive training for teachers, the development of transparent guidelines and ethical practices, and the establishment of robust policies to support the integration of GenAI in education. The study emphasizes the importance of collaboration among educators, administrators, and other stakeholders to effectively navigate the evolving landscape of GenAI-driven educational environments. By addressing these pillars, academic institutions can harness the transformative potential of GenAI while maintaining the integrity and quality of education. This research provides valuable insights into the evolving role of teachers and the necessity for strategic planning, professional development, and policy frameworks to optimize the benefits of GenAI in secondary education.

  • Research Article
  • 10.29140/tltl.v7n2.102841
Preparing teachers for the algorithmic educational landscape: A critical mapping of generative AI integration in language teacher education
  • Sep 15, 2025
  • Technology in Language Teaching & Learning
  • Kadir Karakaya + 4 more

The increasing integration of generative Artificial Intelligence (AI) tools, such as ChatGPT, in education has prompted growing interest in their pedagogical potential and the emergent competencies required for their effective use in language instruction. While generative AI is beginning to influence language teaching and learning practices, emerging research suggests a growing need to address AI-related literacies and ethical considerations within language teacher education programs. Despite the growing number of studies examining generative AI’s use in language learning contexts, there remains a notable gap in systematically reviewing how generative AI is being addressed in teacher preparation and professional development. To address this gap, this study presents a bibliometric-based systematic literature review of research on generative AI in language teacher education, employing text-mining algorithms, data-mining heuristics, and social network analysis. The findings identify five major thematic clusters in the literature: (1) Professional Development and AI Literacy in Teacher Education, (2) Chatbots and Conversational AI in Language Learning, (3) Generative AI for Instructional Design, Assessment, and Lesson Planning, (4) Generative AI as a Tool for Enhancing EFL Writing Skills, and (5) Exploring Pre-Service Teachers’ Perceptions and Readiness. This review contributes to the growing discourse on AI in education by mapping the current research landscape and identifying critical directions for advancing generative AI integration in language teacher education.

  • Preprint Article
  • 10.31235/osf.io/c5u8r_v2
Global and Educational Disparities in AI Integration: A Study of L2 Teacher Training and Usage Patterns
  • Jun 18, 2025
  • Kristin Davin + 6 more

Generative Artificial Intelligence (GenAI) is reshaping education by introducing tools that enhance teaching methodologies, personalize learning, and streamline administrative tasks. However, adoption of these tools remains uneven, raising concerns about disparities in AI literacy and competency across geographic regions and educational contexts. Here we investigate the adoption of GenAI tools among second language (L2) educators in the United States, Colombia, Germany, and Macau—professionals uniquely positioned to benefit from and highlight barriers to GenAI integration. Using survey data, we assess four areas: accessibility of GenAI tools, teacher knowledge of potential applications, integration in teaching practices, and the nature of professional development provided. Our results indicated substantial intra- and inter-country variance, with U.S. and Colombian educators reporting higher familiarity and usage compared to those from Germany and Macau. Additionally, university and high school teachers were more likely to access professional development and leverage GenAI for tasks like assessment and differentiation than elementary or middle school educators, regardless of geographic setting. These disparities align with broader trends in AI adoption, reflecting heterogeneity in cultural attitudes, systemic barriers, and institutional support. Our findings highlight the critical need for targeted strategies that mitigate these emerging gaps in AI literacy, competency, and professional development.

  • Research Article
  • 10.36096/ijbes.v7i3.831
Evaluation of generative artificial intelligence (GENAI) as a transformative technology for effective and efficient governance, political knowledge, electoral, and democratic processes
  • Jul 15, 2025
  • International Journal of Business Ecosystem & Strategy (2687-2293)
  • Chiji Longinus Ezeji + 1 more

The incorporation of generative artificial intelligence in governance, political knowledge, electoral, and democratic processes is essential as the world transitions to a digital paradigm. Numerous nations have adopted Generative AI (GenAI), a disruptive technology that compels electoral bodies to advocate for the integration of such tools into governance, electoral, and democratic processes. Nevertheless, these technologies do not ensure effortless integration or efficient usage owing to intricate socio-cultural and human dynamics. Certain African jurisdictions are ill-prepared for the adoption of these technologies due to factors including underdevelopment, insufficient electrical supply, lack of technology literacy, reluctance to change, and the goals of governing parties. This study examines generative artificial intelligence as a disruptive technology for enhancing governance, political knowledge, electoral processes, and democracy. A mixed-method approach was employed, incorporating surveys and in-person interviews. The data analysis, discussion, and interpretation of findings were grounded in postdigital theory and thematic analysis, employing an abductive reasoning technique in alignment with the tenets of critical realism. The study demonstrated that GenAI can influence political knowledge and election processes and enhance efficiency in government and democracy. Moreover, GenAI, including ChatGPT, can either exacerbate or mitigate societal tendencies that contribute to human division: it can facilitate the dissemination of misinformation, perpetuate echo chambers, undermine social and political trust, and polarise disparate groups, viewpoints, or beliefs. AI offers substantial opportunities but also poses many obstacles, including technical constraints, ethical dilemmas, and social ramifications. The swift progression of AI may disrupt labour markets by automating tasks conventionally executed by people, resulting in job displacement. Implementing AI necessitates significant upskilling and proficiency with digital tools; therefore, governments and organisations must adequately train their personnel to reconcile the disparity between AI's capabilities and users' comprehension. Additionally, there is a requisite for governmental oversight, regulation, and monitoring of AI adoption and utilisation across all facets of its implementation.

  • Research Article
  • 10.4314/udslj.v20i1.9
Generative Artificial Intelligence-Based Learning Resources for Computing Students in Tanzania Higher Learning Institutions
  • Jul 5, 2025
  • University of Dar es Salaam Library Journal
  • Hadija Mbembati + 1 more

In higher education institutions, students pursuing information and communication technology and other computer-related fields are increasingly using Generative Artificial Intelligence (GenAI) as a learning tool. GenAI tools such as Chat Generative Pre-Trained Transformer (ChatGPT) assist students with learning tasks and are always available on demand. However, students' preferred GenAI learning resources, their awareness of different GenAI tools, the tools' applicability to various learning tasks, and patterns of GenAI usage remain unclear. Therefore, this paper investigates the preferred AI-based learning resources of computing students in Higher Learning Institutions (HLIs) using statistical methods including the mean, standard deviation, and cross-tabulations. The survey data were collected from 571 undergraduate students in three Tanzanian HLIs through an online questionnaire distributed via Google Forms. The results show that, despite the widespread use of GenAI learning resources, traditional learning resources continue to be employed in the learning process, and that the preferred learning resources differ depending on the task and the year of study. The findings also showed that computing students mostly use GenAI tools, such as ChatGPT and OpenAI, in various learning tasks. The findings offer valuable guidance for educators and policymakers on how to safely implement GenAI-based learning tools that effectively support students' learning needs in this GenAI era.
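
The descriptive statistics the authors name (mean, standard deviation, cross-tabulation) are simple to reproduce; here is a minimal pandas sketch on made-up survey rows, where every column name and value is hypothetical rather than taken from the study's data:

```python
import pandas as pd

# Hypothetical survey rows; not the study's data.
survey = pd.DataFrame({
    "year_of_study":  [1, 1, 2, 3, 3, 4],
    "preferred_tool": ["ChatGPT", "library", "ChatGPT",
                       "ChatGPT", "library", "ChatGPT"],
    "usage_per_week": [5, 1, 4, 6, 2, 7],  # self-reported sessions
})

# Mean and standard deviation of self-reported GenAI usage.
print(survey["usage_per_week"].mean())
print(survey["usage_per_week"].std())

# Cross-tabulation: preferred learning resource by year of study.
print(pd.crosstab(survey["year_of_study"], survey["preferred_tool"]))
```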

  • Research Article
  • Citations: 16
  • 10.1007/s11858-009-0201-9
Introduction: the transformative nature of “dynamic” educational technology
  • Jul 29, 2009
  • ZDM
  • Stephen J Hegedus + 1 more

The presence and intelligent use of digital technologies in mathematics education has awakened an interest in understanding new ways of conceiving mathematics and mathematical cognition. Cognition, especially mathematical cognition, can be understood in terms of the emergence of successive and evolving representational systems (Kaput, 1991). As a consequence, the presence of digital technologies in education calls us to address the fundamental issue that curricular structures eventually will be inhabited by these technologies. It has already happened in the past: the technology of writing and the technology of positional notation of numbers are two of the milestones in the history of semiotic representations with a living impact on education. However, we cannot forget that a school culture always leaves significant marks on students' and teachers' values. Artigue (2005, p. 246) states that "these [previous] values were established, through history, in environments poor in technology, and they have only slowly come to terms with the evolution of mathematical practice linked to technological evolution." Thus, the school culture requires the gradual re-orientation of its practices to gain access to new habits of mind and to the new environments resulting from a serious presence of digital technologies. Consequently, this issue will offer a new perspective, with examples from our research, to approach the new problematique made tangible by digital technologies. We particularly focus on the role of "dynamic mathematics" as an umbrella description of a certain technology (both software and integrated hardware) that opens up a new exploration space for learners. We present data and analyses of students working with dynamic geometry environments and software that links multiple representations of function in interactive ways across networks. We refer to these more globally as Dynamic Technological Environments, or simply DTEs. Within such environments, students are capable of exhibiting new forms of expressivity associated with their explorations and new forms of understanding based upon the capacity of the environment to react to the actions proposed by the students. This special issue aims to analyze the impact of dynamic mathematics technologies on learning, didactics and curriculum development from multiple perspectives. Particularly, it will examine the differences between the phenomenological aspects of dynamism in the use of technology more generally, contrasting the use of technology as a temporary, engaging activity (the "field trip" syndrome) with more fundamental theoretical and epistemological dimensions of sustained uses of certain technologies. These focus on increasing the accessibility of conceptually difficult mathematics through the transformation of mathematical ideas and experiences. A specific contribution of all articles in this special issue is the presentation of the paucity of research in the field from an interdisciplinary perspective, especially with respect to particular mathematical topics, and suggestions for programs of research that might attend to these issues. When we thought about a title for this issue, the word transforming became our main focus of attention. Given the challenges of education globally, we prefer to think about "transforming" the very socio-cultural system within …

  • Research Article
  • Citations: 11
  • 10.70725/815246mfssgp
Generative AI and Teachers’ Perspectives on Its Implementation in Education
  • Jan 1, 2023
  • Journal of Interactive Learning Research
  • Regina Kaplan-Rakowski + 3 more

While artificial intelligence (AI) has been integral to daily life for decades, the release of open generative AI (GAI) tools such as ChatGPT has considerably accelerated scholars' interest in the impact of GAI in education. Both promises and fears of GAI have become apparent. This quantitative study explored teachers' perspectives on GAI and its potential implementation in education. A diverse group of teachers (N = 147) completed a validated survey sharing their views on GAI technology in terms of its use, integration, potential, and concerns. Overall, the teachers expressed positive perspectives towards GAI regardless of their teaching style. The findings of the study suggest that the more frequently teachers used GAI, the more positive their perspectives became. The teachers believed that GAI could enhance their professional development and could be a valuable tool for students. Although no guarantee exists that teachers' perspectives translate into actions, previous research shows that technology integration and diffusion are highly dependent on teachers' initial views (Ismail et al., 2010; Sugar et al., 2004). The findings of this study have implications for how GAI may be integrated into teaching and learning practices.

  • Research Article
  • Citations: 9
  • 10.11114/jets.v5i9.2556
Mathematics Teachers’ Beliefs about Inquiry-based Learning after a Professional Development Course–An International Study
  • Jul 27, 2017
  • Journal of Education and Training Studies
  • Katja Maass + 2 more

Inquiry-based learning (IBL) is a more student-centered approach to mathematics teaching that is recommended by many policy and curriculum documents across Europe. However, it is not easy for teachers to change from a more teacher-centered way of teaching to inquiry-based teaching as this involves a change of their role in class. Professional development courses are one way to help teachers with this endeavor. Within the discussion of effective professional development, beliefs are often named as an important influencing factor. In this respect, much research has been carried out on how beliefs on mathematics teaching impact the outcomes of the course. However, there has been much less research on what beliefs mathematics teachers develop on inquiry-based learning and how this might impact their (perceived) classroom teaching. Therefore, this paper presents an international research study carried out within the European Project Primas, in which professional development courses on inquiry-based learning were conducted in 12 countries. Using the case-study approach, this paper aims at answering the following questions: 1. What kind of beliefs about IBL do mathematics teachers across Europe develop? 2. How do these beliefs relate to teachers’ perceived enactments of IBL?

  • Research Article
  • Citations: 2
  • 10.1177/20552076251328807
Digital transformation of nephrology POCUS education-Integrating a multiagent, artificial intelligence, and human collaboration-enhanced curriculum with expert feedback.
  • Mar 1, 2025
  • Digital health
  • Mohammad S Sheikh + 7 more

The digital transformation in medical education is reshaping how clinical skills, such as point-of-care ultrasound (POCUS), are taught. In nephrology fellowship programs, POCUS is essential for enhancing diagnostic accuracy, guiding procedures, and optimizing patient management. To address these evolving demands, we developed an artificial intelligence (AI)-driven POCUS curriculum using a multiagent approach that integrates human expertise with advanced AI models, thereby elevating educational standards and better preparing fellows for contemporary clinical practice. In April 2024, the Mayo Clinic Minnesota Nephrology Fellowship Program initiated a novel AI-assisted process to design a comprehensive POCUS curriculum. This process integrated multiple advanced AI models (including GPT-4.0, Claude 3.0 Opus, Gemini Advanced, and Meta AI with Llama 3) to generate initial drafts and iteratively refine content. A panel of blinded nephrology POCUS experts subsequently reviewed and modified the AI-generated material to ensure both clinical relevance and educational rigor. The curriculum underwent 12 iterative revisions, incorporating feedback from 29 communications across AI models. Key features of the final curriculum included expanded core topics, diversified teaching methods, enhanced assessment tools, and integration into inpatient and outpatient nephrology rotations. The curriculum emphasized quality assurance, POCUS limitations, and essential clinical applications, such as fistula/graft evaluation and software integration. Alignment with certification standards further strengthened its utility. AI models contributed significantly to the curriculum's foundational structure, while human experts provided critical clinical insights. This curriculum, enhanced through a multiagent approach that combines AI and human collaboration, exemplifies the transformative potential of digital tools in nephrology education. The innovative framework seamlessly integrates advanced AI models with expert clinical insights, providing a scalable model for medical curriculum development that is responsive to evolving educational demands. The synergy between technological innovation and human expertise holds promising implications for advancing fellowship training. Future studies should evaluate its impact on clinical competencies and patient outcomes across diverse practice environments.

More from: International Journal of Mathematical Education in Science and Technology
  • Research Article
  • 10.1080/0020739x.2025.2580930
Jordan canonical form without tears
  • Nov 6, 2025
  • International Journal of Mathematical Education in Science and Technology
  • Zhibin Yan

  • Front Matter
  • 10.1080/0020739x.2025.2580038
Issue 56-11 Covers and Table of Contents
  • Nov 2, 2025
  • International Journal of Mathematical Education in Science and Technology

  • Research Article
  • 10.1080/0020739x.2025.2574941
Proving the point: Scottish secondary mathematics teachers’ beliefs and aspirations about proof
  • Oct 31, 2025
  • International Journal of Mathematical Education in Science and Technology
  • Paul Argyle Mcdonald

  • Research Article
  • 10.1080/0020739x.2025.2571123
Mathematical communication: conceptions of academics in mathematics and statistics
  • Oct 25, 2025
  • International Journal of Mathematical Education in Science and Technology
  • Oliver Murfett + 1 more

  • Research Article
  • 10.1080/0020739x.2025.2572379
Exploring students’ approaches to function composition from graphs
  • Oct 25, 2025
  • International Journal of Mathematical Education in Science and Technology
  • Yuxi Chen + 3 more

  • Research Article
  • 10.1080/0020739x.2025.2563125
Teaching assistants’ experiences navigating first-year statistics
  • Oct 23, 2025
  • International Journal of Mathematical Education in Science and Technology
  • Peter K Dunn + 1 more

  • Research Article
  • 10.1080/0020739x.2025.2556867
Student-generated explanation in undergraduate mathematics and statistics education: a systematic literature review
  • Oct 23, 2025
  • International Journal of Mathematical Education in Science and Technology
  • Huixin Gao + 2 more

  • Research Article
  • 10.1080/0020739x.2025.2567904
Diagnosing readiness: institutional responses to mathematics transitions in higher education
  • Oct 14, 2025
  • International Journal of Mathematical Education in Science and Technology
  • Alison Reddy + 3 more

  • Research Article
  • 10.1080/0020739x.2025.2563124
Bridging students’ informal discourse and formal discourse on limits
  • Oct 14, 2025
  • International Journal of Mathematical Education in Science and Technology
  • Jungeun Park

  • Research Article
  • 10.1080/0020739x.2025.2560418
Postsecondary mathematics instructors’ characterizations of equitable and inclusive teaching
  • Oct 9, 2025
  • International Journal of Mathematical Education in Science and Technology
  • Dakota White + 5 more
