Enhancing Student Learning and Creativity Through LLM-based PBL Semester Projects

  • Abstract
  • Similar Papers
Abstract

With the advent of Large Language Models (LLMs), they are becoming a larger part of people’s everyday lives – in their work, personal life and learning. For programmers and software developers in particular, learning how to best utilize LLMs as part of their work is becoming a crucial skill. This is especially important for students, and educators have a duty to prepare them to tackle obstacles and make the best use of AI as a tool in their programming arsenal. Research in this area normally focuses on the use of LLMs as tools for teaching and evaluation. This research takes another approach, presenting the results of integrating LLMs as a central concept of project-based learning (PBL) semester projects for students across multiple grades, from 5th semester bachelor’s to 10th semester master’s. All projects develop interactive systems, both traditional and virtual reality, and encompass a wide variety of contexts that utilize AI as a central mechanic. We report the attitudes of the participating students towards utilizing LLMs, their understanding of AI systems before and after the projects, and their overall satisfaction with utilizing relatively new and open technology such as LLMs. To our knowledge, this is one of the first such meta-analyses of the long-term effects of utilizing LLMs in students’ work. We demonstrate the positive impact of utilizing LLMs on students’ motivation and learning and propose several best practices to avoid some of the pitfalls associated with using these tools.
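The paper itself contains no code, but to make "AI as a central mechanic" concrete, the following is a minimal sketch of the kind of LLM-driven dialogue loop a student project (traditional or VR) might be built around. It assumes the OpenAI Python client; the model name and persona prompt are illustrative placeholders, not artifacts from the projects described.

```python
# Minimal sketch (not from the paper): an LLM-driven NPC dialogue loop such as a
# student project might embed in a traditional or VR interactive system.
# Assumes the OpenAI Python client (openai>=1.0) and an OPENAI_API_KEY env var;
# the model name and persona prompt are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

PERSONA = (
    "You are a museum guide character inside a VR exhibit. "
    "Answer visitor questions in at most two sentences and stay in character."
)

def npc_reply(history: list, player_utterance: str) -> str:
    """Send the running conversation plus the new utterance to the LLM."""
    history.append({"role": "user", "content": player_utterance})
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[{"role": "system", "content": PERSONA}] + history,
        temperature=0.7,
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

if __name__ == "__main__":
    history = []
    print(npc_reply(history, "What is this exhibit about?"))
```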

Similar Papers
  • Research Article
  • 10.1057/s41599-025-04912-x
Does GPT-4 surpass human performance in linguistic pragmatics?
  • Jun 10, 2025
  • Humanities and Social Sciences Communications
  • Ljubiša Bojić + 2 more

As Large Language Models (LLMs) become increasingly integrated into everyday life as general-purpose multimodal AI systems, their capabilities to simulate human understanding are under examination. This study investigates LLMs’ ability to interpret linguistic pragmatics, which involves context and implied meanings. Using Grice’s communication principles, we evaluated both LLMs (GPT-2, GPT-3, GPT-3.5, GPT-4, and Bard) and human subjects (N = 147) on dialogue-based tasks. Human participants included 71 primarily Serbian students and 76 native English speakers from the United States. Findings revealed that LLMs, particularly GPT-4, outperformed humans. GPT-4 achieved the highest score of 4.80, surpassing the best human score of 4.55. Other LLMs performed well: GPT-3.5 scored 4.10, Bard 3.75, and GPT-3 3.25; GPT-2 had the lowest score of 1.05. The average LLM score was 3.39, exceeding the human cohorts’ averages of 2.80 (Serbian students) and 2.34 (U.S. participants). In the ranking of all 155 subjects (including LLMs and humans), GPT-4 secured the top position, while the best human ranked second. These results highlight significant progress in LLMs’ ability to simulate understanding of linguistic pragmatics. Future studies should confirm these findings with more dialogue-based tasks and diverse participants. This research has important implications for advancing general-purpose AI models in various communication-centered tasks, including potential application in humanoid robots in the future.

  • Research Article
  • Cited by 10
  • 10.9781/ijimai.2024.02.007
Virtual Reality and Language Models, a New Frontier in Learning.
  • Mar 1, 2024
  • International Journal of Interactive Multimedia and Artificial Intelligence
  • Juan Izquierdo Domenech + 2 more

The proposed research introduces an innovative Virtual Reality (VR) and Large Language Model (LLM) architecture to enhance the learning process across diverse educational contexts, ranging from school to industrial settings. Leveraging the capabilities of LLMs and Retrieval-Augmented Generation (RAG), the architecture centers around an immersive VR application. This application empowers students of all backgrounds to interactively engage with their environment by posing questions and receiving informative responses in text format and with visual hints in VR, thereby fostering a dynamic learning experience. LLMs with RAG act as the backbones of this architecture, facilitating the integration of private or domain-specific data into the learning process. By seamlessly connecting various data sources through data connectors, RAG overcomes the challenge of disparate and siloed information repositories, including APIs, PDFs, SQL databases, and more. The data indexes provided by RAG solutions further streamline this process by structuring the ingested data into formats optimized for consumption by LLMs. An empirical study was conducted to evaluate the effectiveness of this VR and LLM architecture. Twenty participants, divided into Experimental and Control groups, were selected to assess the impact on their learning process. The Experimental group utilized the immersive VR application, which allowed interactive engagement with the educational environment, while the Control group followed traditional learning methods. The study revealed significant improvements in learning outcomes for the Experimental group, demonstrating the potential of integrating VR and LLMs in enhancing comprehension and engagement in learning contexts. This study presents an innovative approach that capitalizes on the synergy between LLMs and immersive VR technology, opening avenues for a transformative learning experience that transcends traditional boundaries and empowers learners across a spectrum of educational landscapes.
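The abstract above centers on LLMs with Retrieval-Augmented Generation. As a hedged illustration of the core retrieve-then-augment step (not the authors' implementation), the sketch below uses a toy bag-of-words index over a few training documents; a production system would use embedding-based indexes and the data connectors the abstract mentions.

```python
# Minimal sketch (not the authors' system): the retrieve-then-augment step of a
# RAG pipeline. A toy bag-of-words index stands in for the embedding-based data
# indexes the abstract describes; documents and prompt template are illustrative.
import math
from collections import Counter

DOCUMENTS = [
    "The hydraulic press in cell 3 operates at 200 bar and requires two-person lockout.",
    "Safety goggles and hearing protection are mandatory on the assembly floor.",
    "The CNC mill accepts G-code uploaded through the shared network drive.",
]

def vectorize(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(question: str, k: int = 2) -> list:
    q = vectorize(question)
    ranked = sorted(DOCUMENTS, key=lambda d: cosine(q, vectorize(d)), reverse=True)
    return ranked[:k]

def build_prompt(question: str) -> str:
    context = "\n".join(f"- {doc}" for doc in retrieve(question))
    return (
        "Answer the trainee's question using only the context below.\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

print(build_prompt("What pressure does the press in cell 3 use?"))
```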

  • Research Article
  • Cited by 1
  • 10.1145/3701194
Exploring Large Language Models Through a Neurodivergent Lens: Use, Challenges, Community-Driven Workarounds, and Concerns
  • Jan 10, 2025
  • Proceedings of the ACM on Human-Computer Interaction
  • Buse Carik + 3 more

Despite the increasing use of large language models (LLMs) in everyday life among neurodivergent individuals, our knowledge of how they engage with and perceive LLMs remains limited. In this study, we investigate how neurodivergent individuals interact with LLMs by qualitatively analyzing topically related discussions from 61 neurodivergent communities on Reddit. Our findings reveal 20 specific LLM use cases across five core thematic areas of use among neurodivergent users: emotional well-being, mental health support, interpersonal communication, learning, and professional development and productivity. We also identified key challenges, including overly neurotypical LLM responses and the limitations of text-based interactions. In response to such challenges, some users actively seek advice by sharing input prompts and corresponding LLM responses. Others develop workarounds by experimenting and modifying prompts to be more neurodivergent-friendly. Despite these efforts, users have significant concerns around LLM use, including potential overreliance and fear of replacing human connections. Our analysis highlights the need to make LLMs more inclusive for neurodivergent users and implications around how LLM technologies can reinforce unintended consequences and behaviors.

  • Research Article
  • Cited by 1
  • 10.1609/aies.v7i1.31741
Decoding Multilingual Moral Preferences: Unveiling LLM's Biases through the Moral Machine Experiment
  • Oct 16, 2024
  • Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society
  • Karina Vida + 2 more

Large language models (LLMs) increasingly find their way into the most diverse areas of our everyday lives. They indirectly influence people's decisions or opinions through their daily use. Therefore, understanding how and which moral judgements these LLMs make is crucial. However, morality is not universal and depends on the cultural background. This raises the question of whether these cultural preferences are also reflected in LLMs when prompted in different languages or whether moral decision-making is consistent across different languages. So far, most research has focused on investigating the inherent values of LLMs in English. While a few works analyse moral bias in LLMs in a multilingual setting, these analyses do not go beyond atomic actions. To the best of our knowledge, a multilingual analysis of moral bias in dilemmas has not yet been conducted. To address this, our paper builds on the moral machine experiment (MME) to investigate the moral preferences of five LLMs, Falcon, Gemini, Llama, GPT, and MPT, in a multilingual setting and compares them with the preferences collected from humans belonging to different cultures. To accomplish this, we generate 6500 scenarios of the MME and prompt the models in ten languages on which action to take. Our analysis reveals that all LLMs exhibit different moral biases to some degree and that they not only differ from human preferences but also differ across languages within the models themselves. Moreover, we find that almost all models, particularly Llama 3, diverge greatly from human values and, for instance, prefer saving fewer people over saving more.

  • Research Article
  • Cited by 32
  • 10.1073/pnas.2317967121
Deception abilities emerged in large language models
  • Jun 4, 2024
  • Proceedings of the National Academy of Sciences
  • Thilo Hagendorff

Large language models (LLMs) are currently at the forefront of intertwining AI systems with human communication and everyday life. Thus, aligning them with human values is of great importance. However, given the steady increase in reasoning abilities, future LLMs are under suspicion of becoming able to deceive human operators and utilizing this ability to bypass monitoring efforts. As a prerequisite to this, LLMs need to possess a conceptual understanding of deception strategies. This study reveals that such strategies emerged in state-of-the-art LLMs, but were nonexistent in earlier LLMs. We conduct a series of experiments showing that state-of-the-art LLMs are able to understand and induce false beliefs in other agents, that their performance in complex deception scenarios can be amplified utilizing chain-of-thought reasoning, and that eliciting Machiavellianism in LLMs can trigger misaligned deceptive behavior. GPT-4, for instance, exhibits deceptive behavior in simple test scenarios 99.16% of the time (P < 0.001). In complex second-order deception test scenarios where the aim is to mislead someone who expects to be deceived, GPT-4 resorts to deceptive behavior 71.46% of the time (P < 0.001) when augmented with chain-of-thought reasoning. In sum, revealing hitherto unknown machine behavior in LLMs, our study contributes to the nascent field of machine psychology.

  • Research Article
  • Cited by 6
  • 10.1089/cyber.2024.0409
Psychomatics-A Multidisciplinary Framework for Understanding Artificial Minds.
  • Aug 29, 2024
  • Cyberpsychology, behavior and social networking
  • Giuseppe Riva + 4 more

Although large language models (LLMs) and other artificial intelligence systems demonstrate cognitive skills similar to humans, such as concept learning and language acquisition, the way they process information fundamentally differs from biological cognition. To better understand these differences, this article introduces Psychomatics, a multidisciplinary framework bridging cognitive science, linguistics, and computer science. It aims to delve deeper into the high-level functioning of LLMs, focusing specifically on how LLMs acquire, learn, remember, and use information to produce their outputs. To achieve this goal, Psychomatics will rely on a comparative methodology, starting from a theory-driven research question-is the process of language development and use different in humans and LLMs?-drawing parallels between LLMs and biological systems. Our analysis shows how LLMs can map and manipulate complex linguistic patterns in their training data. Moreover, LLMs can follow Grice's Cooperative principle to provide relevant and informative responses. However, human cognition draws from multiple sources of meaning, including experiential, emotional, and imaginative facets, which transcend mere language processing and are rooted in our social and developmental trajectories. Moreover, current LLMs lack physical embodiment, reducing their ability to make sense of the intricate interplay between perception, action, and cognition that shapes human understanding and expression. Ultimately, Psychomatics holds the potential to yield transformative insights into the nature of language, cognition, and intelligence, both artificial and biological. Moreover, by drawing parallels between LLMs and human cognitive processes, Psychomatics can inform the development of more robust and human-like artificial intelligence systems.

  • Research Article
  • Cited by 6
  • 10.1145/3699598
Exploring Automated Assertion Generation via Large Language Models
  • Feb 23, 2025
  • ACM Transactions on Software Engineering and Methodology
  • Quanjun Zhang + 7 more

Unit testing aims to validate the correctness of software system units and has become an essential practice in software development and maintenance. However, it is incredibly time-consuming and labor-intensive for testing experts to write unit test cases manually, including test inputs (i.e., prefixes) and test oracles (i.e., assertions). Very recently, some techniques have been proposed to apply Large Language Models (LLMs) to generate unit assertions and have proven the potential in reducing manual testing efforts. However, there has been no systematic comparison of the effectiveness of these LLMs, and their pros and cons remain unexplored. To bridge this gap, we perform the first extensive study on applying various LLMs to automated assertion generation. The experimental results on two independent datasets show that studied LLMs outperform six state-of-the-art techniques with a prediction accuracy of 51.82%–58.71% and 38.72%–48.19%. The improvements achieve 29.60% and 12.47% on average. Besides, as a representative LLM, CodeT5 consistently outperforms all studied LLMs and all baselines on both datasets, with an average improvement of 13.85% and 26.64%, respectively. We also explore the performance of generated assertions in detecting real-world bugs, and find LLMs are able to detect 32 bugs from Defects4J on average, with an improvement of 52.38% against the most recent approach EditAS . Inspired by the findings, we construct a simplistic retrieval-and-repair-enhanced LLM-based approach by transforming the assertion generation problem into a program repair task for retrieved similar assertions. Surprisingly, such a simplistic approach can further improve the prediction accuracy of LLMs by 9.40% on average, leading to new records on both datasets. Besides, we provide additional discussions from different aspects (e.g., the impact of assertion types and test lengths) to illustrate the capacity and limitations of LLM-based approaches. Finally, we further pinpoint various practical guidelines (e.g., the improvement of multiple candidate assertions) for advanced LLM-based assertion generation in the near future. Overall, our work underscores the promising future of adopting off-the-shelf LLMs to generate accurate and meaningful assertions in real-world test cases and reduce the manual efforts of unit testing experts in practical scenarios.
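The study above prompts LLMs with a test prefix and focal method to produce the missing assertion, and its retrieval-and-repair variant additionally supplies a similar retrieved assertion. The sketch below is a hedged illustration of that prompt construction only; the template, example code, and stubbed generate() call are placeholders, not the study's artifacts.

```python
# Illustrative sketch (not the study's tooling): building an assertion-generation
# prompt from a focal method and a test prefix, optionally including a retrieved
# similar assertion as in a retrieval-and-repair setup. generate() is a stub.
FOCAL_METHOD = """\
public static int clamp(int value, int lo, int hi) {
    return Math.max(lo, Math.min(hi, value));
}"""

TEST_PREFIX = """\
@Test
public void testClampAboveRange() {
    int result = clamp(15, 0, 10);
    // <ASSERTION>
}"""

RETRIEVED_SIMILAR = "assertEquals(10, clamp(12, 0, 10));"

def build_prompt(focal: str, prefix: str, similar: str = "") -> str:
    parts = [
        "Complete the JUnit test by replacing <ASSERTION> with one assertion.",
        "Focal method:\n" + focal,
        "Test prefix:\n" + prefix,
    ]
    if similar:
        parts.append("A similar assertion from a related test:\n" + similar)
    return "\n\n".join(parts)

def generate(prompt: str) -> str:
    """Stub for an LLM call; a real setup would query a code model here."""
    raise NotImplementedError

print(build_prompt(FOCAL_METHOD, TEST_PREFIX, RETRIEVED_SIMILAR))
```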

  • Research Article
  • Cited by 1
  • 10.53964/jmer.2024019
Reflections on Enhancing Higher Education Classroom Effectiveness Through the Introduction of Large Language Models
  • Nov 12, 2024
  • Journal of Modern Educational Research
  • Xiaoming Zhang + 2 more

Objective: The objective of this study is to explore the potential of integrating Large language models (LLMs) into higher education to enhance teaching effectiveness. It investigates how LLMs can support personalized learning, improve teacher-student interaction, and foster content innovation. The study also addresses the challenges associated with the use of LLMs, including the transformation of the teacherʼs role, data privacy concerns, and technical limitations in handling complex cognitive tasks. Methods: A mixed-method approach was used, combining a literature review, survey, and case study analysis. The literature review focused on artificial intelligence applications in education, while a survey was conducted among 120 professors and 533 students from five universities in China to gather quantitative data on their experiences with AI in education. Case studies were also analyzed to assess the effectiveness of LLM-supported learning platforms in enhancing classroom engagement, interaction, and teaching outcomes. Results: The survey results revealed that 68% of professors and 74% of students found LLMs beneficial for personalized learning and improved classroom engagement. However, 58% of professors raised concerns regarding the changing role of teachers and data privacy issues, while 49% of students worried about over-reliance on AI affecting their independent learning. Case studies showed a 30% improvement in teacher-student interaction and a 25% increase in student engagement, although LLMs struggled with advanced cognitive tasks in specialized fields such as mathematics. Conclusion: The study concludes that while LLMs offer significant advantages in improving personalized learning and enhancing interaction, their integration into higher education must be managed carefully. Teacher training, ethical considerations, and data privacy safeguards are essential. Future research should focus on optimizing LLMs for specialized academic fields and exploring their combination with emerging technologies like virtual reality and augmented reality to create more interactive learning environments.

  • Research Article
  • 10.47363/jaicc/2023(2)442
AI-Powered Code Generation Evaluating the Effectiveness of Large Language Models (LLMs) in Automated Software Development
  • Mar 31, 2023
  • Journal of Artificial Intelligence & Cloud Computing
  • Ravikanth Konda

The rapid evolution of Artificial Intelligence (AI) has brought about significant advancements in multiple domains, including software development. One of the most promising innovations is AI-powered code generation through Large Language Models (LLMs), such as OpenAI’s GPT-3 and GPT-4. These models, having been trained on large amounts of programming data, can produce human-readable code from natural language inputs, which holds great potential for simplifying and optimizing software development processes. The aim of this paper is to analyze the performance of LLMs in automated software development by evaluating them on a variety of tasks such as code generation, debugging, and software optimization. The research explores both the strengths and weaknesses of these models in terms of key indicators such as code quality, generation time, and maintainability. According to our observations, although LLMs hold immense potential to automate mundane programming tasks and enhance developer productivity, they still struggle with more intricate, domain-specific programming tasks that require a higher level of understanding, for example, designing architectures and top-level decision-making. In spite of such shortcomings, LLMs can tremendously enhance software development processes, particularly for small-scale projects, or act as helpers for more senior developers. The paper concludes by reflecting on the potential for LLMs to transform software development processes in the future, while also noting that the models’ reliability, code quality, and security must be improved if they are to be applied to larger, more critical uses.

  • Research Article
  • Cited by 1
  • 10.1145/3725529
Investigating the Role of Cultural Values in Adopting Large Language Models for Software Engineering
  • Mar 21, 2025
  • ACM Transactions on Software Engineering and Methodology
  • Stefano Lambiase + 4 more

As a socio-technical activity, software development involves the close interconnection of people and technology. The integration of Large Language Models (LLMs) into this process exemplifies the socio-technical nature of software development. Although LLMs influence the development process, software development remains fundamentally human-centric, necessitating an investigation of the human factors in this adoption. Thus, with this study we explore the factors influencing the adoption of LLMs in software development, focusing on the role of professionals’ cultural values. Guided by the Unified Theory of Acceptance and Use of Technology (UTAUT2) and Hofstede’s cultural dimensions, we hypothesized that cultural values moderate the relationships within the UTAUT2 framework. Using Partial Least Squares-Structural Equation Modelling and data from 188 software engineers, we found that habit and performance expectancy are the primary drivers of LLM adoption, while cultural values do not significantly moderate this process. These findings suggest that, by highlighting how LLMs can boost performance and efficiency, organizations can encourage their use, no matter the cultural differences. Practical steps include offering training programs to demonstrate LLM benefits, creating a supportive environment for regular use, and continuously tracking and sharing performance improvements from using LLMs.

  • Research Article
  • 10.3390/aerospace12060498
Using Large Language Models for Aerospace Code Generation: Methods, Benchmarks, and Potential Values
  • May 30, 2025
  • Aerospace
  • Rui He + 4 more

In recent years, Large Language Models (LLMs) have witnessed rapid advancements, revolutionizing various domains. Within the realm of software development, code generation technology powered by LLMs has emerged as a prominent research focus. Despite its potential, the application of this technology in the aerospace sector remains in its nascent, exploratory phase. This paper delves into the intricacies of LLM-based code generation methods and explores their potential applications in aerospace contexts. It introduces RepoSpace, the pioneering warehouse-level benchmark test for code generation of spaceborne equipment. Comprising 825 samples from five actual projects, this benchmark offers a more precise evaluation of LLMs’ capabilities in aerospace scenarios. Through extensive evaluations of seven state-of-the-art LLMs on RepoSpace, the study reveals that domain-specific differences significantly impact the code generation performance of LLMs. Existing LLMs exhibit subpar performance in specialized warehouse-level code generation tasks for aerospace, with their performance markedly lower than that of domain tasks. The research further demonstrates that Retrieval Augmented Generation (RAG) technology can effectively enhance LLMs’ code generation capabilities. Additionally, the use of appropriate prompt templates can guide the models to achieve superior results. Moreover, high-quality documentation strings are found to be crucial in improving LLMs’ performance in warehouse-level code generation tasks. This study provides a vital reference for leveraging LLMs for code generation in the aerospace field, thereby fostering technological innovation and progress in this critical domain.

  • Research Article
  • 10.1145/3728951
A Large-Scale Empirical Study on Fine-Tuning Large Language Models for Unit Testing
  • Jun 22, 2025
  • Proceedings of the ACM on Software Engineering
  • Ye Shang + 5 more

Unit testing plays a pivotal role in software development, improving software quality and reliability. However, generating effective test cases manually is time-consuming, prompting interest in unit testing research. Recently, Large Language Models (LLMs) have shown potential in various unit testing tasks, including test generation, assertion generation, and test evolution, but existing studies are limited in scope and lack a systematic evaluation of the effectiveness of LLMs. To bridge this gap, we present a large-scale empirical study on fine-tuning LLMs for unit testing. Our study involves three unit testing tasks, five benchmarks, eight evaluation metrics, and 37 popular LLMs across various architectures and sizes, consuming over 3,000 NVIDIA A100 GPU hours. We focus on three key research questions: (1) the performance of LLMs compared to state-of-the-art methods, (2) the impact of different factors on LLM performance, and (3) the effectiveness of fine-tuning versus prompt engineering. Our findings reveal that LLMs outperform existing state-of-the-art approaches on all three unit testing tasks across nearly all metrics, highlighting the potential of fine-tuning LLMs in unit testing tasks. Furthermore, large-scale, decoder-only models achieve the best results across tasks, while encoder-decoder models perform better under the same parameter scale. Additionally, the comparison between fine-tuning and prompt engineering approaches reveals the considerable potential of prompt engineering in unit testing tasks. We then discuss issues of concern in the test generation task, including data leakage, bug detection capabilities, and metric comparisons. Finally, we further pinpoint various practical guidelines for LLM-based approaches to unit testing tasks in the near future. Overall, our work demonstrates the promising future of fine-tuning LLMs on unit testing tasks and reduces the manual efforts of unit testing experts in practical scenarios.
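To give a concrete, hedged impression of what "fine-tuning LLMs for unit testing" involves, the sketch below fine-tunes a small encoder-decoder code model on (test prefix, assertion) pairs with Hugging Face transformers. The checkpoint name, hyperparameters, and the two toy examples are illustrative placeholders; the study trains 37 models at a much larger scale with its own pipeline.

```python
# Hedged sketch (not the study's training pipeline): fine-tuning a small
# encoder-decoder code model on (test prefix -> assertion) pairs.
# Checkpoint, hyperparameters, and data are illustrative placeholders.
import torch
from torch.utils.data import Dataset
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, Trainer, TrainingArguments

MODEL_NAME = "Salesforce/codet5-small"  # placeholder checkpoint

PAIRS = [
    ("int r = clamp(15, 0, 10); // <ASSERTION>", "assertEquals(10, r);"),
    ('String s = greet("Ada"); // <ASSERTION>', 'assertEquals("Hello, Ada", s);'),
]

class AssertionDataset(Dataset):
    """Tokenized (source, target) pairs in the format Trainer expects."""
    def __init__(self, pairs, tokenizer):
        self.examples = []
        for src, tgt in pairs:
            enc = tokenizer(src, truncation=True, max_length=128, padding="max_length")
            lab = tokenizer(tgt, truncation=True, max_length=64, padding="max_length")
            enc["labels"] = lab["input_ids"]
            self.examples.append({k: torch.tensor(v) for k, v in enc.items()})

    def __len__(self):
        return len(self.examples)

    def __getitem__(self, idx):
        return self.examples[idx]

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="ft-assertions", num_train_epochs=1,
                           per_device_train_batch_size=2, logging_steps=1),
    train_dataset=AssertionDataset(PAIRS, tokenizer),
)
trainer.train()
```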

  • Research Article
  • 10.1186/s42400-024-00335-4
LLM4TDG: test-driven generation of large language models based on enhanced constraint reasoning
  • May 15, 2025
  • Cybersecurity
  • Jingqiang Liu + 5 more

With the evolution of modern software development paradigms, component reuse and low-code approaches have emerged as mainstream in software development. However, developers often lack an in-depth understanding of reused code. The inability of components to operate autonomously leads to insufficient testing of software functionalities and security, further exacerbating the contradiction between the increasing complexity of software architectures and the demand for accurate and efficient software automation testing. This, in turn, increases the frequency of software supply chain security incidents. This paper proposes a test-driven generation framework, LLM4TDG, based on large language models (LLMs). By formally defining the constraint dependency graph and converting it into context constraints, LLMs’ ability to understand natural language descriptions such as test requirements and documents is enhanced. Constraint reasoning and backtracking mechanisms are then used to automatically generate test drivers that satisfy the defined constraints. Using the EvalPlus dataset, we evaluate the comprehensive capabilities of LLM4TDG in test case generation using four general-domain LLMs and five code-generation-domain LLMs. The experimental results indicate that our approach significantly enhances LLMs’ ability to comprehend constraints in testing objectives, achieving a 47.62% increase in constraint understanding across 147 testing tasks. Employing LLM4TDG significantly improves the average pass@k metric of all LLMs by 10.41%. The pass@k metric for CodeQwen-chat improves by up to 18.66%, surpassing the state-of-the-art GPT-4 with a performance of 92.16% on HUMANEVAL and 87.14% on HUMANEVAL+, which enhances error correction and functional correctness in test-driven code generation. Meanwhile, our experiments were conducted on a dataset of Python third-party libraries containing malicious behavior in the context of security testing tasks, validating the effectiveness of our method in real-world applications and its generalization capabilities.
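The abstract reports its results in terms of pass@k. For readers unfamiliar with the metric, the snippet below implements the standard unbiased pass@k estimator from Chen et al. (2021); it is included only to make the metric concrete and is not code from the LLM4TDG paper.

```python
# Standard unbiased pass@k estimator (Chen et al., 2021): the probability that at
# least one of k sampled completions passes, given n samples of which c are correct.
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    if n - c < k:          # fewer than k failures: some correct sample is always drawn
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example: 20 samples per task, 5 of them correct, k = 1
print(round(pass_at_k(20, 5, 1), 4))  # 0.25
```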

  • Research Article
  • 10.1145/3718739
PATCH: Empowering Large Language Model with Programmer-Intent Guidance and Collaborative-Behavior Simulation for Automatic Bug Fixing
  • Feb 20, 2025
  • ACM Transactions on Software Engineering and Methodology
  • Yuwei Zhang + 7 more

Bug fixing holds significant importance in software development and maintenance. Recent research has made substantial strides in exploring the potential of large language models (LLMs) for automatically resolving software bugs. However, a noticeable gap in existing approaches lies in the oversight of collaborative facets intrinsic to bug resolution, treating the process as a single-stage endeavor. Moreover, most approaches solely take the buggy code snippet as input for LLMs during the patch generation stage. To mitigate the aforementioned limitations, we introduce a novel stage-wise framework named PATCH. Specifically, we first augment the buggy code snippet with corresponding dependence context and intent information to better guide LLMs in generating the correct candidate patches. Additionally, by taking inspiration from bug management practices, we decompose the bug-fixing task into four distinct stages: bug reporting, bug diagnosis, patch generation, and patch verification. These stages are performed interactively by LLMs, aiming to simulate the collaborative behavior of programmers during the resolution of software bugs. By harnessing these collective contributions, PATCH effectively enhances the bug-fixing capability of LLMs. We implement PATCH by employing the powerful dialogue-based LLM ChatGPT. Our evaluation on the widely used bug-fixing benchmark BFP demonstrates that PATCH has achieved better performance than state-of-the-art LLMs.
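To illustrate the stage-wise idea in the abstract above, here is a hedged sketch of chaining the four stages (bug reporting, diagnosis, patch generation, verification) as successive LLM prompts. The chat() stub stands in for a dialogue LLM such as ChatGPT; prompts, helper names, and control flow are illustrative, not the authors' implementation.

```python
# Hedged sketch of a stage-wise bug-fixing loop in the spirit of the four stages
# described above. chat() and run_tests() are stubs; everything here is
# illustrative rather than the authors' implementation.
def chat(history: list, prompt: str) -> str:
    """Stub: send prompt (with dialogue history) to an LLM and return its reply."""
    raise NotImplementedError

def run_tests(patched_code: str) -> bool:
    """Stub: apply the candidate patch and run the project's test suite."""
    raise NotImplementedError

def fix_bug(buggy_code: str, context: str, intent: str, max_rounds: int = 3):
    history = []
    # Stage 1: bug reporting
    report = chat(history, f"Write a bug report for this code.\nIntent: {intent}\n{buggy_code}")
    # Stage 2: bug diagnosis, guided by dependence context
    diagnosis = chat(history, f"Diagnose the root cause.\nReport:\n{report}\nContext:\n{context}")
    for _ in range(max_rounds):
        # Stage 3: patch generation
        patch = chat(history, f"Generate a fixed version of the function.\nDiagnosis:\n{diagnosis}")
        # Stage 4: patch verification
        if run_tests(patch):
            return patch
        diagnosis = chat(history, "The patch failed verification; refine the diagnosis.")
    return None
```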

  • Research Article
  • Cited by 13
  • 10.1109/tvcg.2024.3413195
Enhancing Data Literacy On-Demand: LLMs as Guides for Novices in Chart Interpretation.
  • Sep 1, 2025
  • IEEE transactions on visualization and computer graphics
  • Kiroong Choe + 6 more

With the growing complexity and volume of data, visualizations have become more intricate, often requiring advanced techniques to convey insights. These complex charts are prevalent in everyday life, and individuals who lack knowledge in data visualization may find them challenging to understand. This paper investigates using Large Language Models (LLMs) to help users with low data literacy understand complex visualizations. While previous studies focus on text interactions with users, we noticed that visual cues are also critical for interpreting charts. We introduce an LLM application that supports both text and visual interaction for guiding chart interpretation. Our study with 26 participants revealed that the in-situ support effectively assisted users in interpreting charts and enhanced learning by addressing specific chart-related questions and encouraging further exploration. Visual communication allowed participants to convey their interests straightforwardly, eliminating the need for textual descriptions. However, the LLM assistance led users to engage less with the system, resulting in fewer insights from the visualizations. This suggests that users, particularly those with lower data literacy and motivation, may have over-relied on the LLM agent. We discuss opportunities for deploying LLMs to enhance visualization literacy while emphasizing the need for a balanced approach.
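The system described above supports both text and visual interaction for chart guidance. As a minimal, hedged sketch of the underlying text-plus-image pattern (not the paper's application), the snippet below sends a chart screenshot and a question to a multimodal chat endpoint; the OpenAI client, model name, file name, and wording are all assumptions for illustration.

```python
# Minimal sketch (not the paper's system): asking a multimodal LLM a question about
# a chart screenshot. Assumes the OpenAI Python client, an OPENAI_API_KEY env var,
# and a local file "chart.png"; model name and prompt text are placeholders.
import base64
from openai import OpenAI

client = OpenAI()

with open("chart.png", "rb") as f:
    chart_b64 = base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder multimodal model
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Explain what the highlighted cluster in this chart means for a novice reader."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{chart_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```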

More from: European Conference on e-Learning
  • Research Article
  • 10.34190/ecel.24.1.4355
The Impact of an AI-Based Educational Platform on Student Teachers’ Self-Regulated Learning
  • Nov 18, 2025
  • European Conference on e-Learning
  • Rahma Al-Sabri + 1 more

  • Research Article
  • 10.34190/ecel.24.1.4318
Enhancing Higher Education Learning through Blackboard: Impact on Student Learning and Diversity
  • Oct 30, 2025
  • European Conference on e-Learning
  • Nathunathi Mvunge + 1 more

  • Research Article
  • 10.34190/ecel.24.1.4253
From the Implications of Open Education for Teachers to the Design of A Self-evaluation Tool for Open-Only Blended Instruction
  • Oct 17, 2025
  • European Conference on e-Learning
  • Lionel Alvarez (-Chevrier) + 4 more

  • Research Article
  • 10.34190/ecel.24.1.4256
Learning Trajectories in Hyper-Hybrid Spaces
  • Oct 17, 2025
  • European Conference on e-Learning
  • Susanne Dau + 1 more

  • Research Article
  • 10.34190/ecel.24.1.3812
Cross-disciplinary Educator Training approaches for Education for Sustainable Development in a Post-digital Perspective
  • Oct 17, 2025
  • European Conference on e-Learning
  • Maja Melballe Jensen + 2 more

  • Research Article
  • 10.34190/ecel.24.1.4160
Review of Influencer Impact on Youth: Media Literacy, Consumer Behavior, and Critical Thinking
  • Oct 17, 2025
  • European Conference on e-Learning
  • Petra Kočková + 1 more

  • Research Article
  • 10.34190/ecel.24.1.3986
Student Performance Prediction Using Virtual Learning Environment (VLE) Interactions
  • Oct 17, 2025
  • European Conference on e-Learning
  • Faathima Fayaza Meeraa Shahibo + 1 more

  • Research Article
  • 10.34190/ecel.24.1.3941
Co-Designing Gamified Learning for Soft Skills: A Participatory Future Workshop
  • Oct 17, 2025
  • European Conference on e-Learning
  • Naghmeh Aghaee + 2 more

  • Research Article
  • 10.34190/ecel.24.1.3925
Understanding and Supporting Student Problem Solving in Mathematics Exams with Artificial Intelligence
  • Oct 17, 2025
  • European Conference on e-Learning
  • Věra Ferdiánová + 1 more

  • Research Article
  • 10.34190/ecel.24.1.4285
Evaluating WEBPOSE, a Posture Feedback System for Oral Presentations
  • Oct 17, 2025
  • European Conference on e-Learning
  • Stefan Hummel + 5 more
