Perceptual differences between AI and human compositions: the impact of musical factors and cultural background
What Artificial Intelligence (AI) can and cannot do in music is among the questions that most interest both music researchers and AI experts. This study offers a significant analysis of the growing role of AI technologies in music composition and their impact on creative processes. It contributes to the literature by positioning AI as a complementary tool to the composer’s creativity and by deepening the understanding of cultural adaptation processes. The study aims to identify the perceptual differences between AI and composer compositions, examine the musical and cultural foundations of these differences, and uncover the factors that shape the listener’s experience. The research adopted a mixed-method design combining qualitative and quantitative methods. In the quantitative phase, a double-blind experimental design ensured that participants evaluated composer and AI works impartially; in the qualitative phase, participants’ opinions were gathered. The participants were 10 individuals aged 19 to 25 with diverse cultural and educational backgrounds: 6 had received formal music education, while 4 were casual listeners. The data collection instruments included a structured interview form and the Assessment Scale for Perceptual Factors in Musical Works. Each participant evaluated two AI and two composer works in 20-minute standardized listening sessions, all conducted using professional audio equipment. The analysis revealed that composer works scored significantly higher than AI works across all categories (p < .05), with notable differences particularly in emotional depth (X̄_composer = 4.6, X̄_AI = 3.1) and memorability (X̄_composer = 4.4, X̄_AI = 3.2).
The study concluded that composer works were more effective than AI compositions in terms of emotional depth, structural coherence, and cultural resonance. Additionally, cultural background and music education emerged as significant factors shaping perceptual differences. Future research should broaden the participant pool and incorporate neurocognitive data to facilitate a deeper understanding of perceptual mechanisms. Furthermore, the development of AI systems for use in music should include the integration of Transformer and RNN-based advanced learning models, the implementation of traditional music theory principles, the enhancement of emotional expressiveness, the improvement of cultural adaptation capacities, and the refinement of real-time interaction mechanisms.
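The kind of within-subjects comparison the abstract reports (composer vs. AI ratings from the same 10 listeners, significant at p < .05) can be illustrated with a paired t statistic. The ratings below are invented placeholders chosen only so their means match the reported emotional-depth means of 4.6 and 3.1; the study's raw data are not given in the abstract.

```python
import math
from statistics import mean, stdev

# Hypothetical per-participant ratings (1-5 scale), for illustration only.
composer = [4.5, 4.8, 4.2, 4.9, 4.6, 4.4, 4.7, 4.5, 4.8, 4.6]
ai       = [3.0, 3.4, 2.8, 3.3, 3.1, 3.0, 3.2, 2.9, 3.3, 3.0]

# Paired differences: each listener rated both kinds of work.
diffs = [c - a for c, a in zip(composer, ai)]
n = len(diffs)

# Paired t statistic with df = n - 1; compare against the two-tailed
# critical value t(9) = 2.262 for alpha = .05.
t = mean(diffs) / (stdev(diffs) / math.sqrt(n))
print(f"mean difference = {mean(diffs):.2f}, t({n - 1}) = {t:.2f}")
```

With 10 paired observations, any |t| above 2.262 corresponds to p < .05, the threshold the abstract reports.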
- Research Article
- 10.61838/medda.2.1.10
- Jan 1, 2025
- Management, Education and Development in Digital Age
Future schools, by utilizing the capabilities of artificial intelligence, will experience novel approaches in the teaching-learning process, student assessment, and educational management. The aim of this study was to design a model of the future school in which artificial intelligence plays a key role in enhancing learning processes and the management of primary schools. The research method was mixed; the statistical population in the design phase included faculty members from the fields of Computer Engineering - Artificial Intelligence, IT, Information Technology Management Engineering, and Educational Management at higher education institutions, as well as senior managers and experts from the Informatics Department of the Education System and specialists in the fields of "Future Schools" and "Artificial Intelligence in Education." In the validation phase, the population consisted of faculty members of Information Technology Management Engineering and Educational Management at higher education centers in Golestan Province, along with senior managers of the Education Department of this province. In the quantitative phase, the population included all formal and contractual teachers of girls’, boys’, and coeducational primary schools in Golestan Province during the 2023-2024 academic year, totaling 5,919 teachers. In the qualitative phase, 18 experts were selected using the snowball sampling method; in the validation phase, 20 experts were selected using purposive sampling; and in the quantitative phase, 361 teachers were selected using cluster random sampling and Cochran's formula. 
For data analysis, in the qualitative section, the grounded theory method was employed through open, axial, and selective coding using semi-structured interviews; in the validation phase, the Delphi method was used over three stages with an expert checklist tool and SPSS software; and in the quantitative phase, structural equation modeling was used with a 100-item questionnaire in Smart PLS software. According to the results of the qualitative and validation sections, the paradigmatic model included 12 main categories and 24 subcategories, detailed as follows: Causal conditions (cultural readiness for accepting artificial intelligence, artificial intelligence infrastructure, family-centered interaction with artificial intelligence, intra-institutional participation with artificial intelligence, active learning based on artificial intelligence, and effective learning based on artificial intelligence); Contextual conditions (enhancement of virtual networks with artificial intelligence, social interactions based on artificial intelligence, civic behavior regarding artificial intelligence, teaching-learning policies based on artificial intelligence, and cultural education focused on artificial intelligence); Intervening conditions (institutional conflicts regarding artificial intelligence, technical challenges concerning artificial intelligence, management and supervision in intelligentization, and resource dependency regarding artificial intelligence); Strategy (ease of access to artificial intelligence, assessment based on artificial intelligence, gradual implementation of artificial intelligence, creative learning through artificial intelligence, and content and process changes focused on artificial intelligence); and Outcomes (improvement of speed and quality of education through artificial intelligence, educational equity through artificial intelligence, academic motivation through the application of artificial intelligence, and enhancement of creativity and innovation through the application of artificial intelligence), along with 100 indicators. The results of the quantitative section showed that all dimensions and components of the research model were confirmed.
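The quantitative sample of 361 teachers drawn from a population of 5,919 is consistent with Cochran's formula at the conventional 95% confidence level and 5% margin of error (those parameter values are assumed here, as the abstract does not state them):

```python
import math

def cochran_sample_size(N, z=1.96, p=0.5, e=0.05):
    """Cochran's sample-size formula with finite-population correction.

    N: population size; z: z-score for the confidence level;
    p: estimated proportion (0.5 maximizes variance); e: margin of error.
    """
    n0 = (z ** 2) * p * (1 - p) / (e ** 2)   # infinite-population sample size (384.16)
    n = n0 / (1 + (n0 - 1) / N)              # correct for the finite population
    return math.ceil(n)

print(cochran_sample_size(5919))  # → 361
```

The corrected value of 360.8 rounds up to exactly the 361 teachers reported.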
- Supplementary Content
- 10.25904/1912/994
- Jun 28, 2018
- Griffith Research Online (Griffith University, Queensland, Australia)
Investigating First Year Undergraduate EAL Students' Academic Literacy Experiences.
- Research Article
- 10.1016/j.gie.2020.10.029
- Nov 2, 2020
- Gastrointestinal Endoscopy
Assessing perspectives on artificial intelligence applications to gastroenterology
- Research Article
- 10.1016/j.compbiomed.2025.110468
- Aug 1, 2025
- Computers in biology and medicine
Human-centered artificial intelligence (AI) plays a crucial role in medical research. This paper evaluates the impact of human expertise in AI systems, using dementia prediction as a case study. Specifically, plasma phospho-tau181 (ptau181) is employed as the ground truth for Alzheimer's disease (AD) to advance early detection and treatment strategies. In this empirical study, we investigated three distinct cases to explore AI's role in predicting ptau181 levels through finger-tapping analysis. Case 1 employed explicit features from finger movements combined with a Ridge regression model, emphasizing the interpretability enabled by human-engineered features. Case 2 introduced temporal dynamics using Long Short-Term Memory (LSTM) networks with displacement-vs-time data, highlighting how human experts can integrate temporal dependencies into AI models. Case 3 utilized a 3D Convolutional Neural Network (CNN) to autonomously extract temporal and spatial features from processed video data, showcasing AI's ability to learn and adapt with minimal human intervention. Evaluation demonstrated Case 3's superior performance, illustrating the trade-offs between model effectiveness and human involvement. Case 1 provided interpretability through explicit feature engineering, while Case 2 introduced complexity with temporal dependencies. This study underscores the crucial role of human-centered AI in medical research, particularly in predictive modeling for AD. By combining human expertise with advanced AI capabilities, we can unlock new avenues for early diagnosis and intervention, thereby advancing our understanding and treatment of complex AD conditions to improve patient outcomes.
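Case 1's interpretable setup (human-engineered tapping features fed to a Ridge regressor) can be sketched in closed form. The features, coefficients, and target below are synthetic placeholders for illustration, not the study's data or pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical engineered finger-tapping features:
# columns might represent tap rate, amplitude, and rhythm variability.
X = rng.normal(size=(40, 3))
true_w = np.array([0.8, -0.5, 0.3])              # synthetic ground-truth weights
y = X @ true_w + rng.normal(scale=0.1, size=40)  # synthetic "ptau181" target

lam = 1.0  # L2 penalty strength
# Ridge closed form: w = (X^T X + lam * I)^(-1) X^T y
w = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ y)
print("learned weights:", np.round(w, 2))
```

The recovered weights stay close to the generating ones, and because each weight maps to a named feature, the model remains inspectable, which is the interpretability trade-off the abstract highlights against the LSTM and 3D CNN cases.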
- Research Article
- 10.56536/jbahs.v5i1.111
- Feb 28, 2025
- Journal of Biological and Allied Health Sciences
Artificial Intelligence (AI) is revolutionizing the field of health sciences, reshaping how we teach, learn, and practice medicine. As AI technologies become increasingly integrated into healthcare systems, their impact on health sciences education cannot be overstated. From personalized learning experiences to advanced diagnostic training, AI is poised to enhance the quality and accessibility of education for future healthcare professionals. However, this transformation also raises critical questions about ethics, equity, and the future role of educators in an AI-driven world. The transformative role of Artificial Intelligence (AI) in health sciences education is increasingly recognized as a pivotal factor in shaping the future of medical training and practice. As AI technologies continue to evolve, their integration into educational curricula presents both opportunities and challenges that must be carefully navigated to enhance the learning experience for future healthcare professionals. One of the most significant contributions of AI to health sciences education is its ability to personalize learning. Traditional teaching methods often follow a one-size-fits-all approach, which can leave some students struggling to keep up while others are not sufficiently challenged. AI-powered platforms, such as adaptive learning systems, analyze individual student performance and tailor content to meet their unique needs. For example, tools like Osmosis and AMBOSS use AI to provide customized study plans, ensuring that students focus on areas where they need the most improvement (Topol, 2019). This personalized approach not only improves learning outcomes but also fosters a more inclusive educational environment. AI is also transforming clinical training by simulating real-world scenarios. Virtual patient simulations, powered by AI, allow students to practice diagnosing and treating conditions in a risk-free environment. 
These simulations can replicate rare or complex cases that students might not encounter during their clinical rotations. For instance, platforms like Touch Surgery and SimX use AI to create immersive surgical and emergency care simulations, providing students with hands-on experience before they enter the operating room (McGaghie et al., 2011). Such tools bridge the gap between theory and practice, preparing students for the complexities of modern healthcare. Moreover, AI is enhancing the role of educators by automating administrative tasks and providing data-driven insights into student performance. Grading, attendance tracking, and even curriculum design can be streamlined using AI, allowing educators to focus on mentoring and engaging with students. AI-driven analytics can also identify at-risk students early, enabling timely interventions to support their academic success (Wartman & Combs, 2018). By augmenting the capabilities of educators, AI empowers them to deliver more impactful and student-centered teaching. AI's potential to revolutionize health sciences education lies in its ability to personalize learning experiences and improve educational outcomes. For instance, AI-driven tools can facilitate realistic simulations and automated assessments, allowing students to engage in practical scenarios that mimic real-world clinical situations (Santos & Lopes, 2024). This capability not only enhances the learning process but also prepares students for the complexities of patient care in a technology-driven environment (Grunhut et al., 2022). Furthermore, the incorporation of AI into curricula can foster critical thinking and decision-making skills, essential for navigating the ethical dilemmas that arise in medical practice (Grunhut et al., 2022). Despite the promising applications of AI in education, the integration of these technologies into medical curricula has been slow. 
A scoping review highlighted that many medical schools have yet to adopt AI training, primarily due to a lack of systematic evidence supporting its implementation (Lee et al., 2021). Additionally, concerns regarding data protection and the ethical implications of AI use in healthcare education have been raised, indicating a need for comprehensive AI education that addresses these issues (Veras et al., 2023; Frehywot & Vovides, 2023). Students have expressed a desire for more robust training in AI, emphasizing the importance of understanding its role in healthcare delivery and decision-making processes (Ahmad et al., 2023; Derakhshanian et al., 2024). Moreover, the rapid advancement of AI technologies necessitates continuous curriculum updates to keep pace with emerging trends. As noted in recent literature, the integration of AI into biomedical science curricula should include subjects related to informatics, data sciences, and digital health (Sharma et al., 2024). This approach not only equips students with the necessary skills to utilize AI effectively but also prepares them for the evolving landscape of healthcare, where AI will play an integral role in diagnostics, treatment personalization, and patient management (Santos & Lopes, 2024; Secinaro et al., 2021). However, the implementation of AI in health sciences education is not without challenges. Ethical considerations surrounding AI's impact on healthcare equity and the potential for bias in AI algorithms must be addressed (Frehywot & Vovides, 2023; Han et al., 2019). Ensuring that AI technologies are used responsibly and equitably in education and practice is crucial to avoid exacerbating existing disparities in healthcare access and outcomes (Rigby, 2019). Furthermore, the lack of faculty expertise in AI poses a significant barrier to its integration into medical education, highlighting the need for targeted training and resources for educators (Derakhshanian et al., 2024). 
However, the integration of AI into health sciences education is not without challenges. Ethical concerns, such as data privacy and algorithmic bias, must be addressed to ensure that AI tools are used responsibly. Additionally, there is a risk of over-reliance on AI, potentially undermining the development of critical thinking and clinical judgment skills. Educators must strike a balance between leveraging AI’s capabilities and preserving the human elements of teaching and learning. Equity is another pressing issue. While AI has the potential to democratize education, access to these technologies remains uneven. Institutions in low-resource settings may struggle to adopt AI-driven tools, exacerbating existing disparities in global health education. Policymakers and educators must work together to ensure that the benefits of AI are accessible to all, regardless of geographic or socioeconomic barriers. In conclusion, AI is a powerful tool that holds immense promise for transforming health sciences education. By personalizing learning, enhancing clinical training, and supporting educators, AI can help prepare the next generation of healthcare professionals to meet the demands of an increasingly complex healthcare landscape. However, its integration must be guided by ethical principles and a commitment to equity. Successful integration into educational curricula also requires a concerted effort to address ethical concerns, update training programs, and equip both students and faculty with the necessary knowledge and skills. As the healthcare landscape continues to evolve, embracing AI in education will be essential for fostering a new generation of healthcare providers who are adept at leveraging technology to improve patient care. As we embrace this technological revolution, we must remember that AI is not a replacement for human expertise but a complement to it. 
The future of health sciences education lies in the synergy between human ingenuity and artificial intelligence.
- Conference Article
- 10.54941/ahfe1004185
- Jan 1, 2023
- AHFE international
The integration of Artificial Intelligence (AI) techniques into various domains has revolutionized numerous industries, and Supply Chain Management (SCM) is no exception. This paper addresses the challenges encountered in SCM and the development of AI solutions within this context. Specifically, we focus on the application of AI in optimizing supply chain planning tasks. This includes forecasting demand, availability and feasibility checks for customer orders, supply chain network design, and information flow inside the supply chain planning processes. However, the successful implementation of AI in SCM requires a deep understanding of both the domain-specific challenges and the capabilities and limitations of AI technologies. Thus, this paper proposes an overarching approach that facilitates collaboration between domain experts in SCM and AI experts, enabling them to jointly develop effective solutions. The paper begins by outlining the key challenges faced by SCM professionals, including demand volatility, complexities in inventory management, and dynamic market conditions. Subsequently, it delves into the challenges associated with developing AI solutions for SCM, including data quality, interpretability, and model transparency. To address these challenges, the proposed approach promotes close collaboration and knowledge exchange between SCM and AI experts. By leveraging the domain knowledge and experience of SCM experts, AI experts can better understand the specific issues of SCM processes and tailor AI techniques to suit specific needs. In turn, SCM experts can gain insights into the capabilities and limitations of AI, allowing them to make informed decisions regarding the adoption and integration of AI in their supply chain planning operations. Furthermore, the paper discusses the importance of establishing a multidisciplinary team comprising experts from the fields of SCM, AI, and IT. 
This team-based approach fosters a holistic understanding of SCM challenges and ensures the development of AI solutions that align with business goals and practical constraints. In conclusion, this paper highlights the challenges in combining SCM and AI and proposes a collaborative approach to address these challenges effectively. By leveraging the expertise of both domain and AI experts, organizations can develop tailored AI solutions that enhance supply chain planning, improve decision-making processes, and drive competitive advantage. The proposed approach contributes to the successful integration of AI in SCM, ultimately leading to more efficient and resilient supply chains in the era of artificial intelligence.
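As one concrete instance of the demand-forecasting task mentioned above, simple exponential smoothing is a common baseline; the demand series and smoothing factor here are illustrative assumptions, not taken from the paper:

```python
def exponential_smoothing(demand, alpha=0.3):
    """One-step-ahead forecasts via simple exponential smoothing.

    Recurrence: forecast[t+1] = alpha * demand[t] + (1 - alpha) * forecast[t]
    """
    forecast = demand[0]          # initialize with the first observation
    forecasts = []
    for d in demand:
        forecasts.append(forecast)
        forecast = alpha * d + (1 - alpha) * forecast
    # Return in-sample forecasts and the next-period forecast.
    return forecasts, forecast

history = [120, 135, 128, 150, 142, 160]  # hypothetical weekly order volumes
_, next_week = exponential_smoothing(history)
print(f"next-period demand forecast: {next_week:.1f}")
```

A domain expert would choose `alpha` (how strongly recent demand dominates) from knowledge of demand volatility, which is exactly the kind of SCM-AI knowledge exchange the paper advocates.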
- Research Article
- 10.47577/technium.v30i.13023
- Jun 27, 2025
- Technium: Romanian Journal of Applied Sciences and Technology
Aim: The study aims to investigate the role of AI in creative processes. It examines whether AI-generated ideas outperform human-generated ideas in terms of originality, quality, and preference. The study's goal is to shed light on AI's ability to function as a creative partner that enhances human creativity. It focuses on evaluating and comparing AI-generated ideas against human-generated ideas to determine whether AI produces better ideas. Methodology: A mixed-method approach was followed, using both qualitative and quantitative methods, i.e., an experiment and an online survey. The experiment comprised two groups of students: AI users and non-AI users. Both groups were asked to generate ideas in response to a single prompt. These ideas, shuffled and combined into a single list, were then presented to a third, neutral group that did not know which ideas were AI-generated. This group was asked to vote for the top 5 ideas. An additional quantitative survey was also conducted to further examine AI usage and over-reliance concerns. Findings: In the experiment, AI-generated ideas received 52.9% of the votes (98), while human-generated ideas received 47.0% (87). The idea that received the most votes was generated by AI. The top 5 most-voted concepts included an equal number of AI- and non-AI-generated results, demonstrating that AI can develop creative and original ideas preferred by humans. The 5 least-voted ideas, by contrast, comprised 3 generated by humans and only 2 generated by AI. Additionally, the survey results indicated that approximately 62% of respondents were concerned about generative AI posing long-term risks to human creativity. Novelty & Implications: This study investigates the role of artificial intelligence (AI) in ideation and creative processes, as well as its collaborative interaction with humans. 
It examines whether AI generates better ideas than humans and compares the two. Unlike earlier research, which has focused on AI's applications in academics, businesses, and other sectors, this study focuses on AI's function in creative ideation. The results indicate AI's positive impact, with AI-generated concepts receiving more votes overall. However, respondents also expressed concern about the long-term effects of AI overuse on human creativity. The study suggests a balanced collaboration between AI and human minds to maximise creative potential.
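The reported vote shares follow directly from the raw counts of 98 AI votes and 87 human votes (185 total); a quick recomputation:

```python
ai_votes, human_votes = 98, 87
total = ai_votes + human_votes          # 185 votes cast in total

ai_share = 100 * ai_votes / total       # ≈ 52.97%
human_share = 100 * human_votes / total # ≈ 47.03%
print(f"AI: {ai_share:.2f}%  human: {human_share:.2f}%  (n = {total})")
```

Note the two shares must sum to 100%, which is a useful sanity check when quoting truncated one-decimal figures.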
- Book Chapter
- 10.4018/979-8-3693-9015-3.ch016
- Feb 21, 2025
This chapter delves into the perceptions of artificial intelligence (AI) experts on the societal implications, governance, and ethical responsibilities associated with AI. Drawing on qualitative research, including interviews with six AI experts, the study investigates three key questions: experts' perceptions of AI's societal impacts, their visions for an AI-governed society, and their views on their responsibilities in addressing AI's consequences. The findings suggest that AI experts emphasize the importance of balancing innovation with education and regulation to ensure AI's responsible development and application, while leaving open important questions for further analysis, such as experts' role in the political governance of AI and the value of broad public participation in the process of AI creation.
- Research Article
- 10.34172/doh.2025.17
- Sep 14, 2025
- Depiction of Health
Today, artificial intelligence (AI)-based research assistants are used in various stages of qualitative studies, including methodology, data collection, group interviews, writing, editing, and qualitative data analysis (1). However, it seems that chatbots can also be used as a data source in human-computer interaction (2). One of the key elements in qualitative research is reaching theoretical saturation, meaning that data collection reaches a stage where no new data is generated and the researcher considers continuing the interviews unnecessary (3). Perhaps at this stage, conversations with chatbots can be used as a complementary or even alternative data source in qualitative study interviews. Naturally, all considerations related to inclusion and exclusion criteria, such as the interviewee's previous experiences and cultural background, which are very important in an interview, must still be observed. Perhaps AI can access diverse data from a wide range of sources to produce conceptually rich and relevant data and provide new perspectives. Research integrity must still be respected, though not necessarily in the same way as in human studies. For example, with a chatbot we cannot define and verify specific inclusion criteria such as a person's real work experience in an organization, the years a patient has lived with a disease in real conditions, or the cultural and ideological background of the participant. Therefore, interviewing chatbots does not yield theoretical saturation and may produce incomplete and artificial results. Thus, in addition to transparency of research and data collection, it is also necessary to define a framework for the ethical and correct use of chatbots in place of humans in interviews. This editorial highlights a new perspective on the use of AI-based chatbots in qualitative research, where the chatbot serves as a data source rather than as an analyst, methodologist, or assistant writer. 
Although AI provides opportunities for qualitative research, it also faces challenges that reviewers and authors should be aware of until the necessary technology is developed. Some of the opportunities and challenges of using AI chatbots in qualitative research include the following. The use of AI and recommender systems in qualitative interviews helps reduce the cost and time of research and creates a sense of security and greater comfort for the interviewee, allowing them to express information without worry or bias (4), which adds depth to the data. AI chatbots trained for specific purposes can also be used to reach people who are geographically remote or specific groups that are not easily accessible. However, one must also recognize the challenges ahead and address them with appropriate policies. Chief among these is the depth of human feelings and emotions as they arise in specific situations, which has not yet been captured by machines. Informed consent, information security, and privacy are likewise serious ethical challenges when chatbots stand in for humans (5). Ultimately, chatbots may be subject to a variety of errors arising not from human error but from the data available to the AI, from language limitations when translating data into the researcher's language, and, in countries such as Iran, from restrictions on accessing and using IP addresses from other countries. These technological challenges are unavoidable. Therefore, journal editors and authors should be cautious when using chatbots for various purposes, including as a substitute for or complement to interviews and as a source of data collection in qualitative studies.
- Research Article
- 10.5840/pcw2024301/210
- Jan 1, 2024
- Philosophy in the Contemporary World
Since the release of ChatGPT, philosophers have been increasingly interested in the future of artificial intelligence (AI). Chief among their concerns is whether AI can provide knowledge-generating testimony. Despite these worries, AIs are being thrust into the expert role in various domains like sales, customer service, and even healthcare. Reflecting on these developments, this article advances the idea of what I am calling expert deserts. Similar to a food desert, an expert desert refers to an epistemic environment in which diverse and high-quality expertise is largely inaccessible. As AI continues to occupy more expert roles, expert deserts are apt to become a more prevalent feature of our epistemic environments. As philosophers in the contemporary world, it is our responsibility to remain vigilant of AI’s encroachment on expertise so that we can identify – and hopefully rectify – the ways in which AI has worsened our epistemic positions.
- Research Article
- 10.1016/j.apenergy.2023.120988
- Mar 28, 2023
- Applied Energy
Positive climate effects when AR customer support simultaneously trains AI experts for the smart industries of the future
- Research Article
- 10.1089/bio.2023.29121.editorial
- Apr 1, 2023
- Biopreservation and Biobanking
Readiness for Artificial Intelligence in Biobanking
- Research Article
- 10.3897/biss.8.138147
- Sep 30, 2024
- Biodiversity Information Science and Standards
The United Kingdom's Natural History Museum (NHM) AI Lab Programme represents a pioneering initiative aimed at harnessing the power of artificial intelligence (AI) to bridge the gap between the museum's extensive collection and cutting-edge AI technologies. Despite its immense potential, the application of AI in museum research remains nascent (e.g., He et al. 2024), with some individual research groups pursuing independent projects without cohesive collaboration with AI experts experienced in similar endeavours. Moreover, differing standards in utilising AI among researchers add complexity to the field. The NHM AI Lab Programme addresses these challenges by co-creating AI pilot projects that bring together the NHM's collection, academic researchers, and AI experts. The NHM AI Lab Programme serves as a nexus for interdisciplinary collaboration, offering expertise in AI, machine learning, data science, and software engineering to support NHM researchers. Through one-to-one consultations and collaborative research projects, the NHM AI Lab Programme facilitates the integration of innovative AI-driven technologies into streamlining digitisation workflows and enhancing Earth and Life Science research at the NHM. In less than a year since its inception, our Programme has achieved several milestones, hosting around 20 diverse projects. These include research projects such as the application of AI for the automatic detection and identification of nannofossils in chalk, the classification of ancient shark and dinosaur teeth, the prediction of mammal disease outbreaks, and the extraction of data from historical bird egg records. Additional projects focus on the automation of mineral analysis and the detection of secondary impact craters on planetary surfaces using AI. Some led to journal publications (e.g., He et al. 2024), while others streamlined NHM researchers' workflows, enhancing their processes of research and digitisation. 
Moreover, several initiatives have paved the way for new funding streams and collaborative ventures, as well as promising commercial prospects. Certain projects have pioneered the creation or transformation of datasets to meet AI-ready standards, such as data quality, consistency, accessibility, usability, and data governance protocols, helping to embed AI practices into NHM research. This AI Lab Programme can act as a model for other institutions addressing a similar challenge of bridging the gap between AI and their research and collections. This presentation provides insights into the establishment and operation of the NHM AI Lab Programme, shares experiences, highlights successful collaborations, discusses challenges encountered, and outlines future directions.
- Research Article
- 10.48175/ijarsct-28020
- Jun 14, 2025
- International Journal of Advanced Research in Science, Communication and Technology
The rapid advancements in artificial intelligence (AI) have impacted various industries, including human resources (HR). This thesis aims to explore the role of AI in HR and its potential implications on organizations and employees. A comprehensive literature review was conducted to identify the various applications of AI in HR, such as recruitment, employee engagement, performance management, and training and development. The study also analyzed the potential benefits and risks associated with the integration of AI in HR, including issues related to bias, privacy, and job displacement. The findings of this study suggest that AI can enhance HR practices by improving efficiency, accuracy, and objectivity. However, the risks associated with AI adoption must be carefully considered and managed to ensure ethical and responsible use. This study provides insights into the current state of AI in HR and its future potential, offering recommendations for organizations and policymakers to maximize the benefits and minimize the risks of AI integration in the HR function. The use of artificial intelligence (AI) in human resources (HR) has become increasingly popular in recent years. AI has the potential to transform HR practices by enabling organizations to automate routine tasks, make more data-driven decisions, and improve the employee experience. However, the use of AI in HR also raises important ethical and legal considerations, such as algorithmic bias and data privacy. This thesis aims to explore the role of AI in HR and its impact on various HR functions, including recruitment and selection, employee engagement, performance management, and training and development. The study also examines the potential risks and challenges of using AI in HR and identifies strategies to mitigate these risks. The research methodology employed in this study is a mixed-methods approach, combining both qualitative and quantitative research methods. 
The qualitative component involves a literature review and case studies of organizations that have implemented AI in HR. The quantitative component involves a survey of HR professionals to understand their perceptions of AI in HR and their readiness to adopt AI in their organizations. The findings of this study reveal that AI has significant potential to improve HR practices, particularly in recruitment and selection, where it can reduce bias and improve the accuracy and efficiency of the hiring process. AI can also improve employee engagement by providing personalized experiences and feedback, and enhance performance management by enabling real-time monitoring and feedback. In training and development, AI can provide personalized learning experiences that meet the unique needs and preferences of individual employees. However, the study also reveals that the use of AI in HR raises important ethical and legal considerations that must be addressed. Algorithmic bias, data privacy, and the potential for job displacement are some of the key risks and challenges associated with the use of AI in HR. To mitigate these risks, organizations must adopt a proactive approach that involves regular monitoring and evaluation of AI systems, transparency in decision-making processes, and ongoing training and development for HR professionals. The study also identifies several critical success factors for the successful implementation of AI in HR, including strong leadership support, a clear understanding of business objectives, collaboration between HR and IT professionals, and a focus on employee engagement and well-being. Overall, this thesis contributes to the growing body of knowledge on the role of AI in HR and its implications for organizations and HR professionals. 
By identifying the potential benefits, risks, and challenges of using AI in HR, and by providing strategies to mitigate these risks, this study aims to inform organizational decision-making and help HR professionals prepare for the future of work.
- Research Article
- 10.1186/s40359-025-03836-0
- Dec 18, 2025
- BMC psychology
The rapid advancements in artificial intelligence (AI) technologies are fundamentally transforming mathematics teaching processes and offering new pedagogical opportunities within instructional environments. However, the effective use of these technologies is closely related to mathematics teachers' levels of knowledge, awareness, attitudes, and skills regarding AI. The purpose of this study is to examine the relationship between mathematics teachers' AI literacy and AI anxiety, to conduct an in-depth analysis of their perceptions regarding the integration of AI into mathematics education, and to evaluate the effects of variables such as watching AI-related films, technology use, and age on this process. This study employed a mixed-methods design. In the quantitative phase, a predictive correlational model was employed, while in the qualitative phase, a case study approach was utilized. Data were collected from 251 mathematics teachers working in various regions of Türkiye. The quantitative data were analyzed using a range of statistical analysis techniques, whereas the qualitative data were evaluated through content analysis. The findings indicate that mathematics teachers' levels of AI literacy are above average, whereas their levels of AI anxiety are below average. A significant and negative relationship was found between AI literacy and AI anxiety. Furthermore, the level of technology use in mathematics instruction was identified as the strongest predictor of both AI literacy and AI anxiety. The results also revealed that mathematics teachers' most prominent anxiety is that the excessive use of AI tools may weaken students' independent thinking and problem-solving skills. In addition, anxiety regarding the potential weakening of the teaching role and the possibility that AI could replace teachers were also noteworthy. 
Professional development programs should encompass not only the fundamental technological features of AI but also its pedagogical contributions to mathematics instruction. Mathematics teachers should be provided with opportunities to observe how AI supports key instructional processes such as differentiated instruction, formative assessment, and conceptual visualization. Furthermore, training modules should aim to develop teachers' abilities to critically evaluate AI-generated mathematical content in terms of accuracy and pedagogical appropriateness. Through such targeted training, teachers can enhance their AI literacy and create safe and pedagogically meaningful digital learning environments for their students.