THE ETHICS OF BIOMEDICAL RESEARCH IN CONDITIONS OF DEVELOPMENT OF AI TECHNOLOGIES

Abstract

Nowadays, AI technologies are an integral part of modern science. The broad implementation of AI technologies into research activities has become a challenge to the ethical foundations of science as a social institution: it is capable not only of changing established characteristics of research processes, but also of creating risks of widespread academic fraud. AI tools have a number of advantages that drive the growing scale and expanding areas of their application in obtaining and presenting scientific results. The article outlines the directions of AI application in research activities and surveys the various positions regarding the capabilities of AI technologies at different stages of scientific inquiry. The key ethical risks and areas of ethical reflection on AI tools of scientific research in the field of biomedicine are highlighted. The formats for posing and solving problems of the ethics of science on the external and internal contours, including the specifics of biomedical research involving humans, are analyzed. The tasks of reflecting on the transformation of research under the influence of AI and of developing guidelines for the proper application of AI technologies are substantiated, as are the tasks of maintaining the quality of scientific research and compliance with the established ethical standards of science.

Similar Papers
  • Research Article
  • 10.32983/2222-4459-2020-9-298-304
An Adaptive Model of E-Commerce Development (Адаптивна модель розвитку електронної комерції)
  • Jan 1, 2020
  • Business Inform
  • N A Kaluhina

The article is aimed at building a model of e-commerce development adapted to the realities of our time and to the peculiarities of the country's infocommunications development. The current status of infocommunications development is analyzed together with the related activities of enterprises that use digital platforms to conduct business. It is determined that the development of e-commerce is slowed by inequality of access to modern information and communication technologies and by the low activity of certain economic actors in trading via the Internet. An adaptive model of e-commerce development is proposed, based on consideration of the conditions of the external contour of enterprise development along with the internal contour, in terms of the existing resource base and potential development opportunities. The model requires an information base on e-commerce development processes and contains methodical principles for analyzing the internal and external contours of e-commerce development, as well as methods for forecasting changes in the external contour and determining internal potential. A necessary element of the model is the determination of criteria for the possibility of e-commerce development under existing conditions, combined with the formation of a list of invariant models for different platforms that would be most acceptable under the conditions of the external contour. The need to take existing cyber risks into account and to apply measures to minimize them in the process of e-commerce development is substantiated. It is shown that the development of e-commerce produces effects at the micro-, meso-, and macro-levels, which in the end can lead to the emergence of a synergistic effect.
The feedback function contained in the model makes it possible to link the stage of determining results to the formation of invariant development models, which should be activated when the selected e-commerce development model fails to produce the desired results. Further research will be directed at substantiating the criteria that determine the possibility of e-commerce development under the conditions of both the internal and external contours.

  • Conference Article
  • 10.21125/edulearn.2019.0082
RECRUITMENT POLICIES IN SPANISH UNIVERSITIES, A CASE STUDY: TEACHING AND RESEARCH QUALITY
  • Jul 1, 2019
  • Bartolomé Pascual-Fuster

Spanish public universities are well known for their recruitment practices, primarily based on endogamy. Usually, individuals develop their academic careers in the university where they obtained their doctoral degree. However, the media and society seem to show a consensus against this recruitment policy, as can be seen in articles published in some of the most relevant Spanish newspapers (El Pais 12/9/2016 “La evolucion de la endogamia…”; El Mundo 6/3/2017 “La comunidad de Madrid…. ley que acabe con…fichen a sus propios alumnos”). Universities usually consider research and teaching activities as their main tasks. However, those activities could be complementary or substitutive. Research activities allow faculty to reach the frontier of knowledge, and therefore to know what is most relevant to teach students. However, research is time-consuming, and teachers focused on research might spend less time and effort on teaching than those focused on teaching. Spanish universities that avoid recruiting their own doctors usually hire new faculty from high-quality doctorate programs, and these candidates focus their effort on research activities. Therefore, if research and teaching are substitutive, this recruitment policy could deteriorate teaching quality at these universities. However, previous articles analyzing research and teaching quality found mixed results. The object of this article is to study empirically whether research quality increases and teaching quality deteriorates when universities hire faculty from high-quality doctorate programs who focus on research activities. We analyze the Department of Business Economics of Universitat de les Illes Balears (UIB) from 2009 to 2017. This department changed its recruitment policy more than ten years ago, avoiding the recruitment of its own doctorate students. Open positions are posted in the Spanish job market of Doctors in Economics and Business.
In this market, the main institutions are business schools and a few public universities, such as Universidad Carlos III de Madrid. Using several control variables, such as age and specialization area, we analyze whether there are clear differences in teaching-quality and research-quality indicators depending on whether faculty members obtained their doctoral degree from UIB. We find no statistically significant differences in teaching quality, and worse research-quality indicators for faculty members with UIB doctoral degrees. Our research contributes to the literature on the relationship between research and teaching quality, providing further evidence on the complementarity of the two activities, even when teaching quality is measured only with student evaluations (probably biased by student marks). A relevant difference with respect to previous articles is that our analysis works through the recruitment process. When we analyze the direct relationship between research and teaching quality, our results provide some weak evidence of a negative relationship. Furthermore, our contribution is especially relevant to the public debate on recruitment policies in Spanish universities, providing evidence that forbidding the recruitment of a university's own doctoral students in order to hire research-focused faculty does not necessarily deteriorate teaching quality and may indeed increase research quality.

  • Research Article
  • Cited by 5
  • 10.4028/www.scientific.net/amr.472-475.2274
Internal Contour Extraction Algorithm Based on Quadratic B-spline for Images of Hot Long Shaft Forgings
  • Feb 1, 2012
  • Advanced Materials Research
  • Zhe Lin Li + 3 more

When hot long shaft forgings are measured by the CCD measurement method, the internal and external contours must be extracted from the image of the forging. In view of the blurry internal edges in the image of the hot forging, a method based on quadratic B-spline curves is employed to extract feature points. To remove pseudo-features, a method based on maximum correlation is presented. In accordance with the continuity of the internal contours, a quadratic B-spline curve is used to fit them. Experiments show that this algorithm can effectively extract accurate internal contours from images of hot squaring and chamfering forgings. The extracted contours can provide basic data for subsequent 3D reconstruction and geometric measurements.

  • Research Article
  • Cited by 1
  • 10.36713/epra20968
A REVIEW OF AI-POWERED CREATIVITY: THE INTERSECTION OF AI AND THE ARTS
  • Apr 13, 2025
  • International Journal of Global Economic Light
  • Dinesh Deckker + 1 more

The development of AI has become a disruptive and transformative force that alters all forms of creative expression, including visual art, music, literature, and performance. Artistic content generation through AI technology expands the boundaries of authorship by enabling machines to produce art in various ways, enhance existing works, and execute simulations. This paper presents a comprehensive assessment of AI implementation across various art fields before examining how AI affects human creative capabilities and identifying the ethical and conceptual issues that arise from machine-generated artworks. The paper synthesises contemporary research from 2019 to 2025 on AI technology in creative applications. The creative field is currently experiencing major changes as machine-produced artwork becomes more sophisticated thanks to AI tools including DALL-E, Stable Diffusion, GPT-4, and AI-generated content. AI technology opens new artistic territory but raises debates on copyright rules and authorship, affects employment conditions, and disrupts human-machine professional interactions. The investigation highlights both the benefits and the challenges of applying artificial intelligence to creativity. With AI tools, humans can enhance their artistic quality by using AI as a collaborative tool and venture into new artistic domains that surpass human capabilities. Regulatory action must be taken urgently to address significant ethical concerns, including copyright conflicts, data bias, and the erosion of artistic authenticity. The review emphasizes the importance of integrating knowledge from artistic disciplines with both legal frameworks and ethical AI regulations to strike a balance between technological progress and artistic quality.
Keywords: Artificial Intelligence, Creative Technologies, Digital Art, Human-AI Collaboration, Generative Art, Computational Creativity

  • Research Article
  • Cited by 3
  • 10.52214/vib.v7i.8403
Legal Governance of Brain Data Derived from Artificial Intelligence
  • Jun 2, 2021
  • Voices in Bioethics
  • Mahika Ahluwalia

 Introduction
 With the rapid advancements in neurotechnological machinery and improved analytical insights from machine learning in neuroscience, the availability of big brain data has increased tremendously. Neurological health research is done using digitized brain data.[1] There must be adequate data governance to secure the privacy of subjects participating in brain research and treatments. If not properly regulated, the research methods could lead to significant breaches of the subject’s autonomy and privacy. This paper will address the necessity for neuroprotection laws, which effectively govern the use of big brain data to ensure respect for patient privacy and autonomy.
 Background
Artificial intelligence and machine learning can be integrated with neuroscience big brain data to drive research studies. This integrative technology allows patterns of electrical activity in neurons to be studied in detail.[2] Specifically, it uses a robotic system that can reason, plan, and exhibit biologically intelligent behavior. Machine learning is a method of computer programming in which the code adapts its behavior based on big brain data.[3] Big brain data is the collection of large amounts of information for the purpose of deciphering patterns through computer analysis using machine learning.[4] The information these technologies provide is extensive enough to allow a researcher to read a patient's mind. AI and machine learning technologies work by finding the underlying structure of brain data, which is then described by patterns known as latent factors, eventually resulting in an understanding of the brain's temporal dynamics.[5]
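The latent-factor idea described above can be illustrated with a toy simulation (an illustrative sketch only, not this paper's data or any specific neurotechnology pipeline): a few hidden temporal factors drive many recorded channels, and a dimensionality-reduction step such as PCA shows how concentrated the underlying structure is.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical recording: 1000 time steps x 50 channels driven by 3 latent factors.
t = np.linspace(0.0, 10.0, 1000)
latents = np.column_stack([np.sin(2 * np.pi * f * t) for f in (0.5, 1.3, 2.1)])
mixing = rng.normal(size=(3, 50))           # how each factor projects onto channels
data = latents @ mixing + 0.1 * rng.normal(size=(1000, 50))  # add sensor noise

# PCA via SVD: the leading components approximate the latent factors.
centered = data - data.mean(axis=0)
_, s, _ = np.linalg.svd(centered, full_matrices=False)
explained = (s ** 2) / (s ** 2).sum()       # variance share per component
print(explained[:4])                        # nearly all variance sits in the first 3
```

Real pipelines fit richer latent-variable models to neural recordings, but the ethical point stands even for this sketch: such low-dimensional summaries compress a person's brain activity into a compact, potentially identifying signature.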
 Through these technologies, researchers are able to decipher how the human brain computes its performances and thoughts. However, due to the extensive and complex nature of the data processed through AI and machine learning, researchers may gain access to personal information a patient may not wish to reveal. From a bioethical lens, tensions arise in the realm of patient autonomy. Patients are not able to control the transmission of data from their brains that is analyzed by researchers. Governing brain data through laws may enhance the extent of patient privacy in the case where brain data is being used through AI technologies.[6] A responsible approach to governing brain data would require a sophisticated legal structure.
 Analysis
 Impact on Patient Autonomy and Privacy 
In research pertaining to big brain data, consent forms do not fully cover the vast amounts of information that are collected. According to research, personal data has become the most sought-after commodity for providing content to corporations and the web-based service industry. Unfortunately, data leaks that release private information occur frequently.[7] The storage of an individual's data on internet-accessible technologies during research studies makes it vulnerable to leaks, jeopardizing the individual's privacy. These data leaks may allow the patient to be identified easily, as the information provided by AI technologies is personalized and may be decoded through brain-fingerprinting methods.[8]
 There has been an extensive growth in the development and use of AI. It is efficient in providing information to radiologists who diagnose various diseases including brain cancer and psychiatric disease, and AI assists in the delivery of telemedicine.[9] However, the ethical pitfall of reduced patient autonomy must be addressed by analyzing current AI technologies and creating more options for patient preference in how the data may be used. For instance, facial recognition technology[10] commonly used in health care produces more information than listed in common consent forms, threatening to undermine informed consent. Facial recognition software collects extensive data and may disclose more information than a person would prefer to provide despite being a useful tool for diagnosing medical and genetic conditions.[11] In addition, people may not be aware that their images are being used to generate more clinical data for other purposes. It is difficult to guarantee the data is anonymized. Consent requirements must include informing people about the complexity of the potential uses of the data; software developers should maximize patient privacy.[12] Furthermore, there is a “human element” in the use of AI technologies as medical providers control the use and the extent to which data is captured or accessed through the AI technologies.[13] People must understand the scope of the technology and have clear communication with the physician or health care provider about how the medical information will be used. 
 Existing Laws for Brain Data Governance 
 A strict system of defined legal responsibilities of medical providers will ensure a higher degree of patient privacy and autonomy when AI technologies and data from machine learning are used. Governing specific algorithmic data is crucial in safeguarding a patient’s privacy and developing a gold standard treatment protocol following the procurement of the information.[14] Certain AI technologies provide more data than others, and legal boundaries should be established to ensure strong performance, quality control, and scope for patient privacy and autonomy. For instance, currently AI technologies are being used in the realm of intensive neurological care. However, there is a significant level of patient uncertainty about how much control patients have over the data’s uses.[15] Calibrated legal and ethical standards will allow important brain data to be securely governed and monitored.
Once brain signals are recorded and processed from one individual, the data may be merged with other data in Brain-Computer Interface (BCI) technology.[16] To ensure a right and ability to retrieve personal data or withdraw it from the collection, specific regulations for varying types of data are needed.[17] The importance of consent and patient privacy must be upheld by giving patients a transparent view of how brain data is governed.[18] The legal system must address discriminatory issues and risks to patients whose data is used in studies. Laws like the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) can serve as effective models for protecting aggregated data. These laws govern consumer information and ensure compliance when personal data is collected.[19] California voters recently approved expansion of the CCPA to health data. The Washington Privacy Act, which would have provided rights to access, change, and withdraw personal data, failed to pass. Other states should improve privacy as well,[20] although a federal bill would be preferable. Scientists at the Heidelberg Academy of Sciences argue for data security to be governed in a manner that balances patient privacy and autonomy with the commercial interests of researchers.[21] The balance could be achieved through privacy protections like those in the Washington Privacy Act.
Although the Health Insurance Portability and Accountability Act (HIPAA) provides an overall framework for deterring dangers to patient protection and privacy, more thorough laws are warranted to address the pervasive data transfer and analysis that technology has brought to the health care industry.[22] Breaches of patient privacy under current HIPAA regulations include releasing patient information to a reporter without consent and sending HIV data to a patient's employer without consent.[23] HIPAA does not cover information shared with outside contractors that have no agreement with technology companies to keep patient data confidential. HIPAA regulations also do not always address blatant breaches of patient data confidentiality.[24] Patients must be provided with methods to monitor the data being analyzed so they can see the extent of the private information generated via AI technologies. In health research, the medical purposes of better diagnosis, earlier detection of diseases, or prevention are ethical justifications for the use of the data if it was collected with permission, the person understood and approved the uses of the data, and the data was deidentified.
A standard governance framework is required to provide the fairest system of care to patients who allow their brain data to be examined. Informed consent in the neuroscience field could reaffirm the privacy and autonomy of patients by ensuring that they understand the type of information collected. Laws could also protect data after a patient's death. Malpractice claims in the scope of brain data could give people a cause of action critical to safeguarding patients' rights. Data-breach lawsuits will become common but generally do not cover deidentified data that becomes part of big data collection. A more synchronized approach to the collection and consent process will encourage an understanding of how big data is used to diagnose and treat patients. Some altruistic people may even be more likely to consent if they know the large-scale data collection is helpful for treating and diagnosing people. Others should have the ability to opt out of sharing neurological data, especially when there is no certainty surrounding deidentification.[25]
 Conclusion
 Artificial intelligence and machine learning technologies have the potential to aid in the diagnosis and treatment of people globally by extracting and aggregating brain data specific to individuals. However, the secure use of the data is necessary to build trust between care providers and patients, as well as in balancing the bioethical principles of beneficence and patient autonomy. We must ensure the highest quality of care to patients, while protecting their privacy, informed consent, and clinical trust. More sophis

  • Research Article
  • Cited by 5
  • 10.15802/stp2018/140547
REPEATED CONNECTIONS IN THE SCHEMES OF LINK SLIDER-CRANK MECHANISM OF GRIPPING DEVICE
  • Aug 14, 2018
  • Science and Transport Progress
  • R P Pogrebnyak

Purpose. The article aims to carry out a structural analysis of a gripping device as a mechanism with a variable structure and external unilateral constraints, to determine the number of repeated connections in the internal and external contours of the mechanism diagram, and to recommend ways to reduce them. Methodology. The problem is solved by means of the theory of mechanisms and machines, using Ozols's universal structural theory to analyze the gripping device as a mechanism with internal and external constraints. Findings. The design of schemes of mechanical gripping devices rarely includes a stage of structural analysis and synthesis of the mechanism; preference is given to the mandatory kinematic and kinetostatic calculations, layout, and design. When structural analysis is carried out, it is most often limited to calculating the number of degrees of freedom of the mechanism. The ten-link gripping device is built on the basis of a coupled parallelogram slider-crank mechanism with a leading slider. The leading slider acts on connecting rods joined by the rocker to the frame; the connecting rods bear the clamping elements of the gripping device. The added dyads form a parallelogram and provide plane-parallel movement of the clamping elements. Structural analysis was performed using structural schemes for two states of the mechanism: before clamping the object and with the object clamped. The main internal structural parameters of the kinematic scheme are: number of links, 10; number of connections, 13; number of contours, 4; mobility, 1; number of internal repeated connections, 11. The number of external connections is 12, the actual mobility of the mechanism is 1, the working mobility of the mechanism is 0, the number of mobilities of the external body lost from the action of external connections is 6, and the number of external repeated connections is 5. Originality.
A structural analysis of the coupled slider-crank mechanism of the gripping device as a variable-structure mechanism with internal and external connections is carried out for the first time. Contour search, analysis, and elimination of useless repeated connections in the internal and external contours of the mechanism are performed. Practical value. Practical recommendations for changing the mobility of kinematic pairs are proposed to reduce the number of repeated connections in the internal contours and to provide an unloading connection in the outer contour of the mechanism.

  • Research Article
  • Cited by 6
  • 10.62051/4se95x52
Study on the Impact of Utilizing ChatGPT and Other AI Tools for Feedback in EAP Writing Classrooms on the Discursive Writing Performance of English Major Students
  • Mar 12, 2024
  • Transactions on Social Science, Education and Humanities Research
  • Yaqi Wu

This study aims to delve into the impact of utilizing ChatGPT and other AI tools on the discourse writing performance of English major students, as well as to explore their potential value and future prospects in the field of English education. With the continuous development and application of artificial intelligence technology, AI tools have gradually demonstrated their unique advantages and potential in language learning and writing instruction. However, for English major students, how to effectively utilize these technological tools to enhance their discourse writing abilities remains a subject of considerable interest. In the field of language learning, AI technology has made significant strides in vocabulary acquisition, grammar correction, and discourse generation. ChatGPT, as a natural language generation model, possesses strong language comprehension and generation capabilities and is widely used in dialogue generation, text generation, and other areas. In English education, ChatGPT can provide personalized writing guidance and feedback to students, helping them clarify their thoughts and improve their expression skills. Additionally, other AI tools such as speech recognition technology and text-to-speech technology offer more possibilities for students' English learning and writing. However, despite the numerous advantages demonstrated by AI tools in discourse writing instruction for English major students, their application also faces challenges and limitations. For instance, AI tools still have limitations in understanding academic language and specialized terminology, and they cannot fully replace human professional judgment and language application abilities. Furthermore, the widespread adoption and use of AI tools require addressing technical challenges and adaptation issues in teaching practice to ensure their effective application in education. 
In conclusion, this study will comprehensively explore the actual effects and potential value of AI tools in discourse writing instruction for English major students through a combination of literature review and empirical research methods, providing new insights and guidance for teaching practice and future research endeavors.

  • Research Article
  • Cited by 9
  • 10.1016/j.tele.2024.102187
Generative artificial intelligence usage by researchers at work: Effects of gender, career stage, type of workplace, and perceived barriers
  • Sep 1, 2024
  • Telematics and Informatics
  • Pablo Dorta-González + 3 more

The integration of generative artificial intelligence technology into research environments has become increasingly common in recent years, representing a significant shift in the way researchers approach their work. This paper seeks to explore the factors underlying the frequency of use of generative AI amongst researchers in their professional environments. As survey data may be influenced by a bias towards scientists interested in AI, potentially skewing the results towards the perspectives of these researchers, this study uses a regression model to isolate the impact of specific factors such as gender, career stage, type of workplace, and perceived barriers to using AI technology on the frequency of use of generative AI. It also controls for other relevant variables such as direct involvement in AI research or development, collaboration with AI companies, geographic location, and scientific discipline. Our results show that researchers who face barriers to AI adoption experience an 11 % increase in tool use, while those who cite insufficient training resources experience an 8 % decrease. Female researchers experience a 7 % decrease in AI tool usage compared to men, while advanced career researchers experience a significant 19 % decrease. Researchers associated with government advisory groups are 45 % more likely to use AI tools frequently than those in government roles. Researchers in for-profit companies show an increase of 19 %, while those in medical research institutions and hospitals show an increase of 16 % and 15 %, respectively. This paper contributes to a deeper understanding of the mechanisms driving the use of generative AI tools amongst researchers, with valuable implications for both academia and industry.
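The kind of regression described above, isolating the effect of each factor with binary indicators, can be sketched on simulated data (the covariates, effect sizes, and noise level below are illustrative assumptions that merely echo the signs of the reported effects; they are not the study's survey data or exact specification):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
# Hypothetical binary covariates (assumptions, not the paper's dataset).
female   = rng.integers(0, 2, n).astype(float)
senior   = rng.integers(0, 2, n).astype(float)   # advanced career stage
barriers = rng.integers(0, 2, n).astype(float)   # reports barriers to adoption
noise    = rng.normal(0.0, 2.0, n)
# Simulated usage frequency whose signed effects echo the paper's findings.
usage = 50.0 - 7.0 * female - 19.0 * senior + 11.0 * barriers + noise

# Design matrix with an intercept column; ordinary least squares via lstsq.
X = np.column_stack([np.ones(n), female, senior, barriers])
coef, *_ = np.linalg.lstsq(X, usage, rcond=None)
print(coef)  # estimates close to the simulated truth [50, -7, -19, 11]
```

With dummy coding, each coefficient reads directly as the shift in usage associated with that group, holding the other indicators fixed, which is the sense in which such a model "isolates" each factor.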

  • Research Article
  • Cited by 3
  • 10.1177/09610006241309323
AI literacy of library and information science students: A study of Bangladesh, India and Pakistan
  • Jan 2, 2025
  • Journal of Librarianship and Information Science
  • Zakir Hossain + 2 more

This study adopted an exploratory approach to investigate the nuances of AI literacy among Library and Information Science (LIS) students in South Asia namely Bangladesh, India and Pakistan. A total of 632 respondents from these countries participated in an online survey that explored their level of AI literacy and familiarity with AI tools and technologies, purposes of using AI tools, ethical perceptions and how AI-related contents were covered in LIS courses and programmes. The study results indicate that students are moderately familiar with AI tools, but the degree of their self-rated AI literacy ranges from basic to advanced. Students reported using AI tools for academic purposes, including information searching, summarising articles, generating ideas and writing academic papers. However, participant LIS students expressed concerns about the ethical usage of AI and Generative AI tools, particularly academic integrity and plagiarism in academic writing. The results underscore the need for more robust AI literacy education in South Asian LIS education programmes – and potentially globally – to deepen students’ understanding and critical engagement with AI tools and technologies. This would better equip them for emerging roles in AI-integrated library services, highlighting a key direction for curriculum development, training methodologies and policy initiatives within LIS education and library and information management profession.

  • Research Article
  • 10.29407/jetar.v10i1.23626
AI Writing Tools on the Content and Organization of Students’ Writing
  • Apr 25, 2025
  • English Education:Journal of English Teaching and Research
  • Nur Aini Mursidha + 3 more

Writing in English is a problem for final-semester EFL students completing their theses, so AI technology has become a tool that can improve their writing. The researchers conducted this study with three English language education students at Nahdlatul Ulama Sunan Giri University to find out which AI tools they use most often and their perspectives on AI in the content and organization of writing. The research used a qualitative approach with semi-structured interviews. Data were collected through pre-observations of EFL students who had obtained high writing scores in the previous semester. The first result is that the AI tool they used most often was Grammarly, followed by QuillBot, Perplexity AI, DeepL Translator, and ChatGPT. The second result is that the respondents held both a positive perspective, that AI tools can support, improve, and simplify their writing in preparing their theses, and a negative one, that AI tools require stable internet access, impose usage restrictions, and in some cases are available only to premium users. This research can help teachers in teaching and developing writing with the help of AI tools. Further researchers are recommended to extend this work by using comparison classes, expanding the scope of the research, and employing more research instruments to further strengthen the results.

  • Research Article
  • 10.1093/noajnl/vdz039.007
SS1-KL-1 APPLICATION OF AI TECHNOLOGIES FOR MEDICAL CARE
  • Dec 16, 2019
  • Neuro-Oncology Advances
  • Ryuji Hamamoto

Owing to progress in machine learning algorithms, mainly deep learning, improvements in GPU performance, and the availability of large-scale public databases such as TCGA, AI technology has recently attracted great attention. While large countries such as the United States and China vigorously promote AI research and development as national policy, the Cabinet Office of the Government of Japan also emphasized the importance of AI technologies in the 5th Science and Technology Basic Plan in 2016. AI development has a relatively long history; the term “Artificial Intelligence” was first used at the Dartmouth workshop in 1956. However, AI development has not progressed smoothly, repeatedly alternating between active periods and periods of depression. The current active period is called the third AI boom, and what most distinguishes it from earlier booms is that AI technologies are already part of our social life, for example in AI-based face authentication devices. Indeed, the US Food and Drug Administration (FDA) has already authorized around 30 AI-based medical instruments, and the Pharmaceuticals and Medical Devices Agency (PMDA) in Japan authorized its first AI-based medical instrument last year. Now is therefore an important time to think deeply about creating an affluent society in which human beings and AI coexist. In this lecture, I focus in particular on medical imaging analysis using AI technologies and discuss efforts toward the medical application of AI, based on my experience promoting medical AI research as the leader of two national projects relevant to medical AI, CREST and PRISM, and of the RIKEN AIP center.

  • Research Article
  • 10.3390/mti9100110
From Consumption to Co-Creation: A Systematic Review of Six Levels of AI-Enhanced Creative Engagement in Education
  • Oct 21, 2025
  • Multimodal Technologies and Interaction
  • Margarida Romero

As AI systems become more integrated into society, the relationship between humans and AI is shifting from simple automation to co-creative collaboration. This evolution is particularly important in education, where human intuition and imagination can combine with AI’s computational power to enable innovative forms of learning and teaching. This study is grounded in the #ppAI6 model, a framework that describes six levels of creative engagement with AI in educational contexts, ranging from passive consumption to active, participatory co-creation of knowledge. The model highlights progression from initial interactions with AI tools to transformative educational experiences that involve deep collaboration between humans and AI. In this study, we explore how educators and learners can engage in deeper, more transformative interactions with AI technologies. The #ppAI6 model categorizes these levels of engagement as follows: level 1 involves passive consumption of AI-generated content, while level 6 represents expansive, participatory co-creation of knowledge. This model provides a lens through which we investigate how educational tools and practices can move beyond basic interactions to foster higher-order creativity. We conducted a systematic literature review following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines for reporting the levels of creative engagement with AI tools in education. This review synthesizes existing literature on various levels of engagement, such as interactive consumption through Intelligent Tutoring Systems (ITS), and shifts focus to the exploration and design of higher-order forms of creative engagement. The findings highlight varied levels of engagement across both learners and educators. For learners, a total of four studies were found at level 2 (interactive consumption). Two studies were found that looked at level 3 (individual content creation). 
Four studies focused on collaborative content creation at level 4, no studies were observed at level 5, and only one study was found at level 6. These findings show a lack of development in AI tools for more creative involvement. For teachers, AI tools mainly support levels 2 and 3, facilitating personalized content creation and performance analysis, with limited examples of higher-level creative engagement, indicating room for improvement in supporting collaborative teaching practices. The review found two studies at level 2 (interactive consumption) for teachers and four at level 3 (individual content creation). Only one study was found at level 5 (participatory co-creation), and none at level 6. In practical terms, the review suggests that educators need professional development focused on building AI literacy, enabling them to recognize and leverage the different levels of creative engagement that AI tools offer.

  • Research Article
  • 10.47772/ijriss.2025.910000022
Assessing Students' Satisfaction with AI Tools in Higher Education
  • Nov 1, 2025
  • International Journal of Research and Innovation in Social Science
  • Siti Haslini Zakaria + 4 more

Nowadays, AI-powered applications are increasingly integrated into academic fields, and numerous studies have discussed the acceptance of this technology among higher education students. As AI becomes established worldwide, empirical research remains necessary to evaluate user satisfaction, effectiveness, and long-term sustainability. Since cultural, social, and economic factors influence how AI is implemented in education, the level of acceptance of AI tools among students may also vary from one country to another. This study explores university students' satisfaction with AI tools in the context of higher education in Malaysia, specifically in Kelantan. It examines how satisfied students are with AI technologies used for their learning, with an emphasis on emotional well-being, content quality, and the perceived utility of the tools. Using a cross-sectional approach, 105 undergraduate students from various faculties at Universiti Teknologi MARA (UiTM) Kelantan were selected to participate. Students completed a self-administered questionnaire delivered via Google Forms. Simple random sampling was used, and the data were analyzed with Multiple Linear Regression (MLR). The findings showed that the only variables significantly influencing students' satisfaction with AI tools are emotional well-being and perceived utility, while content quality is not statistically significant. In other words, how students feel when using AI tools, together with their perception of the tools' benefits, matters more than the content itself. The results indicate that, to drive user satisfaction and long-term use of this technology, developers should prioritize usability, perceived benefits, and emotional engagement rather than solely enhancing algorithmic reliability or refining instructional content.
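The Multiple Linear Regression step described in this abstract can be sketched as follows. This is a minimal illustration only: the predictor names, Likert-style scales, and synthetic data are assumptions for demonstration, not the study's actual dataset or model.

```python
import numpy as np

# Sketch of an MLR of the form:
# satisfaction ~ emotional_well_being + content_quality + perceived_utility
# Synthetic data (NOT the study's responses); n = 105 matches the abstract.
rng = np.random.default_rng(0)
n = 105

emotional = rng.uniform(1, 5, n)  # hypothetical Likert-scale scores
content = rng.uniform(1, 5, n)
utility = rng.uniform(1, 5, n)

# Simulate satisfaction driven by emotion and utility only, mirroring the
# abstract's finding that content quality is not a significant predictor.
satisfaction = 0.6 * emotional + 0.5 * utility + rng.normal(0, 0.3, n)

# Design matrix with an intercept column; fit by ordinary least squares.
X = np.column_stack([np.ones(n), emotional, content, utility])
coef, *_ = np.linalg.lstsq(X, satisfaction, rcond=None)

print(dict(zip(["intercept", "emotional", "content", "utility"], coef.round(2))))
```

In a full analysis one would also inspect p-values for each coefficient (e.g. via a statistics package) to decide which predictors are significant, which is what distinguishes emotional well-being and perceived utility from content quality in the study.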

  • Research Article
  • 10.28925/2311-2409.2025.431
Scientific Ethics and Integrity in the Context of Artificial Intelligence
  • Feb 28, 2025
  • Pedagogical education theory and practice Psychology Pedagogy
  • L Khoruzha

The article examines the issue of the active implementation of generative artificial intelligence in scientific and research activities. It explores the advantages and risks of artificial intelligence in carrying out research tasks by scientists and analyses the attitudes of young researchers toward the use of AI. It is noted that the application of AI tools in scientific research requires a clear definition of new requirements for researchers and their scientific contributions, as well as the adjustment and supplementation of scientific ethics norms. Furthermore, the development of a new institutional policy on academic integrity is necessary at both the state and higher education institution levels. In an era where AI tools such as ChatGPT, Gemini, and Copilot are widely used to process large datasets, generate text, and assist in the design of scholarly publications, new ethical challenges emerge that require reflection and adaptation of existing academic norms. The study analyses attitudes of young researchers toward the use of AI in the preparation of academic texts and investigates the implications of AI-generated content for the principles of academic integrity, such as originality, transparency, and authorship. Methodologically, the research is grounded in both theoretical approaches (analysis, synthesis, classification, generalisation) and empirical data collected through a questionnaire distributed among postgraduate students majoring in Educational Sciences at Borys Grinchenko Kyiv Metropolitan University. Findings reveal that while a significant portion of respondents actively use AI tools for information retrieval, idea generation, and data visualization, they remain cautious about relying on AI for full-text production. Furthermore, concerns were raised regarding the potential for AI to contribute to unintentional plagiarism, data fabrication, and erosion of individual accountability. 
The article highlights the necessity of revising national and institutional ethical codes to include clear guidelines on responsible AI use in research. It also emphasizes the importance of fostering critical thinking, human oversight, and ethical transparency when incorporating AI technologies. In this context, the concept of academic integrity is redefined not only as adherence to existing values (honesty, trust, fairness, respect, responsibility, and courage) but also as the ability to navigate emerging technologies ethically. The paper concludes by recommending policy updates, educational initiatives, and the creation of regulatory frameworks that address the ethical dimensions of AI-assisted research. It develops recommendations and principles for the use of generative AI in education and research activities and highlights the need to establish a legal framework in Ukraine to regulate the use of generative artificial intelligence.

  • Research Article
  • 10.4028/www.scientific.net/amr.562-564.1655
Contour Extraction for Images of High Temperature Long Shaft Heavy Forgings
  • Aug 1, 2012
  • Advanced Materials Research
  • Qin Xia + 3 more

A non-contact measuring method based on a CCD camera is desirable for assuring the product quality of high-temperature long-shaft heavy forgings. In light of the RGB primary-color characteristics and the halation present in forging images, the mean red-channel gray value in the high-temperature area is proposed as a dynamic threshold for acquiring the external contours. Internal edges in the image of a hot forging are blurry and discontinuous, so a method based on quadratic B-spline curves is employed to extract and fit the internal contours. Experiments show that this method can effectively remove pseudo features and extract accurate internal and external contours from images of high-temperature squaring and chamfering forgings at 900 °C to 1050 °C.
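The dynamic-threshold idea in this abstract can be sketched as follows. The synthetic image, the 0–255 gray scale, and the 0.9 margin factor are assumptions for illustration; the paper's actual pipeline operates on real CCD forging images.

```python
import numpy as np

# Sketch of dynamic thresholding on the red channel: the mean red gray
# value of the hot (bright) region serves as the threshold separating
# the forging from the background. The image below is synthetic.
rng = np.random.default_rng(1)

red = np.full((64, 64), 30.0)          # dark background
red[16:48, 8:56] = 220.0               # bright high-temperature forging region
red += rng.normal(0, 5, red.shape)     # sensor noise

# Coarsely locate the high-temperature area, then take its mean red
# gray value as the dynamic threshold.
hot_guess = red > red.mean()
threshold = red[hot_guess].mean()

# Binary mask whose boundary is the external contour (0.9 margin is
# an illustrative tolerance, not from the paper).
mask = red >= threshold * 0.9
```

The external contour would then be traced along the boundary of `mask`; the paper additionally fits the blurry internal edges with quadratic B-spline curves, which this sketch does not cover.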
