Measuring attitudes toward responsible AI in engineering – development and validation of the RAISE scale
ABSTRACT As Artificial Intelligence (AI) technologies increasingly shape the field of engineering, ethical considerations are becoming essential for fostering responsible innovation. However, a validated instrument to assess attitudes toward Responsible AI, specifically in the engineering domain, is still missing to date. This study presents the development and validation of the Responsible AI Attitudes Specific to Engineering (RAISE) scale. After refining the item pool of a validated test instrument with expert input from the engineering domain, we conducted confirmatory factor analysis on data from 235 engineering students and professionals in Germany. The resulting 15-item scale measures engineers’ self-reported attitudes along three core Responsible AI dimensions: do-no-harm, transparency, and privacy. It demonstrates acceptable model fit, internal consistency, and measurement invariance across demographic groups. The RAISE scale can serve as a diagnostic and evaluative tool in engineering education and training programmes, helping to inform and assess efforts to foster Responsible AI engagement.
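The abstract above reports internal consistency for the three RAISE dimensions. As a minimal sketch of how such a check might be run on Likert-type item responses, the snippet below computes Cronbach's alpha per dimension with NumPy and pandas; the item-to-dimension mapping, column names, and data file are hypothetical placeholders, not the authors' actual instrument or data.

```python
# Hypothetical sketch: Cronbach's alpha for three assumed RAISE dimensions.
# Column names and the CSV file are illustrative placeholders, not the real instrument.
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of item columns (rows = respondents)."""
    items = items.dropna()
    k = items.shape[1]                         # number of items
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the sum score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Assumed 15-item layout: five items per dimension.
dimensions = {
    "do_no_harm":   [f"harm_{i}"  for i in range(1, 6)],
    "transparency": [f"trans_{i}" for i in range(1, 6)],
    "privacy":      [f"priv_{i}"  for i in range(1, 6)],
}

df = pd.read_csv("raise_responses.csv")  # placeholder path
for name, cols in dimensions.items():
    print(f"{name}: alpha = {cronbach_alpha(df[cols]):.2f}")
```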
- Conference Article
3
- 10.1109/ethics57328.2023.10154947
- May 18, 2023
Background: Questions surrounding the ethics of artificial intelligence (AI) have been debated for decades [1]. However, in recent years there have been multiple initiatives, scholarly reviews, and policy documents to identify and define the ethical issues in play [2]. Efforts to bring high-level principles into applicable practice are complex and can be lost in translation [3]. Moreover, the intentions behind responsible innovation, value-centric design principles, education efforts, and representative data management techniques all call for a proactive rather than reactive stance. Contemporary applications of AI are complex and difficult to explain, edit, and manage once integrated into a natural system [4] [5]. The analysis conducted within this systematic literature review (SLR) therefore clarifies methods for promoting and engaging practice at the front end of ethical and responsible AI. It explores the research question: How does each helix in the Quintuple Innovation model address responsible and ethical AI technology with anticipatory or proactive approaches? Methods: To conduct this ongoing research, an adaptation of the PRISMA framework and Hess & Fore's 2017 methodological approach guides the SLR [6] [7]. We included journal articles written in English and published between 2018 and 2023. The collected studies examine how academic scholarship approaches responsible AI within academia, government, industry, civil society, or the natural environment (the Quintuple Helix). The Web of Science, Google Scholar, and PhilPapers databases were used to identify a set of prominent publications in this field: AI & Society, Nature Machine Intelligence, Minds and Machines, IEEE Transactions on Technology and Society, AI and Ethics, Science and Engineering Ethics, and Communications of the ACM. A key limitation of this study is that it cannot capture the entire literature on proactively promoting ethical AI, given the vast size and definitional complexity of the associated fields. These inclusion criteria allow the researchers to manage the data and draw meaningful insights from the most current thinking reflected in the rapid development of AI innovation we see today. Results and discussion: This poster will present preliminary results and the theoretical framework that guided the qualitative coding process. Additionally, the poster will serve as a forum to collect experts' opinions about what they would like to see from this SLR dataset and how those elements can be incorporated into our coding. As a result, these data will be able to inform future work investigating multiple gaps in the literature. For instance, this study will result in a theoretical framework that identifies proactive approaches to responsible and sustainable AI aligned with the five sectors of innovation. Inspired by [8], the effects of investments in education, and other sectors, will be mapped as a chain of responsible AI innovation across all innovation sectors. Finally, we can draw informed conclusions about the use and misuse of experts in AI, ethics, education, and policy. By working towards these objectives, we can see how the interdisciplinary field has made (or not made) a collective effort toward promoting responsible AI, filling a gap in the literature by highlighting proactive approaches rather than reactive ones.
In conclusion, this data will inform experts across multiple domains about how to approach and organize a concerted effort to promote ethical and responsible AI in a pragmatic way.
- Research Article
- 10.15226/2474-9257/5/1/00147
- Jan 1, 2020
- Journal of Computer Science Applications and Information Technology
Technology based on artificial intelligence (AI) is a revolutionary force reshaping economies, societies, and industries around the world. Rooted in computer science and cognitive psychology, AI comprises a wide range of tools and methods designed to make machines capable of performing activities that have historically required human intellect. This abstract examines the many facets of AI technology, including its fundamentals, applications, challenges, and ramifications. AI technology spans several subfields, including robotics, computer vision, natural language processing, machine learning, and expert systems. Machine learning techniques in particular have propelled remarkable progress by allowing computers to learn from data and make judgments or predictions without explicit programming. Natural language processing allows machines to comprehend, interpret, and produce human language, facilitating human-computer interaction, while computer vision enables machines to see, analyze, and interpret visual data from the real world. Applications of AI technology can be found across industries, including manufacturing, healthcare, finance, transportation, agriculture, education, and entertainment. In healthcare, AI-powered solutions support drug discovery, medical imaging analysis, diagnosis, and customized therapy. In finance, AI algorithms power automated trading, fraud detection, risk assessment, and customer support. In transportation, AI enables predictive maintenance, traffic management, and driverless cars, and in manufacturing it enhances supply chain management, quality assurance, and production processes. Although AI technology has the potential to revolutionize many industries, it also brings dangers and problems, including privacy concerns, security hazards, ethical dilemmas, issues of prejudice and fairness, and effects on society and employment. Responsible AI methods, legal frameworks, multidisciplinary cooperation, and ethical standards are all necessary to meet these challenges. Future prospects for AI include the ability to solve challenging problems, spur creativity, increase productivity, and improve quality of life, but fully utilizing AI requires a comprehensive strategy that balances technological advancement with ethical considerations, human values, and social well-being. In summary, AI technology stands at the vanguard of innovation, presenting unprecedented possibilities to transform whole sectors, spur economic expansion, and tackle global issues. Through collaboration, transparency, and ethical stewardship, AI can usher in a future of greater human-machine collaboration, innovation, and prosperity. The study ranks AI approaches using the TOPSIS method: Interpretable Models receives the first rank, whereas Ethical AI receives the lowest rank. Keywords: Explainable AI (XAI), Interpretable Models, Ethical AI, Responsible AI, Robustness and Adversarial Defense, Continual Learning, Federated Learning, Human-Centric AI, AI Governance and Policy
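The closing sentence reports a TOPSIS ranking in which Interpretable Models comes first and Ethical AI last. The abstract gives neither the decision matrix nor the criteria weights, so the sketch below merely illustrates the standard TOPSIS steps (vector normalisation, weighting, ideal and anti-ideal points, relative closeness) on invented scores for a few of the listed alternatives.

```python
# Illustrative TOPSIS ranking; the decision matrix and weights are invented,
# not the values used in the cited study.
import numpy as np

alternatives = ["Interpretable Models", "Explainable AI (XAI)", "Ethical AI",
                "Federated Learning", "Human-Centric AI"]
# Rows = alternatives, columns = criteria (all treated as benefit criteria here).
X = np.array([
    [8.0, 7.5, 9.0],
    [7.0, 8.0, 7.0],
    [5.0, 6.0, 5.5],
    [6.5, 7.0, 6.0],
    [7.5, 6.5, 7.5],
])
w = np.array([0.4, 0.3, 0.3])               # criteria weights (sum to 1)

R = X / np.linalg.norm(X, axis=0)           # vector-normalise each criterion
V = R * w                                   # weighted normalised matrix
ideal, anti = V.max(axis=0), V.min(axis=0)  # ideal and anti-ideal solutions
d_plus  = np.linalg.norm(V - ideal, axis=1) # distance to ideal
d_minus = np.linalg.norm(V - anti,  axis=1) # distance to anti-ideal
closeness = d_minus / (d_plus + d_minus)    # relative closeness in [0, 1]

for name, c in sorted(zip(alternatives, closeness), key=lambda t: -t[1]):
    print(f"{name}: {c:.3f}")
```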
- Research Article
6
- 10.2139/ssrn.3873097
- Jan 1, 2021
- SSRN Electronic Journal
Artificial Intelligence and Corporate Social Responsibility: Employees’ Key Role in Driving Responsible Artificial Intelligence at Big Tech
- Research Article
41
- 10.1016/j.caeai.2024.100306
- Sep 19, 2024
- Computers and Education: Artificial Intelligence
Navigating the ethical terrain of AI in education: A systematic review on framing responsible human-centered AI practices
- Research Article
2
- 10.31436/iiumlj.v32i1.927
- May 31, 2024
- IIUM Law Journal
As Artificial Intelligence (AI) technologies continue to evolve rapidly, Malaysia faces the imperative of establishing a robust regulatory framework to address legal complexities and ensure responsible AI deployment. This paper examines the current landscape of AI legality in Malaysia, analysing existing laws and regulations governing AI applications across various sectors. It identifies key legal challenges, including issues related to data privacy, algorithmic transparency, liability, and ethical considerations. Emphasising the transition from mere legality to ethical responsibility, the paper advocates for a proactive approach in charting the course for AI regulation. The doctrinal research methodology is used in this paper. This paper will first discuss the use of AI in different sectors in Malaysia and then will highlight the various problems associated with it. This study also discusses newly adopted AI regulations by the EU and China, and also the progress of the USA and the UK on AI regulation. It proposes strategies for enacting a forward-looking regulatory framework that integrates ethical guidelines, promotes transparency, fosters collaboration between stakeholders, and establishes mechanisms for accountability. By navigating this trajectory towards responsible AI regulation, Malaysia can unlock the full potential of AI while upholding ethical standards, protecting individual rights, and mitigating risks associated with AI technologies.
- Research Article
- 10.62492/sefijeea.v3i1.48
- Feb 12, 2026
- SEFI Journal of Engineering Education Advancement
Being a critical enabler of research and development, data-driven systems like Artificial Intelligence (AI) are increasingly relevant to engineers. Due to their generalizability and wide-ranging functionality, they are closely interwoven with social developments. With it comes the responsibility for instilling the right values and the need to gain knowledge of AI and its implications for society. A master’s seminar at RWTH Aachen University trained engineering students on topics in the context of Responsible AI in engineering. To complement perspectives from industry and accreditation boards, we investigated students’ reflection papers on the course to determine the relevance that engineering students give to their education in Responsible AI. We found that prior to the seminar, students lacked knowledge about AI applications in engineering and assumed that technology (including AI) was neutral and unbiased. Yet after the seminar, students reported having corrected these assumptions. They expressed their positive beliefs about the importance of learning about Responsible AI in engineering, insisting that future engineers should consider the sociotechnical context of their work. This paper presents the results of the reflection paper analysis to address why engineering students see learning about Responsible AI, including its sociotechnical context, as relevant for their future careers.
- Book Chapter
- 10.53478/tuba.978-625-6110-04-5.ch27
- Nov 15, 2024
As AI technologies advance, their application in data analysis, predictive modeling, and strategic decision-making becomes increasingly integral to developing innovative solutions to combat the environmental challenges of our time. This narrative explores the pivotal role of responsible AI in the mitigation of the climate crisis, emphasizing the critical need for ethical, equitable, and effective AI implementations within the sphere of international relations. Central to this discourse is the concept of responsible AI or AI ethics, which advocates for the infusion of ethical principles and humanistic values throughout the lifecycle of AI technologies, from their inception and development to their deployment and operational use. The ethos of responsible AI is particularly salient in the context of climate change, where it underpins the operational integrity of AI applications, ensuring they are governed by ethical standards that emphasize data security, privacy, fairness, and environmental stewardship. In the realm of ethical and fair decision-making, responsible AI champions the principle of Inclusion of Diverse Perspectives, advocating for the active participation of varied societal segments and expert groups in the AI development process. This inclusivity enriches the decision-making landscape, engendering a more comprehensive and participatory approach to climate action. For AI to exert a substantive and positive impact on the climate crisis, it is imperative that policymakers, developers, and other stakeholders collectively embrace and actualize the principles of responsible AI. The study underscores the transformative potential of artificial intelligence (AI) in addressing the global climate crisis through a strategic examination of national legislation, international environmental treaties, and the analytical prowess of AI in processing climate data. It highlights the imperative of embedding ethical and responsible AI practices within the framework of existing and forthcoming climate-related legal instruments. This compact methodology focuses on dissecting how national laws and international agreements are increasingly integrating AI to bolster climate action, ensuring these innovations support a unified global response to the climate challenge. The ethical framework proffered by responsible AI furnishes a robust foundation for climate-related policymaking, alleviating the cognitive and ethical load on decision-makers. This framework encourages the formulation of policies that are not merely technologically innovative but also socially conscientious and ecologically responsible. In sum, the integration of AI into climate crisis resolution, guided by the principles of responsibility, ethics, and inclusivity, heralds a promising avenue for fostering global environmental sustainability and enhancing international cooperation in the face of one of the most daunting challenges of our time.
- Research Article
- 10.31804/2782-540x-2023-2-1-43-75
- Mar 27, 2023
- Asia, America and Africa: History and Modernity
The article addresses problems associated with the socio-cultural specifics of developing practical principles of "responsible artificial intelligence (AI)". The African continent, long burdened by stereotypes of backwardness and underdevelopment, is currently attracting considerable research attention. African countries, while effectively building their economic strategies, are also adopting AI technologies. The article focuses on the concept of Umeå University professor Virginia Dignum, outlined in one of the sections of the book "Responsible AI in Africa: Challenges and Opportunities" (Springer, 2023). The main logic of Dignum's theory is that the consequences of using AI technology depend not on the AI itself but on the socio-cultural characteristics of the socio-techno-anthropological environment in which these technologies are applied. Using Ubuntu philosophy as an example, Dignum shows how the sociocultural and philosophical aspects of "non-Western" thought can be applied to creating new socio-ethical principles that emphasize the cultural diversity of modern AI ethics. Her position on the sociocultural specifics of the ethics of "responsible AI" is important and can, in our opinion, be adapted to various sociocultural spaces that, to one degree or another, do not coincide with the Anglo-American paradigm of economic civilization.
- Research Article
32
- 10.1145/3485875
- Oct 25, 2021
- ACM Transactions on Interactive Intelligent Systems
With the rapid advances of Artificial Intelligence (AI) technologies and applications, an increasing concern is on the development and application of responsible AI technologies. Building AI technologies or machine-learning models often requires massive amounts of data, which may include sensitive, user private information to be collected from different sites or countries. Privacy, security, and data governance constraints rule out a brute force process in the acquisition and integration of these data. It is thus a serious challenge to protect user privacy while achieving high-performance models. This article reviews recent progress of federated learning in addressing this challenge in the context of privacy-preserving computing. Federated learning allows global AI models to be trained and used among multiple decentralized data sources with high security and privacy guarantees, as well as sound incentive mechanisms. This article presents the background, motivations, definitions, architectures, and applications of federated learning as a new paradigm for building privacy-preserving, responsible AI ecosystems.
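Federated learning, as described above, trains a shared model while raw data stays at each site and only model parameters are exchanged. The following minimal sketch of the federated-averaging idea uses a plain linear model and synthetic local datasets; real deployments add secure aggregation, differential privacy, and the incentive mechanisms mentioned in the abstract, which are out of scope here.

```python
# Minimal FedAvg-style sketch for a linear regression model on synthetic data.
# Real federated systems add encryption, secure aggregation, and DP noise.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

def make_client(n):
    """Synthetic local dataset that never leaves the 'client'."""
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    return X, y

clients = [make_client(n) for n in (50, 80, 120)]

def local_update(w, X, y, lr=0.1, epochs=5):
    """A few local gradient-descent steps on one client's private data."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

w_global = np.zeros(2)
for round_ in range(20):
    sizes = [len(y) for _, y in clients]
    local_ws = [local_update(w_global, X, y) for X, y in clients]
    # Server aggregates parameters only, weighted by local dataset size.
    w_global = np.average(local_ws, axis=0, weights=sizes)

print("federated estimate:", np.round(w_global, 3))  # approximately [2, -1]
```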
- Research Article
14
- 10.1007/s10676-023-09683-0
- Feb 14, 2023
- Ethics and Information Technology
The ongoing debate on the ethics of using artificial intelligence (AI) in military contexts has been negatively impacted by the predominant focus on the use of lethal autonomous weapon systems (LAWS) in war. However, AI technologies have a considerably broader scope and present opportunities for decision support optimization across the entire spectrum of the military decision-making process (MDMP). These opportunities cannot be ignored. Instead of mainly focusing on the risks of the use of AI in target engagement, the debate about responsible AI should (i) concern each step in the MDMP, and (ii) take ethical considerations and enhanced performance in military operations into account. A characterization of the debate on responsible AI in the military, considering both machine and human weaknesses and strengths, is provided in this paper. We present inroads into the improvement of the MDMP, and thus military operations, through the use of AI for decision support, taking each quadrant of this characterization into account.
- Conference Article
- 10.54941/ahfe1005899
- Jan 1, 2025
The concept of Digital Trust can be utilized to classify and assess the responsible design, implementation and use of artificial intelligence (AI) technologies. Laws, standards, and guidelines are essential as they support the establishment of procedures that promote responsible AI technologies and therefore broad added value, societal acceptance and public confidence in AI. This contribution introduces the 'Digital Trust Radar', a structured digital repository synthesizing seventy-eight guidelines, standards and laws relevant to establishing responsible AI in organizations. Through a systematic approach, these documents were categorized and analyzed based on various criteria including authorship, geographic focus, intended audience, AI application domain, AI type, and governance alignment. The findings reveal significant variability in the scope and thematic focus of AI-related laws, guidelines, and standards, emphasizing ethical, legal, and technical considerations. Our categorization scheme provides a comprehensive overview of international approaches to supporting AI governance for responsible AI and serves as a valuable resource for stakeholders navigating the complexities of AI design, integration and usage.
- Supplementary Content
10
- 10.1108/lhtn-10-2024-0186
- Nov 29, 2024
- Library Hi Tech News
Purpose The purpose of this paper is to introduce the artificial intelligence (AI) Citizenship Framework, a model that equips teachers and school library professionals with the tools to develop AI literacy and citizenship in students. As AI becomes increasingly prevalent, it is essential to prepare students for an AI-driven future. The framework aims to foster foundational knowledge of AI, critical thinking and ethical decision-making, empowering students to engage responsibly with AI technologies. By providing a structured approach to AI literacy, the framework helps educators integrate AI concepts into their lessons, ensuring students develop the skills needed to navigate and contribute to an AI-driven society. Design/methodology/approach This paper presents a theoretical framework, developed from the author’s experience as an information and digital literacy coach and teacher librarian across Asia, the Middle East and Europe. The AI Citizenship Framework was created without following specific empirical methodologies, drawing instead on practical insights and educational needs observed in diverse contexts. It outlines a scope and sequence for integrating AI literacy into school curricula. The framework’s components build on existing pedagogical practices while emphasising critical, ethical and responsible AI engagement. By providing a structure for AI education, it serves as a practical resource for school librarians and educators. Findings While no empirical data was collected for this theoretical paper, the AI Citizenship Framework offers a structured approach for school librarians and educators to introduce and develop AI literacy. It has the potential to influence AI education by fostering critical and ethical awareness among students, empowering them to participate responsibly in an AI-driven world. The framework’s practical application can be expanded beyond school librarians to include classroom teachers, offering a comprehensive model adaptable to various educational settings. Its real-world implementation could enhance students’ readiness to engage with AI technologies, providing long-term benefits for both educational institutions and the broader society. Research limitations/implications One limitation of the AI Citizenship Framework is that it has not yet been empirically validated. Future research could focus on testing its practical effectiveness in real-world settings, offering insights that may inform refinements and adaptations to better support school librarians and educators in fostering AI literacy and AI citizenship. Practical implications The practical implication of the AI Citizenship Framework is its application in educational settings to equip students with AI literacy and responsible citizenship skills. School library professionals and teachers can use the framework to integrate AI concepts into curricula, fostering critical thinking, ethical understanding and informed decision-making about AI technologies. The framework provides ready-to-use curriculum plans, enabling educators to prepare students for an AI-driven world. Its adaptability also allows classroom teachers to lead AI literacy initiatives, making it a versatile tool for embedding AI education across subjects and promoting responsible use and engagement with AI technologies in real-world contexts. 
Originality/value The originality and value of the AI Citizenship Framework lie in its approach to integrate AI literacy into educational contexts, specifically tailored for teacher librarians and school librarians. To the best of the authors’ knowledge, it is the first framework that comprehensively addresses the need for AI literacy from an ethical, critical and societal perspective, while also promoting active participation and leadership in AI governance. The framework equips educators with practical tools and curriculum plans, fostering responsible AI use and engagement. Its adaptable structure ensures it can be implemented by classroom teachers as well, adding significant value to AI education across disciplines and age groups.
- Research Article
1
- 10.1158/1557-3265.aimachine-pr-04
- Jul 10, 2025
- Clinical Cancer Research
Background: Artificial intelligence (AI) promises to rapidly transform healthcare by enhancing clinical workflows and improving patient outcomes. However, the integration of AI solutions also carries significant risk of harm due to discriminatory performance and inequitable outcomes across diverse patient populations. Existing frameworks aimed at promoting responsible AI development, such as SPIRIT-AI, CONSORT-AI, and TRIPOD+AI, provide guidelines for clinical trial design but lack concrete recommendations to identify and mitigate bias during clinical integration. Frameworks emphasizing ethical principles like equity, transparency, and accountability, including HEAAL, JustEFAB, and the Normative Framework, similarly fall short of offering detailed operational guidance for real-world AI deployment. Recognizing these gaps, we developed a novel Framework for Responsible AI Deployment in healthcare settings, incorporating structured, actionable steps to identify, mitigate, and monitor biases throughout the AI lifecycle. Methodology: Our framework (https://github.com/pmcdi/responsible-ai) was developed through a multidisciplinary collaborative approach whereby stakeholders with expertise in biostatistics, machine learning, ethics, clinical care, institutional governance, and diversity and inclusion, together with patient advocates, synthesized insights from existing frameworks and engaged in iterative and structured feedback sessions to ensure practical applicability and robustness. Results: This framework is organized into four distinct stages: (1) Problem Identification and Study Design, emphasizing equity-focused clinical question formulation and ethical compliance; (2) Model Training and Development, addressing biases in retrospective data and ensuring transparent performance evaluations; (3) Silent Deployment and Clinical Evaluation, prospectively validating model fairness and clinical applicability without direct patient impact; and (4) Clinical Deployment and Lifecycle Monitoring, providing continuous oversight of AI systems integrated into clinical workflows, emphasizing patient and clinician education, compliance monitoring, and adaptive maintenance. The framework is accompanied by a supplemental appendix which contextualizes each stage with concrete details such as recommended methods, pain points to consider, and academic references for exploration. Conclusions: Our framework addresses critical shortcomings in current practices to facilitate ethical and equitable AI deployment in healthcare. We are actively working with researchers at the Princess Margaret Cancer Centre to evaluate its utility across a breadth of clinical AI solutions at all stages of development. This framework can help institutions meet their ethical obligations; ensure AI-driven innovations align with foundational healthcare principles of fairness, safety, and quality; safeguard against harm; and ultimately improve trust in AI-enhanced clinical care. Citation Format: Benjamin Grant, Mattea Welch, Christopher Deutschman, Clare McElcheran, Adam Badzynski, Jennifer A.H. Bell, Andrew Hope, Robert C. Grant, Tran Truong, Kelly Lane, Patti Leake, Divya Sharma, Ian Stedman, Mike Lovas, Jeremy Petch, Muammar Kabir, Alejandro Berlin, James A. Anderson, Benjamin Haibe-Kains. A practical framework for operationalizing responsible and equitable AI in healthcare: Tackling bias, inequity, and implementation challenges [abstract].
In: Proceedings of the AACR Special Conference in Cancer Research: Artificial Intelligence and Machine Learning; 2025 Jul 10-12; Montreal, QC, Canada. Philadelphia (PA): AACR; Clin Cancer Res 2025;31(13_Suppl):Abstract nr PR-04.
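Stage 3 of the framework calls for prospectively validating model fairness during silent deployment. As one illustration of what such a check could look like, the sketch below compares true-positive and false-positive rates across patient subgroups on synthetic prediction logs and flags gaps above a chosen tolerance; the subgroup labels and the 0.05 tolerance are arbitrary assumptions, not recommendations from the cited framework.

```python
# Illustrative subgroup-performance audit; the 0.05 tolerance and groups are
# arbitrary examples, not thresholds prescribed by the cited framework.
import numpy as np
import pandas as pd

def group_rates(df: pd.DataFrame, group_col: str) -> pd.DataFrame:
    """True-positive and false-positive rate per subgroup."""
    rows = []
    for g, part in df.groupby(group_col):
        tpr = ((part.pred == 1) & (part.label == 1)).sum() / max((part.label == 1).sum(), 1)
        fpr = ((part.pred == 1) & (part.label == 0)).sum() / max((part.label == 0).sum(), 1)
        rows.append({"group": g, "tpr": tpr, "fpr": fpr, "n": len(part)})
    return pd.DataFrame(rows)

# Synthetic silent-deployment log: model predictions plus ground-truth outcomes.
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "group": rng.choice(["A", "B", "C"], size=600),
    "label": rng.integers(0, 2, size=600),
    "pred":  rng.integers(0, 2, size=600),
})

rates = group_rates(df, "group")
print(rates)
for metric in ("tpr", "fpr"):
    gap = rates[metric].max() - rates[metric].min()
    if gap > 0.05:  # arbitrary tolerance for illustration
        print(f"warning: {metric} gap across groups is {gap:.2f}")
```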
- Research Article
- 10.52783/jier.v5i2.3073
- Jun 28, 2025
- Journal of Informatics Education and Research
The rapid proliferation of artificial intelligence (AI) technologies across sectors has intensified the demand for ethical governance and responsible AI development. However, the integration of ethical instruction within AI-related academic programs remains inconsistent. This study investigates the impact of AI ethics education on students’ ethical knowledge, attitudes, and intentions to engage in Responsible AI practices. Utilizing a quantitative, cross-sectional survey design, data were collected from 210 undergraduate and graduate students enrolled in computer science and engineering programs across three universities. The results indicate that exposure to AI ethics education is significantly associated with increased self-reported ethical knowledge and stronger behavioral intentions to practice Responsible AI. Moreover, ethical knowledge emerged as a key mediator in the relationship between education and intention, suggesting that both direct and indirect effects are at play. These findings underscore the critical role of ethics education in cultivating a foundational ethical mindset among emerging AI professionals. The study contributes empirical evidence to ongoing discussions around curriculum design, ethical literacy, and policy frameworks aimed at ensuring the development and deployment of AI technologies in alignment with societal values.
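The reported mediation, in which ethical knowledge carries part of the effect of ethics education on intention to practise Responsible AI, corresponds to a standard indirect-effect estimate. The sketch below shows one common way to compute it (product of OLS coefficients with a percentile bootstrap) on synthetic data; the variable names and effect sizes are illustrative assumptions, not the study's estimates.

```python
# Illustrative product-of-coefficients mediation analysis on synthetic data;
# variable names and effect sizes are made up, not the study's results.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 210
education = rng.integers(0, 2, size=n).astype(float)         # exposure to ethics education
knowledge = 0.5 * education + rng.normal(scale=1.0, size=n)   # mediator: ethical knowledge
intention = 0.3 * education + 0.6 * knowledge + rng.normal(scale=1.0, size=n)

def ols_coef(y, *xs):
    """Slope coefficients of an OLS regression of y on the given predictors."""
    X = sm.add_constant(np.column_stack(xs))
    return sm.OLS(y, X).fit().params[1:]          # drop the intercept

a = ols_coef(knowledge, education)[0]             # education -> knowledge
b = ols_coef(intention, education, knowledge)[1]  # knowledge -> intention (controlling education)
print(f"indirect effect a*b = {a * b:.3f}")

# Percentile bootstrap for the indirect effect.
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, size=n)
    a_s = ols_coef(knowledge[idx], education[idx])[0]
    b_s = ols_coef(intention[idx], education[idx], knowledge[idx])[1]
    boot.append(a_s * b_s)
print("95% CI:", np.percentile(boot, [2.5, 97.5]).round(3))
```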
- Research Article
1
- 10.1108/jices-08-2024-0122
- Jan 7, 2025
- Journal of Information, Communication and Ethics in Society
Purpose: The aim of this research is to conduct a systematic review of the literature on responsible artificial intelligence (RAI) practices within the domain of AI-based Credit Scoring (AICS) in banking. This review endeavours to map the existing landscape by identifying the work done so far, delineating the key themes and identifying the focal points of research within this field. Design/methodology/approach: A database search of Scopus and Web of Science (last 20 years) resulted in 377 articles. This was further filtered for ABDC listing and augmented with manual search, resulting in a final list of 53 articles, which was investigated further using the TCCM (Theory, Context, Characteristics and Methodology) review protocol. Findings: The RAI landscape for credit scoring in the banking industry is multifaceted, encompassing ethical, operational and technological dimensions. The use of artificial intelligence (AI) in banking is widespread, aiming to enhance efficiency and improve customer experience. Based on the findings of the systematic literature review, we found that past studies on AICS have revolved around four major themes: (a) advances in AI technology; (b) ethical considerations and fairness; (c) operational challenges and limitations; and (d) future directions and potential applications. The authors further propose future directions for RAI in credit scoring. Originality/value: Earlier studies have focused on AI in banking or credit scoring in isolation. This review attempts to provide deeper insights, facilitating the development of this key field.