The Philosophy of Semar as an Ethical Framework for the Use of Artificial Intelligence

Abstract

Artificial intelligence (AI) is a multidimensional phenomenon that profoundly impacts many aspects of modern human life. While global discussions of AI ethics have predominantly centered on Western perspectives, this study explores ethical AI development through the lens of Semar philosophy, representing the local wisdom of Nusantara. Employing library research with a philosophical approach, this study analyzes sources on Semar philosophy and their relevance to AI ethics. The findings reveal that: 1) The Ojo Dumeh principle promotes humility in the use and development of AI, preventing technological arrogance and misuse; the Eling principle emphasizes awareness of AI's intended purpose and its socio-environmental consequences, fostering responsible innovation; and the Waspada principle highlights the importance of risk mitigation, addressing challenges such as algorithmic bias, privacy concerns, and unequal access to technology. 2) Integrating these ethical values creates an opportunity for strategic collaboration among developers, users, and policymakers to craft regulations that harmonize global standards with local wisdom, reinforcing the importance of culture-based ethics education. 3) Key challenges to implementation include limited cultural awareness in AI ethics discourse, resource constraints, and difficulties in aligning local ethical values with global regulatory frameworks.
This study contributes to the ongoing discourse on AI ethics by introducing a localized ethical framework that balances technological advancement with cultural values. Further research is recommended to develop a structured implementation framework and an adaptive strategy for global integration, ensuring that local philosophical perspectives contribute to a more humane, inclusive, and ethically responsible AI ecosystem.

Similar Papers
  • Research Article
  • 10.69554/qnar4619
Ethics and privacy in AI regulation: Navigating challenges and strategies for compliance
  • Mar 1, 2025
  • Journal of Data Protection & Privacy
  • Marta Dunphy-Moriel + 1 more

A new summer of artificial intelligence (AI) started a year ago, promising tantalising technical developments and efficiencies of scale, while in parallel the Internet is flooded with advice, notes and analysis of AI's impact and risks. Although the potential of AI is promising and could help solve very real human challenges, the risks and societal impact are real too. With AI infiltrating all areas of life, such as online platforms, work, healthcare, social services and the justice system, it is essential that it is developed within key safety parameters. Furthermore, it is no secret that for AI to be effective it needs to process vast quantities of data, which is at odds with the General Data Protection Regulation (GDPR) principle of data minimisation. Businesses are repeatedly told to mitigate such risks to fundamental rights, privacy, discrimination, bias, etc. with stringent privacy and AI governance, all within an ethical framework and in compliance with existing legislation. Amid this bombardment of information, this paper seeks to provide practical guidelines for complying with existing privacy regulation while implementing safe and trustworthy AI. The first part considers compliance with the GDPR while developing or using AI, while the second part provides practical recommendations on implementing an ethical AI framework.

  • Conference Article
  • Cited by 1
  • 10.1136/spcare-2020-pcc.112
92 Ethical challenges of artificial intelligence technology in palliative care
  • Mar 1, 2020
  • Matthew Cavaciuti + 1 more

Background Artificial Intelligence (AI) is an area of computer science that involves the development of intelligent machines that work and react like humans. AI has the potential to improve healthcare delivery through purposeful analysis of clinical record data. Examples of AI use in palliative care include the analysis of electronic patient record data to predict survival, classify pain severity and identify important clinical discussions. Despite the opportunities of AI, there are a number of ethical challenges in using this technology in palliative care. Consequently, this study aimed to identify the ethical challenges of AI in palliative care. Methods A narrative scoping review of the literature was undertaken to identify evidence of AI use in palliative care. Three real-world case studies using AI in palliative care were critiqued in depth, using the four-principle ethical framework (Autonomy, Justice, Beneficence, Non-maleficence). Ethical challenges were identified and summarised into themes. Results Very few studies have examined the use of AI in palliative care; no studies discuss the ethical challenges as their primary focus. Ethical challenges for AI in palliative care were summarised into four themes: (1) Data privacy and security; (2) Artificial stupidity; (3) Prognostication; (4) Unexpected results and bias. Conclusions AI has the potential to support delivery in palliative care; however, a number of important ethical challenges need to be considered. AI healthcare data analysis should be built around an ethical framework. This is important in palliative care, as individuals may be more vulnerable compared to other specialties. Research to determine the views and opinions of patients, caregivers and healthcare professionals is urgently needed. Our work has led to the development of recommendations for ethical AI research in palliative care, which will hopefully guide meaningful use of this technology.

  • Discussion
  • Cited by 6
  • 10.1016/j.ebiom.2023.104672
Response to M. Trengove & coll regarding "Attention is not all you need: the complicated case of ethically using large language models in healthcare and medicine".
  • Jul 1, 2023
  • eBioMedicine
  • Stefan Harrer


  • Research Article
  • 10.3390/rel16080948
Artificial Intelligence: A New Challenge for Human Understanding, Christian Education, and the Pastoral Activity of the Churches
  • Jul 22, 2025
  • Religions
  • Wiesław Przygoda + 2 more

Artificial intelligence (AI) is one of the most influential and rapidly developing phenomena of our time. New fields of study are being created at universities, and managers are constantly introducing new AI solutions for business management, marketing, and advertising new products. Unfortunately, AI is also used to promote dangerous political parties and ideologies. The research problem that is the focus of this work is expressed in the following question: How does the symbiotic relationship between artificial and natural intelligence manifest across three dimensions of human experience—philosophical understanding, educational practice, and pastoral care—and what hermeneutical, phenomenological, and critical realist insights can illuminate both the promises and perils of this emerging co-evolution? In order to address this issue, an interdisciplinary research team was established. This team comprised a philosopher, an educator, and a pastoral theologian. This study is grounded in a critical–hermeneutic meta-analysis of the existing literature, ecclesial documents, and empirical investigations on AI. The results of scientific research allow for a broader insight into the impact of AI on humans and on personal relationships in Christian communities. The authors are concerned not only with providing an in-depth understanding of the issue but also with taking into account the ecumenical perspective of religious, social, and cultural education of contemporary Christians. Our analysis reveals that cultivating a healthy symbiosis between artificial and natural intelligence requires specific competencies and ethical frameworks. We therefore conclude with practical recommendations for Christian formation that neither uncritically embrace nor fearfully reject AI, but rather foster wise discernment for navigating this unprecedented co-evolutionary moment in human history.

  • Research Article
  • Cited by 2
  • 10.33423/jhetp.v25i2.7678
Unintended Consequences of Artificial Intelligence (AI): Skynet, the Terminator, and Extinction?
  • Jun 10, 2025
  • Journal of Higher Education Theory and Practice
  • Biff Baker

Recent scholarship and expert commentary emphasize the transformative yet precarious role of artificial intelligence (AI) in education. Studies highlight AI’s potential to personalize learning, enhance engagement, and optimize institutional operations, while underscoring the importance of ethical design, student motivation, and faculty readiness. Frameworks integrating AI into curricula stress the need for digital literacy, inclusive governance, and responsible innovation. However, risks—from academic dishonesty to existential threats posed by Artificial General Intelligence (AGI)—require urgent attention. Eric Schmidt’s warning about AI’s unpredictable autonomy, particularly in military systems, echoes calls for global safety standards, oversight, and a moratorium on large training runs. A comprehensive, multidimensional approach—including international cooperation, ethical frameworks, and public engagement—is essential to mitigate AGI risks. As AI evolves, educational institutions must balance innovation with accountability, ensuring that AI enhances learning and aligns with societal values and safeguards against catastrophic outcomes. Human oversight remains paramount in this emerging landscape. The pivotal question is not “how” to use AI, but whether it should be used at all!

  • Research Article
  • Cited by 5
  • 10.2139/ssrn.3198581
Artificial Intelligence: A Game Changer for the World of Work
  • Jan 1, 2018
  • SSRN Electronic Journal
  • Aida Ponce

‘Whoever becomes the ruler of AI will become the ruler of the world,’ said Vladimir Putin in September 2017. The USA, Russia and China are all adamant that artificial intelligence (AI) will be the key technology underpinning their national power in the future. What place, then, is there for Europe in this context? The European Commission has recently set out a European initiative on AI which focuses on boosting the EU's technological and industrial capacity, developing an innovation ecosystem, ensuring the establishment of an appropriate legal and ethical framework, and preparing for socio-economic changes. This edition of the Foresight Brief presents the results of a mapping exercise on AI’s impact on the world of work. It looks at the issues of work organisation and infrastructure, introduces the idea of ‘AI literacy’ for the workforce (as a necessary complement to technical reskilling), and details several AI risks for companies and workers. It also looks at aspects related to algorithmic decision making and the necessary establishment of an ethical and legal framework.

  • Research Article
  • 10.47772/ijriss.2025.92900008
Empowering Framework Sunnah Values: An Ethical Framework for Research in the Age of Artificial Intelligence (AI)
  • Dec 4, 2025
  • International Journal of Research and Innovation in Social Science
  • Mohd Aizul Yaakob + 6 more

The era of Artificial Intelligence (AI) has brought profound transformations to the landscape of modern research by driving productivity, innovation, and the generation of new knowledge. AI has been widely applied across various domains, including healthcare, education, information technology, and the social sciences. However, these advancements also raise serious ethical concerns, such as the risks of plagiarism, data manipulation, algorithmic bias, and privacy intrusion. In this regard, the present study aims to develop an ethical research framework grounded in the values of the Sunnah to guide researchers in addressing the challenges of scholarly inquiry in the age of AI. This study employs a qualitative approach through document analysis, drawing on two primary sources: religious texts, such as the Prophetic traditions (hadith) of Prophet Muhammad (peace be upon him) concerning honesty, trustworthiness, itqan (diligence), ihsan (excellence), and tabayyun (verification) as well as academic writings that examine Islamic ethics and research integrity. The data were thematically analyzed to identify how the values of the Sunnah can be applied as ethical guidelines in AI-driven research. The findings indicate that the incorporation of Sunnah-based values such as transparency, verification (tabayyun), diligence (itqan), and ihsan plays a pivotal role in maintaining academic integrity, enhancing research quality, and preventing the misuse of AI technologies. This study underscores that an ethical framework informed by the Sunnah is not only relevant in regulating research practices but also vital in ensuring that AI development serves the benefit of the ummah and aligns with the principles of universal well-being.

  • Preprint Article
  • 10.31234/osf.io/rvsxk_v2
Towards artificial general intelligence by reverse-engineering the human (heart-)mind
  • Feb 12, 2025
  • Victoria Klimaj + 1 more

In this final set of explorations/meditations (of three), we examine the requirements for developing artificial general intelligence (AGI) through the lens of human cognitive architecture, with particular emphasis on the role of narrative selfhood and social cognition. Drawing on perspectives from cognitive science, philosophy of mind, and artificial intelligence research, we critically evaluate current claims about the capabilities of large language models, particularly regarding their purported achievements of theory of mind and self-awareness. We argue that genuinely human-like artificial intelligence may require more than sophisticated pattern recognition and language modeling, potentially necessitating the development of coherent narrative self-models and rich causal understanding. Special attention is given to the relationship between consciousness, conscience, and trustworthy AI systems, suggesting that meaningful artificial intelligence may require forms of richly-embodied and socially-embedded development to achieve robust and reliable functionality. We conclude by proposing that the path to artificial general intelligence may require recapitulating aspects of human cognitive development, particularly regarding the construction of narrative identity and social-moral reasoning capabilities. This analysis has implications for both the technical development of AI systems and the ethical frameworks through which we evaluate artificial minds.

  • Research Article
  • 10.33790/jiti1100112
Artificial Intelligence in Higher Education: Ethical Challenges, Governance Frameworks, and Student-Centered Pathways
  • Jan 1, 2025
  • Journal of Information Technology and Integrity
  • Marc P Knox + 1 more

Artificial intelligence (AI) is rapidly changing the higher education sector, radically reshaping pedagogical processes, learning experiences, and administrative procedures. Although these innovations bring a new wave of personalization, operational efficiency, and predictability, they also raise serious ethical issues related to bias, fairness, governance, and surveillance. This review synthesizes AI studies in higher education, drawing on foundational literature, normative models, case studies, and technological advances. It describes the opportunities and threats of AI adoption and stresses the role of participatory design, governance systems, and, most fundamentally, continuous ethical oversight as a protective mechanism for the integrity and equity of AI systems, one that can guide the education sector and build trust among stakeholders. Keywords: Accountability, Algorithmic Governance, Artificial Intelligence (AI), Ethical Dilemmas, Ethical Frameworks, Ethics, Fairness, Frameworks, Governance, Higher Education, Institutional Governance, Student Trust, Surveillance, Technological Advancement, Technological Perspective, Widespread Datafication

  • Research Article
  • Cited by 4
  • 10.12688/wellcomeopenres.23021.1
The Ubuntu Way: Ensuring Ethical AI Integration in Health Research.
  • Oct 28, 2024
  • Wellcome open research
  • Brenda Odero + 2 more

The integration of artificial intelligence (AI) in health research has grown rapidly, particularly in African nations, which have also been developing data protection laws and AI strategies. However, the ethical frameworks governing AI use in health research are often based on Western philosophies, focusing on individualism, and may not fully address the unique challenges and cultural contexts of African communities. This paper advocates for the incorporation of African philosophies, specifically Ubuntu, into AI health research ethics frameworks to better align with African values and contexts. This study explores the concept of Ubuntu, a philosophy that emphasises communalism, interconnectedness, and collective well-being, and its application to AI health research ethics. By analysing existing global AI ethics frameworks and contrasting them with the Ubuntu philosophy, a new ethics framework is proposed that integrates these perspectives. The framework is designed to address ethical challenges at individual, community, national, and environmental levels, with a particular focus on the African context. The proposed framework highlights four key principles derived from Ubuntu: communalism and openness, harmony and support, research prioritisation and community empowerment, and community-oriented decision-making. These principles are aligned with global ethical standards such as justice, beneficence, transparency, and accountability but are adapted to reflect the communal and relational values inherent in Ubuntu. The framework aims to ensure that AI-driven health research benefits communities equitably, respects local contexts and promotes long-term sustainability. Integrating Ubuntu into AI health research ethics can address the limitations of current frameworks that emphasise individualism. 
This approach not only aligns with African values but also offers a model that could be applied more broadly to enhance the ethical governance of AI in health research worldwide. By prioritising communal well-being, inclusivity, and environmental stewardship, the proposed framework has the potential to foster more responsible and contextually relevant AI health research practices in Africa.

  • Research Article
  • Cited by 68
  • 10.1007/s11948-021-00336-3
Implementing Ethics in Healthcare AI-Based Applications: A Scoping Review.
  • Sep 3, 2021
  • Science and engineering ethics
  • Magali Goirand + 2 more

A number of Artificial Intelligence (AI) ethics frameworks have been published in the last six years in response to the growing concerns posed by the adoption of AI in different sectors, including healthcare. While there is a strong culture of medical ethics in healthcare applications, AI-based Healthcare Applications (AIHA) are challenging the existing ethics and regulatory frameworks. This scoping review explores how ethics frameworks have been implemented in AIHA, how these implementations have been evaluated and whether they have been successful. AI-specific ethics frameworks in healthcare appear to have limited adoption, and they are mostly used in conjunction with other ethics frameworks. The operationalisation of ethics frameworks is a complex endeavour, with challenges at different levels: ethics principles, design, technology, organisational, and regulatory. Strategies identified in this review include proactive, contextual, technological, checklist, organisational and/or evidence-based approaches. While interdisciplinary approaches show promise, how an ethics framework is implemented in an AI-based healthcare application is not widely reported, and there is a need for transparency for trustworthy AI.

  • Research Article
  • 10.1007/s10278-025-01656-7
Ethical Considerations in Patient Privacy and Data Handling for AI in Cardiovascular Imaging and Radiology.
  • Sep 24, 2025
  • Journal of imaging informatics in medicine
  • Saba Mehrtabar + 7 more

The integration of artificial intelligence (AI) into cardiovascular imaging and radiology offers the potential to enhance diagnostic accuracy, streamline workflows, and personalize patient care. However, the rapid adoption of AI has introduced complex ethical challenges, particularly concerning patient privacy, data handling, informed consent, and data ownership. This narrative review explores these issues by synthesizing literature from clinical, technical, and regulatory perspectives. We examine the tensions between data utility and data protection, the evolving role of transparency and explainable AI, and the disparities in ethical and legal frameworks across jurisdictions such as the European Union, the USA, and emerging players like China. We also highlight the vulnerabilities introduced by cloud computing, adversarial attacks, and the use of commercial datasets. Ethical frameworks and regulatory guidelines are compared, and proposed mitigation strategies such as federated learning, blockchain, and differential privacy are discussed. To ensure ethical implementation, we emphasize the need for shared accountability among clinicians, developers, healthcare institutions, and policymakers. Ultimately, the responsible development of AI in medical imaging must prioritize patient trust, fairness, and equity, underpinned by robust governance and transparent data stewardship.

  • Conference Article
  • 10.23919/mipro55190.2022.9803678
Artificial Intelligence Regulation in the Areas of Data Protection, Information Security, and Anti-discrimination in Western Balkan Economies
  • May 23, 2022
  • Djordje Krivokapic + 2 more

In order to improve trust in the implementation of artificial intelligence (AI), European Union (EU) institutions are in the final stages of developing an ethical and legal framework. In the meantime, Western Balkan (WB) economies (Albania, Bosnia and Herzegovina, Macedonia, Montenegro, Serbia and Kosovo) are pushing for the implementation of advanced information technologies and artificial intelligence, particularly in the public sector. This paper aims to provide an overview of regulatory approaches toward AI in WB economies, which are in the process of EU integration but often lack institutional capacities and need further strengthening of the rule of law. The paper first introduces the concept of regulation and the challenges of regulating IT and AI. Secondly, it maps key actors and stakeholders in the WB and analyzes the development of strategic documents, ethical frameworks, and legal regulations in the area of AI. Thirdly, it briefly compares the existing regulatory frameworks of WB economies in the areas of Data Protection, Information Security, and Anti-discrimination and their applicability to the implementation of AI technologies. Lastly, by presenting key principles of the EU approach to the regulation of AI, the paper provides recommendations for Western Balkan economies.

  • Research Article
  • 10.1080/10291954.2025.2523661
Building an ethical artificial intelligence corporate governance framework for the integration of emerging technologies into business processes
  • Jul 24, 2025
  • South African Journal of Accounting Research
  • Husain Coovadia + 3 more

Purpose To develop an ethical artificial intelligence (AI) corporate governance (CG) framework to guide South African business leaders in deploying and integrating AI into business processes, thus providing practical guidance to ensure responsible, transparent, and stakeholder-centric AI adoption. Motivation AI governance remains largely underdeveloped across Africa, particularly in South Africa, where businesses experience significant dilemmas in adopting and implementing an ethical AI framework. This study addresses that gap by developing a structured approach to ethical AI CG that supports responsible business practices. Design/Methodology/Approach A sequential mixed-methods approach was employed, combining a systematic literature review based on the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, with quantitative insights from a questionnaire. Main findings The study identified five essential elements for an effective ethical AI CG framework, namely transparency, machine bias, privacy, beneficial AI, and responsible AI, all of which must be stakeholder-centric. Practical implications A robust ethical CG framework tailored for South African business environments will encourage ethical AI adoption, strengthen adherence to the King IV Code, and enhance stakeholder trust while mitigating AI-driven risks inherent in technologies. The study emphasises continuous monitoring, stakeholder engagement, and compliance with legal frameworks like the Protection of Personal Information Act (POPIA). While the framework is tailored for South Africa, its principles can enjoy broader applications in other African business contexts. Novelty/Contribution This study developed a novel ethical AI CG framework for South African businesses using a sequential mixed-methods approach incorporating stakeholder views.
