The Convergence of AI and Communication Studies: A Normative Perspective

Abstract

AI technologies present both opportunities and risks for post-secondary institutions, requiring educators and students to reevaluate their epistemologies and practical guidelines at this juncture. The aim of the paper is not to propose specific curricular models for communication in the AI era. Instead, it offers normative considerations that communication scholars and students must reflect on. By engaging with a handful of core areas—creativity, creative thinking, AI ethics, interdisciplinarity, and originality—it highlights practices that should guide both research and teaching. These dimensions are not discrete but interdependent, working together to inform not only curriculum development but also broader normative guidance in the age of AI.

Similar Papers
  • Research Article
  • Cited: 3
  • 10.2139/ssrn.3796799
Trustworthy AI Implementation (TAII) Framework for AI Systems
  • Jan 1, 2021
  • SSRN Electronic Journal
  • Josef Baker-Brunnbauer

Companies and their stakeholders need practical tools and implementation guidelines, beyond abstract frameworks, to kick off the realization of Artificial Intelligence (AI) ethics. Based on my previous research, AI development companies are still at the beginning of this process or have not even started yet. How can the entry barrier to an AI ethics kickoff be lowered? I tackle this question by combining AI ethics research with previous research findings to create the Trustworthy AI Implementation (TAII) framework. A literature review was conducted that specifies the research and implementation status for each process step. The aim is to kick off AI ethics and to transfer research and abstract guidelines from academia to business. The TAII process generates a meta perspective on the systemic dependencies of ethics within the company ecosystem. It provides orientation for the AI ethics kickoff without requiring a deep background in philosophy and considers perspectives of social impact beyond the software and data engineering setting. Depending on the legal regulation or area of application, the TAII process can be adapted and used with different regulations and ethical principles.

  • Research Article
  • 10.46392/kjge.2023.17.5.241
A Case Study on the Use of the Photovoice Method for the Empathy Education of College Students: Focusing on the class ‘Exploring Human Relations and Emotions in the AI Era’ at D University
  • Oct 31, 2023
  • The Korean Association of General Education
  • Yunseo Iem

The purpose of this study was to apply the photovoice method, a participatory action research method, in an actual class in order to expand empathy education for college students. To this end, the study examined how students developed the ability to empathize and then drew out implications. Photovoice enables students to achieve a balanced growth of personal and social empathy by sharing personal experiences and by exploring social action through photos provided by research participants. A project using the photovoice method was presented in the course ‘Exploring Human Relations and Emotions in the AI Era’, a liberal arts class at D University in Seoul. The class was attended by 30 students, and digital tools were used mainly in offline classes. The instructor introduced the main topic of the photovoice project, ‘Loneliness of Youth in the AI Era’, and the students carried out photovoice activities in teams. As a result of the activities, students stated that they developed their ability to understand the thoughts and feelings of others in the areas of personal empathy and empathetic communication. Also, in the process of sharing the photovoice results, students’ social empathy capacity to solve social problems and seek policy alternatives improved. Based on these results, practical guidelines are put forth so that the photovoice method can be used in a wider variety of college classes.

  • Research Article
  • Cited: 1
  • 10.1002/aaai.12092
AAAI 23 Spring Symposium Report on “Socially Responsible AI for Well‐Being”
  • Jun 1, 2023
  • AI Magazine
  • Takashi Kido + 1 more

The AAAI 2023 spring symposium on “Socially Responsible AI for Well-Being” was held at the Hyatt Regency San Francisco Airport, California, from March 27th to 29th. AI has great potential for human well-being but also carries the risk of unintended harm. For our well-being, AI needs to fulfill social responsibilities such as fairness, accountability, transparency, trust, privacy, safety, and security, not just productivity goals such as exponential growth and economic and financial supremacy. For example, AI diagnostic systems must not only provide reliable results (highly accurate diagnoses and easy-to-understand explanations) but must also produce socially acceptable results: the training data for machine learning must not be biased by, for instance, race or location. As this example shows, AI decisions affect our well-being, which underscores the importance of discussing what is socially responsible in the many potential well-being situations of the coming AI era. The first perspective is “(Individual) Responsible AI,” which aims to identify the mechanisms and issues that should be considered when designing responsible AI for well-being. One goal of responsible AI for well-being is to provide accountable outcomes for our ever-changing health conditions. Since our environment often drives these changes in health, responsible AI for well-being is expected to offer accountable outcomes by understanding how our digital experiences affect our emotions and quality of life. The second perspective is “Socially Responsible AI,” which aims to identify the mechanisms and issues that should be considered to realize the social aspects of responsible AI for well-being. One aspect of social responsibility is fairness, that is, that the results of AI should be equally helpful to all. 
The problem of “bias” in AI (and in humans) must be addressed to achieve fairness. Another aspect of social responsibility is the transferability of knowledge among people. For example, health-related knowledge found by an AI for one person (say, tips for a good night's sleep) may not help another person, meaning that such knowledge is not socially responsible. To address these problems, we must understand how fair is fair and find ways to ensure that machines do not absorb human bias, so that they provide socially responsible results. The symposium included 18 technical presentations over two and a half days. Presentation topics included (1) socially responsible AI, (2) communication and evidence for well-being, (3) facial expression and impression for well-being, (4) odor for well-being, (5) ethical AI, (6) robot interaction for social well-being, (7) communication and sleep for social well-being, (8) well-being studies, and (9) information and sleep for social well-being. For example, Takashi Kido, of the Advanced Comprehensive Research Organization of Teikyo University in Japan, presented the challenges of socially responsible AI for well-being. Oliver Bendel, of the School of Business FHNW in Switzerland, presented on increasing well-being through robotic hugs. Martin D. Aleksandrov, of Freie Universität Berlin in Germany, presented on limiting inequalities in fair division with additive value preferences for indivisible social items. Melanie Swan, of University College London in the United Kingdom, presented on quantum intelligence and responsible human-machine entities. Dragutin Petkovic, of San Francisco State University in the United States, presented on the San Francisco State University Graduate Certificate in Ethical AI. The symposium provided participants unique opportunities for researchers with diverse backgrounds to develop new ideas through innovative and constructive discussions. 
The symposium also poses significant interdisciplinary challenges for guiding future advances in the AI community. Takashi Kido and Keiki Takadama served as co-chairs of the symposium. The papers of the symposium will be published online at CEUR-WS.org. The authors declare no conflicts of interest. Takashi Kido is a professor at Teikyo University in Japan and a former visiting researcher at Stanford University. Keiki Takadama is a professor at the University of Electro-Communications in Japan.

  • Book Chapter
  • 10.1007/978-3-030-56134-5_9
Neuromodulation of the “Moral Brain” – Evaluating Bridges Between Neural Foundations of Moral Capacities and Normative Aims of the Intervention
  • Jan 1, 2020
  • Christian Ineichen + 1 more

The question of whether neuroscience has normative implications or not becomes practically relevant when neuromodulation technologies are used with the aim of pursuing normative goals. The historical burden of such an endeavor is grave and the current knowledge of the neural foundations of moral capacities is surely insufficient for tailored interventions. Nevertheless, invasive and non-invasive neuromodulation techniques are increasingly used to address complex health disturbances and are even discussed for enhancement purposes, whereas both aims entail normative objectives. Taking this observation as an initial position, our contribution will pursue three aims. First, we summarize the potential of neuromodulation techniques for intervening into the “moral brain” using deep brain stimulation as a paradigmatic case and show how neurointerventions are changing our concepts of agency and personality by providing a clearer picture on how humans function. Second, we sketch the “standard model” explanations with respect to ethically justifying such interventions, which rely on a clear separation between normative considerations (“setting the goals of the intervention” or “the desired condition”) and empirical assessments (“evaluating the outcome of the intervention” or “the actual condition”). We then analyze several arguments that challenge this “standard model” and provide bridges between the empirical and normative perspective. We close with the observation that maintaining an analytical distinction between the normative and empirical perspective is reasonable, but that the practical handling of neuromodulation techniques that involve normative intervention goals is likely to push such theoretical distinctions to their limits. Keywords: Neuromodulation, Deep brain stimulation, Is-ought gap, Agency, Personality, Self-regulation

  • Research Article
  • 10.1177/14727978251364457
Impact of AI-generated imagery on foundation course design in industrial design education: An empirical study of curriculum value transformation
  • Jul 28, 2025
  • Journal of Computational Methods in Sciences and Engineering
  • Yiwei Jiang + 4 more

With the widespread application of AI image generation technology in higher education design fields, traditional design education models face the necessity of reevaluation. This study aims to explore how design aesthetic features (as objective product attributes) influence designers’ creative thinking and design expression (as subjective capabilities), and accordingly reassess the educational value of foundational design courses in the AI era. Using a comparative experimental method, the research recruited 25 first-year and 25 third-year industrial design students to create product designs using Midjourney, with 36 industrial design experts systematically evaluating the works. Results indicate that design aesthetic features impact design expression significantly more than design thinking, and the two student groups demonstrate notable differences in design element application: novice design students primarily express creativity through intuitive visual elements such as product patterns and product appearance, while advanced students more effectively utilize professional design elements like form contours and material textures, reflecting how design education facilitates students’ transition from perceptual cognition to rational analysis. Additionally, the positive correlation between creative thinking and design expression strengthens with deepening design education, indicating a mutually reinforcing relationship. Based on these findings, the paper suggests that foundational design courses in the AI-generated imagery era need repositioning: color and expression courses should shift from basic skill training to high-level theoretical education, creative thinking courses should increase significantly in importance, form and material courses maintain core value but need content updates aligned with AI characteristics, and human-computer collaborative design should become a new curricular direction. 
This study provides an empirical foundation for design education reform in the AI era, emphasizing the importance of understanding design essentials and cultivating innovative thinking.

  • Research Article
  • 10.34190/icer.2.1.4008
Algorithmic Teaching, Fading Thought? Rethinking Engagement in the AI Era
  • Oct 31, 2025
  • International Conference on Education Research
  • Melisa Chawaremera

As artificial intelligence becomes increasingly embedded in educational environments, the promise of enhanced efficiency, personalised instruction and expanded access to knowledge is celebrated globally. However, the pedagogical implications of algorithmic instruction remain under-theorised, particularly with regard to critical thinking and epistemic engagement. This paper interrogates how algorithm-driven content delivery and automated assessment systems may inadvertently narrow intellectual curiosity and encourage conformity while reducing learners to passive recipients of information. Through a doctrinal study of interdisciplinary literature in education, philosophy and AI ethics, this paper critically analyses how AI tools used in instructional design may entrench a form of educational minimalism that prioritises standardisation over inquiry. While artificial intelligence can personalise learning pathways, it also risks eliminating opportunities for open-ended exploration and problem solving. Through comparative insights from South Africa and the UK, the study reveals how algorithmic learning environments can either support or suppress higher-order thinking depending on contextual use and pedagogical design. This paper calls for a deliberate reconfiguration of AI-enabled education towards epistemically rich engagement in which learners are positioned as co-constructors of knowledge. It proposes a model of “algorithmic dialogism” that blends AI support with critical pedagogy, ensuring that technology develops as a tool for liberation rather than control. This contribution aligns with ongoing global debates on the ethics of AI in education and seeks to influence curriculum design that furthers curiosity, dialogue and reflective thinking in the digital age. Ultimately, this paper calls for a shift from efficiency-driven instruction to education that values diversity of thought and the cultivation of critical consciousness.

  • Research Article
  • 10.14251/jscm.2023.10.33
Dysfunction of Artificial Intelligence in the 4th Industrial Revolution Era and Crisis Management Direction
  • Oct 31, 2023
  • Crisis and Emergency Management: Theory and Praxis
  • Young Beom Kim

The purpose of this study is to examine the problems that AI will bring in the era of the 4th Industrial Revolution and suggest crisis management directions to overcome them. The main results are as follows. ① We must continue to create new jobs that only humans can do, focusing on personality and emotion, to replace jobs being eroded by AI. ② Enact correct standards and laws regarding the development and use of AI technology. There is a need to continue discussions focusing on problems that may arise in advance regarding the development of high-risk, mass-destructive AI weapons and the misuse of AI. ③ In the AI era, specific standards and laws must be established on how AI will make human-centered decisions by prioritizing human dignity in various situations. ④ Establishment of AI ethics. Preparing for the possibility that super-artificial intelligence will destroy humanity in the future should be treated as a common agenda with countries around the world participating. In order to prevent adverse effects of AI that threaten the survival of humanity, it is necessary to establish AI ethics based on human dignity.

  • Research Article
  • Cited: 1
  • 10.2307/1319956
Human Behavior: Its Implications for Curriculum Development in Art
  • Jan 1, 1971
  • Studies in Art Education
  • Donald Jack Davis

Since the first attempt to teach art in the schools, curriculum development in the arts has been a primary concern; any time one deals with the content and direction of a teaching-learning situation, one must face the basic issues of curriculum development. Those issues are three: stating objectives, devising plans for implementing them and, in some cases, attempting to evaluate the achievement of those objectives. It is with the first and last of these curriculum development concerns - stating objectives and evaluating the achievement of these objectives - that I would like to deal. As the history of art education tells us, curriculum development in the arts has taken many turns. Early efforts at developing a curriculum for the training of eye-hand coordination moved toward more industrially oriented art curricula in the 1850s. The late 19th century saw culturally oriented art programs; the early 20th century, child-centered art curricula. In the 1930s the emphasis was on good taste. The 1950s saw increasing emphasis laid on creative thinking. Today the major curriculum concerns of arts educators are interrelating the arts and aesthetic education. Large-scale curriculum development projects, as popular as they are today, are not new. As early as 1933, Dean Melvin Haggerty and his staff at the University of Minnesota launched a major curriculum development project in Owatonna, Minnesota. Funded by the Carnegie Corporation and based upon the premise that art is a way of life, the Owatonna Art Education Project attempted to implement art in daily living. Working in the schools as well as in the community, the project staff endeavored to channel the natural aesthetic interests of the people of this representative, small, midwestern city into formal and informal art activities. 
Although interrupted by the war, the project still had a significant impact upon art curricula around the country, as evidenced in the many art activities and projects attempting to deal with art in daily living. To take but one example of its influence upon the field, one might look at the well-known book Art Today, which was a direct outgrowth of this project. Its authors have done a brilliant job of reflecting the philosophy of the book - art today - in its quality. For a period of years, the field of art education did not engage in any major curriculum development projects, although many thousands of art teachers and supervisors across the country were engaged in curriculum development at a local or state level. Evidence of such endeavors is the many and varied art curriculum guides in use across the country today. With the increased interest of the government and private foundations in the arts during the past decade and the subsequent increase in federal and private money for curriculum research and development, we have experienced large-scale and sustained

  • Research Article
  • 10.54392/ajir2443
Traversing the Ethical Terrain of AI: A Conceptual Framework for Practicing Research and Publication in the AI Era
  • Dec 30, 2024
  • Asian Journal of Interdisciplinary Research
  • Akila S + 3 more

The ethical concerns of AI research and dissemination must be carefully considered, given the large impact of AI on communities and enterprises. This research explores the complicated ethical landscape shaped by recent advancements in AI. It examines concerns raised in research and publication through an extensive literature on AI ethics. Researchers, academics, and policymakers can address the challenges of AI research and dissemination with the help of this research guide. It highlights the need for guidelines to ensure responsible and ethical behaviour in AI research. This guide is an important resource for AI stakeholders. It promotes an ethical and responsible culture in the rapidly evolving field of AI research and publication.

  • Research Article
  • 10.38159/ehass.20256621
AI-Driven Leadership in Educational Policy: A Systematic Literature Review
  • May 23, 2025
  • E-Journal of Humanities, Arts and Social Sciences
  • Dean Collin Langeveldt

This article reviewed Langeveldt’s 2024 framework for educational policy. It highlights the role of Artificial Intelligence (AI) in making data-driven decisions easier. Policymakers can use AI to improve resources, curriculum, and policy choices. The paper analysed AI’s ability to sift through large data sets, spot trends, and predict outcomes. It also discussed challenges such as data privacy, bias, and unexpected effects. The study was based on theories such as constructivism and ethical AI. It aims to understand AI’s role in education. It shows examples in which AI has improved student performance and resource use. The research method included a review of existing studies on AI in education. It also offered solutions for schools facing resource constraints. The article ended with a call for AI-driven policies that are effective, fair, and adaptable. It found that AI can indeed improve policies by providing clearer insights and predictions. Finally, it suggests future research to explore AI’s evolving impact on education. Keywords: AI, Educational Policy, Data-Driven Decision Making, Curriculum Development, Ethical AI

  • Single Report
  • 10.54678/tdpd6847
Readiness assessment methodology. A tool of the Recommendation on the Ethics of Artificial Intelligence
  • Jan 1, 2025

The Readiness assessment methodology (RAM) is a macro level instrument that will help countries understand where they stand on the scale of preparedness to implement AI ethically and responsibly for all their citizens, in so doing highlighting what institutional and regulatory changes are needed. The outputs of the RAM will help UNESCO tailor the capacity building efforts to the needs of specific countries. Capacity here refers to the ability to assess AI systems in line with the Recommendation, the presence of requisite and appropriate human capital, and infrastructure, policies, and regulations to address the challenges brought about by AI technologies and ensure that people and their interests are always at the center of AI development. In November 2021, the 193 Member States of UNESCO signed the Recommendation on the Ethics of Artificial Intelligence, the first global normative instrument in its domain. The Recommendation serves as a comprehensive and actionable framework for the ethical development and use of AI, encompassing the full spectrum of human rights. It does so by maintaining focus on all stages of the AI system lifecycle. Beyond elaborating the values and principles that should guide the ethical design, development and use of AI, the Recommendation lays out the actions required from Member States to ensure the upholding of such values and principles, through advocating for effective regulation and providing recommendations in various essential policy areas, such as gender, the environment, and communication and information. The Recommendation mandated the development of two key tools, the Readiness Assessment Methodology (RAM) and the Ethical Impact Assessment (EIA), which form the core pillars of the implementation. 
These tools both aim to assess and promote the resilience of existing laws, policies and institutions to AI implementation in the country, as well as the alignment of AI systems with the values and principles set out in the Recommendation. The goal of this document is to provide more information on the Readiness Assessment Methodology, lay out its various dimensions, and detail the work plan for the implementing countries, including the type of entities that need to be involved, the responsibilities of each entity, and the split of work between UNESCO and the implementing country. 
UNESCO Catno: 0000385198. Doc code: SHS/REI/BIO/REC-AIETHICS-TOOL/AR. https://unesdoc.unesco.org/ark:/48223/pf0000385198_ara

  • Research Article
  • 10.31866/2617-796x.7.2.2024.317738
Discourse Around the Ethics of Artificial Intelligence: Features of Formation and Institutionalization
  • Dec 16, 2024
  • Digital Platform: Information Technologies in Sociocultural Sphere
  • Yuliia Trach

The purpose of the article is to identify the features of the formation of the discourse around the ethics of artificial intelligence, and to characterize the main legal acts that set out its key principles. Research methods: analysis and synthesis, generalization and abstraction, which made it possible to achieve the set goal. The scientific novelty lies in identifying the features of the public discourse (shaped by the strategies and limitations characteristic of the media arena) and the academic discourse (long, deep and reasoned discussions among researchers) around the ethics of artificial intelligence; in clarifying the main approaches (risk-oriented, by area of application) to setting out its key principles in legal acts at the international level; and in emphasizing the need to improve the tools for governing AI technologies, in particular the creation of global governance structures to prevent their misuse. Conclusions. Given the steady growth in the scale of data and AI use worldwide, systematic efforts are needed to increase literacy, awareness and education about the ethical consequences of using AI technologies. The ethical challenges associated with different uses of AI require interdisciplinary interaction with many stakeholders, as well as cooperation across cultures, organizations, academic institutions, and more. By directly addressing the ethical issues surrounding the development and use of AI, collaboration between policymakers, technologists, and ethicists can ensure that AI serves humanity responsibly and fairly. Finally, despite the promotion of policy approaches to regulating AI by some countries and international organizations, the impact of corporate investment in AI and the associated political responses concerning governance have yet to be assessed.

  • Research Article
  • 10.7037/jnttc.200506.0185
An Action Research Study on the “Community Art Map” Curriculum and Innovative Teaching in Elementary Schools (國小「社區藝術地圖」課程與創新教學之行動研究)
  • Jun 1, 2005
  • 黃嘉勝


  • Research Article
  • 10.34190/icair.5.1.3017
Artificial intelligence and the ethics of tomorrow
  • Dec 4, 2024
  • International Conference on AI Research
  • Brenda Van Wyk + 1 more

Traversing our digital information society safely and responsibly rests mainly on our comprehension of the vast sociotechnical nature of AI ethics risks and their implications and consequences. Ultimately, we would all prefer to live in a mature information society that is technologically just, inclusive and sophisticated, firmly rooted in ethical information philosophy and values. In this paper, the findings of a scoping review of recently reported research look, in particular, at the sociotechnical changes and impact that disruptive AI innovation has on societies, and at how this could shape new and futuristic nuances in AI ethics. The study delves into the interdisciplinarity of AI ethics. The role of intergovernmental collaboration in researching and providing frameworks and guardrails for upholding AI ethics is critically interrogated and explored. The study points to gaps in current research around AI ethics and stresses the need to deliberate on future AI ethics dimensions. The prerequisites for fostering further confidence and trust in AI technology are synthesised. The study concluded that inclusivity and justice in AI ethics have not yet been achieved at a global level, and that there is still a tendency towards cultural and other biases in designing, planning, implementing and regulating AI. More research is needed on the impact and trends of AI innovation in the Global South compared to the Global North.

  • Research Article
  • Cited: 64
  • 10.1007/s43681-021-00122-8
Blind spots in AI ethics
  • Dec 9, 2021
  • AI and Ethics
  • Thilo Hagendorff

This paper critically discusses blind spots in AI ethics. AI ethics discourses typically stick to a certain set of topics concerning principles evolving mainly around explainability, fairness, and privacy. All these principles can be framed in a way that enables their operationalization by technical means. However, this requires stripping down the multidimensionality of very complex social constructs to something that is idealized, measurable, and calculable. Consequently, rather conservative, mainstream notions of the mentioned principles are conveyed, whereas critical research, alternative perspectives, and non-ideal approaches are largely neglected. Hence, one part of the paper considers specific blind spots regarding the very topics AI ethics focuses on. The other part then critically discusses blind spots regarding topics that hold significant ethical importance but are hardly or not at all discussed in AI ethics. Here, the paper focuses on negative externalities of AI systems, discussing as examples the casualization of clickwork, AI ethics’ strict anthropocentrism, and AI’s environmental impact. Ultimately, the paper is intended as a critical commentary on the ongoing development of the field of AI ethics. It makes the case for a rediscovery of the strength of ethics in the AI field, namely its sensitivity to suffering and harms that are caused by and connected to AI technologies.
