A Study of Generative AI in the Chinese Media Setting in the Context of the Nation's Economic Modernisation
This research examines the potential effects of generative artificial intelligence (AI) on China's media landscape. Given the pervasiveness of AI-powered technologies in media content creation, distribution, and personalisation, it aims to reveal how they contribute to the overall process of national progress. Using well-designed questionnaires, the study quantitatively collects data from media professionals, technologists, and communication scholars in large cities throughout China. Statistical tools such as structural equation modelling and regression analysis are used to investigate the interplay between the rate of modernisation, the effects of national development, and AI-driven media innovation. Media indices of generative AI demonstrate a clear positive correlation with modernisation and national development programmes. As China strives to digitally transform its communication infrastructure and increase its cultural influence, technological prowess, and media production, generative AI is playing an increasingly crucial role. The study shows that AI in media may lead to more dynamic stories, practical audience participation, and worldwide outreach, all underpinned by modernisation, each of which contributes to the advancement of national development goals. The results provide policymakers, media outlets, and AI developers with valuable information for formulating strategies to align AI with sustainable development objectives. Through an empirical examination of the interaction between generative AI and national development, viewed through a modernisation lens, this study provides a framework for future research on new media technologies and national change, and opens the discussion of the societal potential presented by AI.
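The modelling step described above can be illustrated with a minimal regression sketch. The variable names and synthetic data below are hypothetical stand-ins for the study's survey measures, and a single ordinary least-squares fit stands in for the fuller structural equation models the abstract mentions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical survey-style data: an AI-media adoption index (predictor)
# and a perceived national-development score (outcome). Both names are
# invented for illustration, not the study's actual measures.
n = 200
ai_media_index = rng.uniform(1, 5, size=n)
development_score = 0.8 * ai_media_index + rng.normal(0, 0.5, size=n)

# Ordinary least squares: solve for [intercept, slope] minimising ||X b - y||.
X = np.column_stack([np.ones(n), ai_media_index])
beta, *_ = np.linalg.lstsq(X, development_score, rcond=None)

# A positive slope mirrors the reported positive correlation between
# AI-driven media indices and national-development outcomes.
print(f"intercept={beta[0]:.2f}, slope={beta[1]:.2f}")
```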
- Research Article
- 10.30884/jfio/2023.04.01
- Dec 30, 2023
- Философия и общество (Philosophy and Society)
The article is devoted to the history of the development of ICT and AI, their current and expected future achievements, and the problems (which have already arisen but will become even more acute in the future) associated with the development of these technologies and their widespread application in society. It shows the close connection between the development of AI and cognitive science, the penetration of ICT and AI into various spheres, particularly health care, and the very intimate areas related to the creation of digital copies of the deceased and posthumous contact with them. A significant part of the article is devoted to the analysis of the concept of “artificial intelligence”, including the definition of generative AI. The authors analyse recent achievements in the field of Artificial Intelligence. Descriptions are given of the basic models, in particular large language models (LLMs), together with forecasts of the development of AI and the dangers that await us in the coming decades. The authors identify the forces behind the aspiration to create AI that is increasingly approaching the capabilities of so-called general/universal AI, and also suggest desirable measures to limit and channel the development of artificial intelligence. It is emphasized that the threats and dangers of the development of ICT and AI are particularly aggravated by the monopolization of their development by the state, intelligence services, major corporations and those often referred to as globalists. The article provides forecasts of the development of computers, ICT and AI in the coming decades, and also shows the changes in society that will be associated with them. The study consists of two articles. The first, published in the previous issue of the journal, provided a brief historical overview and characterized the current situation in the field of ICT and AI.
It also analyzed the concepts of artificial intelligence, including generative AI, and changes in the understanding of AI in connection with the emergence of the so-called large language models and related new types of AI programs (ChatGPT and similar models). The article discussed the serious problems and dangers associated with the rapid and uncontrolled development of artificial intelligence. This second article describes and comments on current assessments of breakthroughs in the field of AI, analyzes various predictions, and provides the authors’ own assessments and predictions of future developments. Particular attention is paid to the problems and dangers associated with the rapid and uncontrolled development of AI, and with the fact that advances in this field are becoming a powerful means of controlling the population, imposing ideologies, priorities and lifestyles, and influencing election results, as well as a tool in geopolitical struggles and efforts to undermine security.
- Research Article
- 10.30884/seh/2024.02.07
- Sep 30, 2024
- Social Evolution & History
The article is devoted to the history of the development of ICT and AI, their current and expected future achievements, and the problems (which have already arisen but will become even more acute in the future) associated with the development of these technologies and their widespread application in society. It shows the close connection between the development of AI and cognitive science, the penetration of ICT and AI into various spheres, particularly health care, and the very intimate areas related to the creation of digital copies of the deceased and posthumous contact with them. A significant part of the article is devoted to the analysis of the concept of ‘artificial intelligence’, including the definition of generative AI. The authors analyse recent achievements in the field of Artificial Intelligence. Descriptions are given of the basic models, in particular large language models (LLMs), together with forecasts of the development of AI and the dangers that await us in the coming decades. The authors identify the forces behind the aspiration to create AI that is increasingly approaching the capabilities of so-called general/universal AI, and also suggest desirable measures to limit and channel the development of artificial intelligence. It is emphasized that the threats and dangers of the development of ICT and AI are particularly aggravated by the monopolization of their development by the state, intelligence services, major corporations and those often referred to as globalists. The article provides forecasts of the development of computers, ICT and AI in the coming decades, and also shows the changes in society that will be associated with them. The study consists of two articles. The first, published in the previous issue of the journal, has provided a brief historical overview and characterized the current situation in the field of ICT and AI.
It has also analyzed the concepts of artificial intelligence, including generative AI, and changes in the understanding of AI related to the emergence of the so-called large language models and related new types of AI programs (ChatGPT and similar models). The article has discussed the serious problems and dangers associated with the rapid and uncontrolled development of artificial intelligence. This second article describes and comments on the current assessments of breakthroughs in the field of AI, analyzes various predictions, and provides the authors' own assessments and predictions of future developments. Particular attention is paid to the problems and dangers associated with the rapid and uncontrolled development of AI, and with the fact that advances in this field are becoming a powerful means of controlling the population, imposing ideologies, priorities and lifestyles, and influencing election results, as well as a tool in geopolitical struggles and efforts to undermine security.
- Conference Article
- 10.54941/ahfe1005735
- Jan 1, 2024
As the future commercial development direction of digital art, AI painting provides designers with creative auxiliary functions, but it also raises a series of business-model questions: How can the generative AI industry build a sustainable business model in the domestic market? How does generative AI take root in Chinese society? What sustainability-oriented generative AI operations can be applied in bottom-up business models in the market? This study analyses how the concept of sustainability can be applied scientifically in generative AI business models in the Chinese market. First, it conducts user research from the perspective of professional users in the creative design field, who use generative AI the most. Users are divided into three categories based on their usage behaviour and needs: expert users, general users, and potential users such as design and art students. In-depth interviews, questionnaire surveys, and live interactive data collection were conducted for each of these three categories, and the resulting data were analysed to derive the corresponding needs of the different user groups. The business model canvas proposed by Osterwalder and Pigneur (2009) was then used as the theoretical basis of the research: a customer (user)-centred semantic replacement of the elements of the business model was constructed, influential variables were screened automatically through a stepwise regression model, and the design of the business model was improved on the basis of this structure. In this way, a demand analysis of the three user groups for generative AI and a relationship model between user factors and the business impact of generative AI were constructed.
Based on the O2O (online-to-offline) development model, the existing resources of generative AI are integrated to drive continuous iteration and innovation in product design, service models, and user experience according to customer (user) needs, thereby promoting the sustainable development of generative AI in China. This research offers reference value for the future development of AI-industry business models in other fields in China.
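The stepwise screening of influential variables mentioned above can be sketched as a simple forward-selection loop. The feature names, synthetic data, and stopping threshold below are invented for illustration, not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical user-survey predictors of a "business impact" score;
# by construction, only the first two actually influence the outcome.
n = 300
X = rng.normal(size=(n, 4))
names = ["usage_frequency", "willingness_to_pay", "age", "region_code"]
y = 1.5 * X[:, 0] + 0.9 * X[:, 1] + rng.normal(0, 1.0, size=n)

def rss(features):
    """Residual sum of squares of an OLS fit on the given columns."""
    A = np.column_stack([np.ones(n)] + [X[:, j] for j in features])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    r = y - A @ beta
    return r @ r

# Forward stepwise selection: greedily add the variable that most
# reduces the RSS, stopping when the relative improvement is small.
selected, remaining = [], list(range(4))
current = rss(selected)
while remaining:
    best = min(remaining, key=lambda j: rss(selected + [j]))
    new = rss(selected + [best])
    if (current - new) / current < 0.05:  # crude stopping rule
        break
    selected.append(best)
    remaining.remove(best)
    current = new

print("screened-in variables:", [names[j] for j in selected])
```

On this synthetic data the loop retains the two genuinely influential predictors and screens out the noise variables, which is the role the stepwise model plays in the abstract's business-model analysis.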
- Research Article
- 10.7759/cureus.78257
- Jan 30, 2025
- Cureus
The advent of Generative Artificial Intelligence (Generative AI or GAI) marks a significant inflection point in AI development. Long viewed as the epitome of reasoning and logic, Generative AI incorporates programming rules that are normative. However, it also has a descriptive component based on its programmers' subjective preferences and any discrepancies in the underlying data. Generative AI generates both truth and falsehood, supports both ethical and unethical decisions, and is neither transparent nor accountable. These factors pose clear risks to optimal decision-making in complex health services such as health policy and health regulation. It is important to examine how Generative AI makes decisions both from a rational, normative perspective and from a descriptive point of view to ensure an ethical approach to Generative AI design, engineering, and use. The objective is to provide a rapid review that identifies and maps attributes reported in the literature that influence Generative AI decision-making in complex health services. This review provides a clear, reproducible methodology that is reported in accordance with a recognised framework and Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) 2020 standards adapted for a rapid review. Inclusion and exclusion criteria were developed, and a database search was undertaken within four search systems: ProQuest, Scopus, Web of Science, and Google Scholar. The results include articles published in 2023 and early 2024. A total of 1,550 articles were identified. After removing duplicates, 1,532 articles remained. Of these, 1,511 articles were excluded based on the selection criteria, and a total of 21 articles were selected for analysis. Learning, understanding, and bias were the most frequently mentioned Generative AI attributes. Generative AI brings the promise of advanced automation, but carries significant risk.
Learning and pattern recognition are helpful, but the lack of a moral compass, empathy, consideration for privacy, and a propensity for bias and hallucination are detrimental to good decision-making. The results suggest that there is, perhaps, more work to be done before Generative AI can be applied to complex health services.
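The screening counts reported above follow a PRISMA-style flow; as a quick consistency check using only the numbers stated in the abstract:

```python
# PRISMA-style screening flow, using the counts reported in the abstract.
identified = 1550            # records identified across four databases
after_duplicates = 1532      # records remaining after de-duplication
excluded_by_criteria = 1511  # records excluded on the selection criteria
included = 21                # articles selected for analysis

duplicates_removed = identified - after_duplicates
remaining = after_duplicates - excluded_by_criteria

print(f"duplicates removed: {duplicates_removed}")      # 18
print(f"selected for analysis: {remaining}")            # 21, matching the abstract
```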
- Supplementary Content
- 10.1007/s12194-025-00968-1
- Jan 1, 2025
- Radiological Physics and Technology
In recent years, generative AI has attracted significant public attention, and its use has been rapidly expanding across a wide range of domains. From creative tasks such as text summarization, idea generation, and source code generation, to the streamlining of medical support tasks like diagnostic report generation and summarization, AI is now deeply involved in many areas. Today’s breadth of AI applications is clearly distinct from what was seen before generative AI gained widespread recognition. Representative generative AI services include DALL·E 3 (OpenAI, California, USA) and Stable Diffusion (Stability AI, London, England, UK) for image generation, and ChatGPT (OpenAI, California, USA) and Gemini (Google, California, USA) for text generation. The rise of generative AI has been influenced by advances in deep learning models and the scaling up of data, models, and computational resources based on the Scaling Laws. Moreover, the emergence of foundation models, which are trained on large-scale datasets and possess general-purpose knowledge applicable to various downstream tasks, is creating a new paradigm in AI development. These shifts brought about by generative AI and foundation models also profoundly impact medical image processing, fundamentally changing the framework for AI development in healthcare. This paper provides an overview of diffusion models used in image generation AI and large language models (LLMs) used in text generation AI, and introduces their applications in medical support. This paper also discusses foundation models, which are gaining attention alongside generative AI, including their construction methods and applications in the medical field. Finally, the paper explores how to develop foundation models and high-performance AI for medical support by fully utilizing national data and computational resources.
- Research Article
- 10.1016/j.actpsy.2025.105791
- Nov 1, 2025
- Acta psychologica
Association between Generative AI self-efficacy and Generative AI acceptance: The mediating role of Generative AI trust and the moderating role of Generative AI risk perception.
- Book Chapter
- 10.4018/979-8-3693-3278-8.ch009
- Jun 28, 2024
This study examines the impact of Python-driven generative AI on media content creation and its ethical implications. Python's simplicity and extensive libraries have made it pivotal in AI development, enabling the generation of realistic content across various media formats. While these advancements promise significant enhancements in content creation efficiency and personalization, they also raise complex ethical issues, including concerns over authenticity, copyright infringement, and misinformation. Through surveys and case studies, this research explores the technological capabilities of generative AI, its transformative potential in the media landscape, and the ethical dilemmas it presents. The chapter advocates for a balanced approach to leveraging AI in media, emphasizing the need for frameworks that promote responsible use, ensuring innovation aligns with ethical standards and societal values.
- Research Article
- 10.34190/icair.4.1.3025
- Dec 4, 2024
- International Conference on AI Research
The rapid development of generative AI (GenAI) raises new questions in higher education, such as: What should university policy on GenAI be? How should courses be redesigned for fair and resilient assessment? What are the added pedagogical and didactic values of involving GenAI in teaching and learning activities? Universities have rapidly created and presented contradictory standpoints and draft policies, and teachers hold differing opinions on the pros and cons of GenAI. This study was carried out from a student perspective, in which 16 students examined their own Master's programme on sustainable information provision. The students assessed the assessment practices in their previous courses in the Master's programme. The aim of the study is to investigate how sustainable course activities and assignments are, and to explore how GenAI tools might support and facilitate teaching and learning activities. Moreover, the students were given the task of testing detection software on GenAI-generated solutions to assignments in chosen Master's courses. Students conducted these tasks as part of a 7.5 ECTS project course in the same Master's programme as the investigated courses. For inspiration and for background information on artificial intelligence, students participated in the first Symposium on AI Opportunities and Challenges (SAIOC) in December 2023. Data were gathered from the reports of 3 group projects, in which the 16 students investigated 5 freely chosen courses in the programme per group. Besides testing GenAI tools on existing activities and assignments, students also interviewed the subject matter experts responsible for the chosen courses. Results were first analysed and presented in group reports, combined with 16 individual reflection essays.
In the individual essays, students were instructed to bring up ethical perspectives on GenAI in higher education, and to present and discuss suggestions for how the current course design and assignments could be redesigned for improved sustainability and fairness. Finally, all the group reports and individual reflection essays were thematically analysed by the author, who is also the subject matter expert and main teacher for the project course. Findings show that many of the existing assignments in the Master's programme could be partly solved with different GenAI tools. The AI-generated solutions showed different levels of quality and correctness for different types of activities and assignments. An ethical concern raised in many student essays was the relatively poor quality of the tested detection software; one essay asked whether teachers should use detection software with an accuracy rate just above 50% to evaluate student submissions. The recommendation from both the students and the author is to provide clear instructions about when GenAI is and is not allowed in course activities, and to redesign the course structure for continuous assessment. With or without GenAI tools, continuous assessment, in which the whole study path through a course is assessed rather than only isolated submissions, would strengthen fairness and sustainability. Finally, several students suggested oral examinations as a complement to the existing assessment methods, even though their findings showed that GenAI tools can be used to prepare oral presentations.
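The essays' worry about detectors with accuracy just above 50% can be made concrete with a small Bayes'-rule calculation. The sensitivity, specificity, and base rate below are hypothetical illustrations, not figures from the study:

```python
# Hypothetical illustration: if a GenAI detector is only slightly better
# than chance, most flagged submissions are false positives.
sensitivity = 0.55  # assumed P(flagged | AI-written)
specificity = 0.55  # assumed P(not flagged | human-written)
base_rate = 0.20    # assumed share of AI-written submissions

# Bayes' rule: P(AI-written | flagged)
p_flag = sensitivity * base_rate + (1 - specificity) * (1 - base_rate)
ppv = sensitivity * base_rate / p_flag

print(f"P(flagged submission is actually AI-written) = {ppv:.2f}")
```

Under these assumed numbers, fewer than a quarter of flagged submissions would actually be AI-written, which illustrates why the students questioned evaluating submissions with such a detector.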
- Research Article
- 10.34190/icair.4.1.3136
- Dec 4, 2024
- International Conference on AI Research
The rapid development of generative AI (GenAI) technologies in recent years has enabled new opportunities as well as new challenges in higher education. While many studies in computer science have focused on GenAI in programming education, fewer have examined its possibilities and challenges in requirements engineering (RE). This study aims to explore the impact of GenAI on the pedagogical aspects of RE in higher education, focusing on the student perspective, to analyse how GenAI might influence learning experiences, knowledge acquisition, and skill development. The main research question to answer was: "What are the students’ perspectives of the integration of GenAI in the educational practices of requirements engineering?" An action research strategy was employed, with one of the authors also serving as the teacher in the investigated course. A mixed-methods approach was used to collect both qualitative and quantitative data from workshops and surveys. During the workshops, students used ChatGPT to generate and evaluate software requirements and compared these to manually crafted requirements. Thematic analysis of the qualitative data captured students’ perspectives, while survey data identified trends and preferences. Findings show that while students generally had a positive experience with GenAI, valuing its efficiency and the quality of generated requirements, they also recognized the need for human oversight to maintain accuracy. The study highlights both opportunities and challenges of using GenAI in RE education. While GenAI increased learning engagement and helped with brainstorming, students faced difficulties in creating effective prompts and found it time-consuming to refine AI-generated requirements. A hybrid approach, combining AI-generated and manually created requirements, proved most effective by balancing AI's advantages with human insights. Further research is needed on how GenAI could be effectively integrated into computer science education.
- Research Article
- 10.34190/icair.4.1.3026
- Dec 4, 2024
- International Conference on AI Research
In the current spring of Artificial Intelligence, the rapid development of Generative AI (GenAI) has initiated vivid discussions in higher education. Opportunities as well as challenges have been identified, and to cope with this new situation there is a need for large-scale teacher professional development. With basic GenAI skills, teachers could use the new technology as an extension of existing technology-enhanced teaching and learning. The aim of this paper is to present and discuss the project FAITH (Frontline Application of AI and Technology-enhanced Learning for Transforming Higher Education), a higher education pedagogical development initiative for institutional development aimed at teachers with good fundamental skills in traditional pedagogy. The project's overall objective is to increase staff understanding of AI and to develop new competencies in the fields of GenAI and technology-enhanced learning. The research question that guided this study was: "What are the perceived opportunities, challenges and expectations of involving GenAI in higher education?" The overall research strategy for the FAITH project is design-based research, which involves iterative and cumulative development processes. The early iteration that this study was part of was carried out inspired by collective autoethnography, with members of the steering group behind the FAITH project and members of the project team constituting the main focus group. Data were collected through structured interviews, in which two GenAI tools were also interviewed. Findings show that expectations are high, but that the FAITH ambition of institutional development depends on teachers’ motivation to take an active part in the project. Another challenge could be that many teachers see GenAI as something that threatens the current course design, and regard a general ban on GenAI as the appropriate solution.
One of several identified opportunities is that a general revision of syllabi and assessment, in an adaptation for GenAI-enhanced learning, would improve the current course design.
- Research Article
- 10.5840/npej20241217
- Jan 1, 2024
- Northern Plains Ethics Journal
Generative AI technologies have become increasingly prevalent in our day-to-day lives and demand an increasing share of our attention as they do so. Among the considerations brought to our attention is that of the ethical development and use of generative AI. While a focus on such considerations as human decision-making capacities or intellectual property rights, for example, has entered the conversation, I believe there is a lack of attention concerning generative AI’s effect on human effort. This paper seeks to introduce, consider, and advise on the issue of generative AI’s theft of human effort; effort is important to the human experience, and generative AI stands to rob us of countless opportunities to practice such effort. We must employ a human-centered approach in the development and use of generative AI to ensure its effect on humanity is positive and ethical. Otherwise, we leave the human experience susceptible to considerable losses, including that of human effort.
- Research Article
- 10.21732/skps.2023.111.83
- Jun 30, 2023
- Korean Publishing Science Society
ChatGPT is popular all over the world and shows the potential to exert a great influence on society as a whole. The importance of ChatGPT is that it became a catalyst for the public to directly experience generative AI services. Generative AI is already evaluated as having enough potential to bring about many changes in all areas of society, and the publishing sector is no exception; indeed, publishing is more likely to be impacted by generative AI than other fields. This study was conducted to understand and prepare for how generative AI will affect the future of publishing. Among the currently existing generative AI services, those in marketing, image, and audio were analysed as the ones that can directly affect the publishing process. Subsequently, an analysis of how generative AI can change writing, the core of publishing, found that generative AI is already being actively used across a wide spectrum of planning, question-and-answer, collaboration, and mixed content writing. The development and proliferation of generative AI are bringing new opportunities and challenges to publishing.
- Discussion
- 10.1080/13562517.2025.2497263
- May 20, 2025
- Teaching in Higher Education
The rapid development of Generative AI (GenAI) technologies has led to widespread endorsement of GenAI systems serving as a ‘personal tutor’ and learning ‘collaborator’ in higher education. However, because GenAI outputs are prone to ‘hallucinations,’ it has been suggested that students take responsibility for the accuracy of GenAI contributions to their learning. We rehabilitate Plato’s scepticism regarding writing and draw on Harry Frankfurt’s analysis of ‘bullshit’ to demonstrate that GenAI systems are constitutively epistemically irresponsible. We argue that the expectation on tertiary students to assume responsibility for their so-called ‘tutors’ and ‘collaborators’ is pedagogically perverse, amounting to a demand that students take sole responsibility for the accuracy of claims they are not able to properly assess. Moreover, to the extent that GenAI teaching systems replace students’ interaction with human teachers, it will be increasingly difficult for students to develop the skills and motivation to hold GenAI outputs to disciplinary standards.
- Research Article
- 10.1007/s43681-025-00688-7
- Mar 6, 2025
- AI and Ethics
The recent development and use of generative AI (GenAI) has signaled a significant shift in research activities such as brainstorming, proposal writing, dissemination, and even reviewing. This has raised questions about how to balance the seemingly productive uses of GenAI with ethical concerns such as authorship and copyright issues, use of biased training data, lack of transparency, and impact on user privacy. To address these concerns, many Higher Education Institutions (HEIs) have released institutional guidance for researchers. To better understand the guidance that is being provided we report findings from a thematic analysis of guidelines from thirty HEIs in the United States that are classified as R1 or “very high research activity.” We found that guidance provided to researchers: (1) asks them to refer to external sources of information such as funding agencies and publishers to keep updated and use institutional resources for training and education; (2) asks them to understand and learn about specific GenAI attributes that shape research such as predictive modeling, knowledge cutoff date, data provenance, and model limitations, and educate themselves about ethical concerns such as authorship, attribution, privacy, and intellectual property issues; and (3) includes instructions on how to acknowledge sources and disclose the use of GenAI, how to communicate effectively about their GenAI use, and alerts researchers to long-term implications such as overreliance on GenAI, legal consequences, and risks to their institutions from GenAI use. Overall, guidance places the onus of compliance on individual researchers, making them accountable for any lapses and thereby increasing their responsibility.
- Research Article
- 10.1007/s13347-025-00974-6
- Oct 11, 2025
- Philosophy & Technology
As generative AI becomes more deeply integrated into society, building public trust in this technology has emerged as a key challenge for policymakers. Existing approaches, such as the European Commission’s Trustworthy AI framework, largely seek to tackle this issue by offering comprehensive technical and legal measures for promoting a more trustworthy AI industry. However, this paper argues that such approaches are limited in scope and do not fully account for the social complexity of generative AI. As these technologies can now replicate modes of human communication and contribute to our collective knowledge, they cannot be simply considered products to be regulated. Rather, they exist as active social actors and AI policy should reflect this. To better account for this social role, this paper develops a network approach to trust in AI inspired by philosophy of technology and Actor-Network Theory (ANT). This approach argues that trust emerges, first and foremost, from the material interactions between social actors involved in a vast and precarious network. In the context of generative AI, this material network extends far beyond the AI industry to include those various actors that are not directly involved in AI development but that nonetheless influence public trust. As such, this paper argues that the policy goal of establishing trustworthy AI, and thus promoting public trust in AI, is not solely a matter of promoting a more trustworthy AI industry. Rather, to achieve such a goal, more diverse policy solutions need to be devised on the basis of social interactions as part of a whole-of-society approach. Primarily, this paper highlights that public trust in generative AI is influenced by those actors that play a key role in socio-political discourse such as political figures, media organizations, academic institutions and government bodies, among others. As such, public trust in generative AI is linked to trust in our information environment more broadly. 
To conclude, the paper argues that policymakers seeking to promote trustworthy AI must first seek to combat the current post-truth political crisis and restore public trust in democratic institutions.