Assessment Method for Generative AI Technology in Foresight and Policy Design in Public Management: Expanding AI Trustability for Anticipatory Governance

Abstract
Objective: to develop and evaluate a generative AI system prototype and an assessment method that support anticipatory governance (AG) by integrating foresight and policy design, enabling stakeholders to anticipate and proactively address emerging challenges in public policy. Methods: the study uses a design science research approach, combining institutional and explainable AI frameworks. It designs and assesses a generative AI prototype through three case scenarios focusing on environmental, electoral, and labor regulations, and expands the results into an assessment protocol. Results: the analysis demonstrates the strengths and limitations of generative AI in AG systems. The study produces a systemic framework and an assessment protocol for evaluating AI’s role in augmenting AG capabilities, focusing on enhancing trust and reliability. Conclusions: the article’s main contribution is the proposed assessment protocol, which contributes to both theory and practice by providing a replicable method for enhancing trustability in AI-driven AG. The findings support researchers and policymakers in reflecting on and utilizing responsible AI to navigate complex geopolitical, environmental, and societal challenges.

Similar Papers
  • Research Article
  • Citations: 12
  • 10.25172/smustlr.26.2.4
Generative AI Art: Copyright Infringement and Fair Use
  • Jan 1, 2023
  • SMU Science and Technology Law Review
  • Michael D Murray

The discussion of AI copyright infringement or fair use often skips over all the required steps of the infringement analysis in order to focus on the most intriguing question, “Could a visual generative AI generate a work that potentially infringes a preexisting copyrighted work?” and then the discussion skips further ahead to, “Would the AI have a fair use defense, most likely under the transformative test?” These are relevant questions, but without considering the actual steps of the copyright infringement analysis, the discussion is misleading or even irrelevant. This neglecting of topics and stages of the infringement analysis fails to direct our attention to a properly accused party or entity whose actions prompt the question. Making a sudden transition from a question of infringement in the creation of training datasets to the creation of foundation models that draw from the training data to the actual operation of the generative AI system to produce images makes a false equivalency regarding the processes themselves and the persons responsible for them. The questions ought to shift focus from the persons compiling the training dataset used to train the AI system and the designers and creators of the AI system itself to the end users of the AI system who conceive of and cause the creation of images. The analysis of infringement or fair use in the generative AI context has suffered from widespread misunderstanding concerning the generative AI processes and the control and authorship of the end-user. Claimants, commentators, and regulators have made incorrect assumptions and inaccurate simplifications concerning the process, which I refer to as the Magic File Drawer theory, the Magic Copy Machine theory, and the Magic Box Artist theory. These theories, if they were true, would be much easier to envision and understand than the actual science and technology that goes into the creation and operation of a contemporary visual generative AI system. 
Throughout this Article, I will attempt to clarify and correct the understanding of the science and technology of the generative AI processes and explain the different roles of the training dataset designers, the generative AI system designers, and the end-users in the rendering of visual works by a generative AI system. Part II will discuss the requirements of a claim of copyright infringement including each step from the copyrightability of the claimant’s work, the doctrines that limit copyrightability, the requirement of an act of copying, and the infringement elements. Part III will summarize the copyright fair use test paying particular attention to the purpose and character of the use analysis, 17 U.S.C. § 107(1), and the current interpretation of the “transformative” test after Andy Warhol Foundation v. Goldsmith, particularly in circumstances relating to technology and the use of copyrighted or copyrightable data sources. Part IV will analyze potential infringement or fair use by the creators of generative AI training datasets. Part V will analyze potential infringement or fair use by the creators of visual generative AI systems. Part VI will analyze potential infringement or fair use by the end-users of visual generative AI systems. For all their complexity, visual generative AI systems are tools that depend on an end-user who conceives of and designs the image and provides the system with a prompt to set the generative process in motion. The end-users are responsible for crafting the prompt or series of prompts used, for evaluating the outputs of the generative AI, for adjusting and editing the iterations of images offered by the AI system, and ultimately for selecting and adopting one of the images generated by the AI as the final image. The end-users then make further decisions about the actual use and its function and purpose for the images the end-users selected and adopted from the outputs of the AI. 
While working with the AI tool to try to produce a certain image, an end-user might steer the system to produce a work that could, under an infringement analysis, be regarded as potentially infringing, which would lead us again to the fair use analysis based on the end-user’s use of the image.

  • Research Article
  • Citations: 1
  • 10.12688/mep.20815.1
Is your curriculum GenAI-proof? A method for GenAI impact assessment and a case study
  • Mar 26, 2025
  • MedEdPublish
  • Remco Jongkind + 6 more

Background Generative AI (GenAI) such as ChatGPT can take over tasks that previously could only be done by humans. Although GenAI provides many educational opportunities, it also poses risks such as invalid assessments and irrelevant learning outcomes. This article presents a broadly applicable method to (1) determine current assessment validity, (2) assess which learning outcomes are impacted by student GenAI use and (3) decide whether to alter assessment formats and/or learning outcomes. This is exemplified by a case study of our medical informatics curriculum. We developed a five-step method to evaluate and address the impact of GenAI. In a collaborative manner, the courses in a curriculum are analysed on their assessment plans, and together with the teachers, the courses are adapted to address the impact of GenAI usage. Results 57% of assessments, especially in writing and programming, were at risk of reduced validity and relevance. GenAI impact on assessment validity was more closely related to the content and structure of assessments than to their complexity according to Bloom’s taxonomy. During educational retreats, lecturers discussed the relevance of impacted learning outcomes and whether students should be able to achieve them with or without GenAI. Furthermore, the results led to a plan to increase GenAI literacy and use over the years of study. Subsequently, the coordinators were asked either to adjust their assessment formats to preclude GenAI use, or to alter the learning outcomes to include GenAI use and literacy. For 64% of the impacted assessments the assessment format was adapted, and for 36% the learning outcomes were adapted. Conclusion The majority of assessments in our curriculum were at risk of reduced assessment validity and relevance of learning outcomes, leading us to adapt either the assessments or the learning outcomes. This method and case study offer a potential blueprint for educational institutions facing similar challenges.

  • Research Article
  • Citations: 1
  • 10.47941/ijf.2210
The Advent of Generative AI and Financial Industry
  • Aug 27, 2024
  • International Journal of Finance
  • Umesh Kumar + 1 more

Purpose: This paper explores the recent literature on Generative AI applications in the financial industry and delineates its role in the future. Methodology: Our paper follows a secondary research approach, analyzing the current literature on Generative AI in finance. Secondary research is an essential tool for understanding background information, identifying research problems, and filling literature gaps. This paper studies the potential financial benefits and risks of Generative AI, providing unique insights into the financial landscape in the coming years. Findings: The findings unveil that Generative AI can become a strategic tool to redefine financial services and operational effectiveness. It can substantially improve services by reducing costs, bringing efficiency, and enhancing corporate performance. It has enormous transformative power to revolutionize client product and service offerings, improve risk management assessments, and bring efficiency to operations. However, our study indicates that the financial services industry may adopt practices and decisions that are potentially unethical, and that financial exclusion may result from bias embedded in the algorithms and design of Generative AI technologies. Since Generative AI continues to evolve, its role and effectiveness in decision-making are expected to shape the financial services landscape significantly. Unique Contribution to Theory, Practice, and Policy: Generative AI can be a game changer for the financial industry, fueling digital transformation across industries. Its transformative potential can optimize operations, revolutionize customer experiences, and drive innovation in finance. Our paper suggests how policymakers can foresee the challenges that Generative AI poses for financial services, where it is already challenging the existing regulatory landscape. To stay ahead of the competition, financial firms must balance data privacy and algorithmic bias and ensure the responsible use of AI.

  • Book Chapter
  • 10.4018/979-8-3693-8939-3.ch003
Case Studies and Applications of Generative AI in Real-World Cybersecurity Scenarios
  • Sep 27, 2024
  • Azeem Khan + 4 more

This chapter examines the major effects of generative artificial intelligence on cybersecurity. It first surveys what generative AI can do, illustrated through one class of generative model, then considers how generative AI strengthens defences against emerging threats, and reflects on its impact on detecting malware and subtle network vulnerabilities. It emphasizes generative AI's ability to detect malware designed to evade detection, with examples of real attempted infections. The chapter then discusses how generative AI interacts with threat intelligence feeds, noting that these offer a way to learn about cyber-attacks before intrusion attempts occur. It goes on to explain how generative AI supports behavioural analysis and user authentication, and how privacy can be protected while learning about network threats, focusing on collaborative learning and differential privacy. Next, it addresses how well generative AI systems withstand adversarial attacks and whether they are scalable for cybersecurity, before extending the discussion to ethical and legal concerns. In conclusion, the chapter suggests that generative AI has potential benefits that can be tapped for better security and privacy of the connected digital devices online, and it calls for collaboration among all stakeholders to build the stronger defence mechanisms that generative AI can provide against intrusions and anomalies that can infect our networks. The chapter is aimed at cybersecurity specialists, researchers, and policymakers.

  • Research Article
  • Citations: 4
  • 10.1016/j.techsoc.2024.102758
Theoretical dimensions for integrating research on anticipatory governance, scientific foresight and sustainable S&T public policy design
  • Nov 9, 2024
  • Technology in Society
  • Mateus Panizzon + 1 more

  • Preprint Article
  • 10.31234/osf.io/cne9j_v1
Assessing students’ DRIVE: An evidence-based framework to evaluate learning through students’ interactions with generative AI
  • Jun 26, 2025
  • Manuel J B Oliveira + 4 more

As generative AI (GenAI) transforms how students learn and work, higher education must rethink its assessment strategies. This paper presents a taxonomy and conceptual framework (DRIVE) to evaluate student learning from GenAI interactions (prompting strategies), focusing on cognitive engagement (Directive Reasoning Interaction) and knowledge infusion (Visible Expertise). Despite extensive research mapping student GenAI writing behaviors, practical tools for assessing domain-specific learning remain underexplored. This paper shows how GenAI interactions inform such learning in authentic classroom contexts, moving beyond technical skills or low-stakes assignments. We conducted a multi-methods analysis of GenAI interaction annotations (n=1450) from graded essays (n=70) in STEM writing courses. A strong positive correlation was found between high-quality GenAI interactions and final essay scores, validating the feasibility of this assessment approach. Furthermore, our taxonomy revealed distinct interaction profiles: high essay scores correlated with a "Targeted Improvement Partnership" focused on text refinement, while high interaction scores were linked to a "Collaborative Intellectual Partnership" centered on idea development. Conversely, below-average performance was associated with "Basic Information Retrieval" or "Passive Task Delegation". These findings demonstrate that the assessment method (output- vs. process-focused) shapes students' GenAI usage and learning depth: traditional assessment can reinforce text optimization, while process-focused evaluation may reward the exploratory partnership crucial for deeper learning. The DRIVE framework and related taxonomy offer educators a practical tool to design assessments that capture authentic learning in AI-integrated classrooms.

  • Research Article
  • Citations: 1
  • 10.34190/icair.4.1.3025
Generative AI and its Impact on Activities and Assessment in Higher Education: Some Recommendations from Master's Students
  • Dec 4, 2024
  • International Conference on AI Research
  • Peter Mozelius

The rapid development of generative AI (GenAI) raises new questions in higher education, such as: What should the university policy on GenAI be? How ought courses to be redesigned for fair and resilient assessment? What are the added pedagogical and didactical values of involving GenAI in teaching and learning activities? Different universities have rapidly created and presented contradictory standpoints and draft policies, and teachers hold different opinions on the pros and cons of GenAI. This study was carried out from a student perspective, with 16 students examining their own Master's programme on sustainable information provision. The students assessed the assessment in their previous courses in the Master's programme. The aim of the study is to investigate how sustainable course activities and assignments are, and to explore how GenAI tools might support and facilitate teaching and learning activities. Moreover, the students were given the task of testing detection software on GenAI-generated solutions to assignments in chosen Master's courses. Students conducted these tasks as part of a 7.5 ECTS project course in the same Master's programme as the investigated courses. For inspiration and background information on artificial intelligence, students participated in the first Symposium on AI Opportunities and Challenges (SAIOC) in December 2023. Data were gathered from the reports of 3 group projects in which the 16 students investigated 5 freely chosen courses in the programme per group. Besides testing GenAI tools on existing activities and assignments, students also interviewed the subject matter experts responsible for the chosen courses. Results were first analysed and presented in group reports, combined with 16 individual reflection essays.
For the individual essays, students were instructed to bring up ethical perspectives on GenAI in higher education, and to present and discuss suggestions for how the current course design and assignments could be redesigned for improved sustainability and fairness. Finally, all the group reports and individual reflection essays were thematically analysed by the author, who is also the subject matter expert and main teacher for the project course. Findings show that many of the existing assignments in the Master's programme could be partly solved with different GenAI tools. The AI-generated solutions showed different levels of quality and correctness for different types of activities and assignments. An ethical concern raised in many student essays was the relatively poor quality of the tested detection software; one essay asked whether teachers should use detection software with an accuracy rate just above 50% to evaluate student submissions. The recommendations from both the students and the author are to provide clear instructions about when GenAI is and is not allowed in course activities, and to redesign the course structure for continuous assessment. With or without GenAI tools, continuous assessment that covers the whole study path through a course, rather than only isolated submissions, would strengthen fairness and sustainability. Finally, several students suggest oral examinations as a complement to the existing assessment methods, even though their findings showed that GenAI tools can be used to prepare oral presentations.

  • Research Article
  • 10.1080/14680777.2024.2434639
Imagination of humanity’s future: representation and comparison of female cyborg images in generative AI paintings
  • Nov 29, 2024
  • Feminist Media Studies
  • Yuchen Viveka Li

This article explores the representation of the female image in the visual artwork of generative AI and the gender issues within it, using eight generative AI systems as comparative cases: ERNIE-ViLG 2.0, Shuhua, and Yihui from China, and Nightcafe, Fotor, Jasper Art, DALLE, and Deep dream generator from the West. The female cyborg is selected as a representative of female images, using the theoretical framework of posthumanism. The paper first provides research background on generative AI and the cyborg. By inputting the same text into the eight generative AI systems, the paper obtained 107 images across different scenarios: basic images, home, work, and fighting. By comparing and analysing these images horizontally and vertically, the paper explores how Chinese and Western generative AI systems perceive the female cyborg. In conclusion, by choosing the female cyborg as a representation of the female figure, this paper explores how generative AI imagines the future of humanity and how this imagination echoes human imagination in a paradoxical way.

  • Single Book
  • 10.62311/nesx/rb-978-81-978755-6-4
The Future of Work: Automation and Employment Trends
  • Aug 30, 2024
  • Murali Krishna Pasupuleti

Abstract: This research monograph develops a rigorous, evidence-led framework for understanding how automation—from industrial robotics to generative AI—reshapes work, wages, and institutions. Moving beyond occupation-level narratives, the book models production as tasks × skills × technologies and traces the channels through which substitution, augmentation, and workflow reconfiguration affect employment quantities, quality, and inclusion. The analysis integrates historical lessons from earlier technology waves with contemporary firm practices (human–AI teaming, hybrid operations, algorithmic management) and policy design (income security, labor law, skills systems, competition and data governance). Empirically, it specifies measurement standards (task taxonomies, evaluation harnesses, replicable identification strategies) and proposes an open metrics stack for post-deployment audits. Cross-regional casework—from East Asian manufacturing and Central/Eastern European SMEs to Indian shared services, Nordic healthcare, Latin American education, and African digital public services—demonstrates multiple pathways to “high-performance, high-inclusion” diffusion. Scenario analysis to 2035 and 2050 quantifies how diffusion speed and institutional adaptability shape outcomes and yields a pragmatic agenda of “no-regret” investments and option-value policies. The result is a conceptual and operational guide for researchers, practitioners, and policymakers seeking to turn technological change into broad-based productivity, resilience, and dignity at work. 
Keywords: future of work, automation, artificial intelligence, robotics, generative AI, task-based model, augmentation, job redesign, algorithmic management, hybrid work, labor markets, wage polarization, productivity, organizational capability, skills and credentialing, lifelong learning, benefits portability, income security, labor law, competition policy, data governance, evaluation and audits, inclusion and equity, international standards, scenario planning

  • Research Article
  • Citations: 2
  • 10.17803/2713-0533.2024.3.29.415-451
Generative Artificial Intelligence and Legal Frameworks: Identifying Challenges and Proposing Regulatory Reforms
  • Oct 16, 2024
  • Kutafin Law Review
  • A K Sharma + 1 more

This research paper seeks to understand the regulatory deficit arising from generative AI and its potential to redefine various sectors, and suggests modifications to current laws. Generative AI systems can generate distinctive content, such as text, images, or music, by training on available data. The paper highlights how generative AI influences the legal profession in tasks such as contract writing, and how newer language models like GPT-4 and chatbots like ChatGPT and Gemini are evolving. Thus, while generative AI offers numerous opportunities, it also raises concerns about ethical issues, authorship and ownership, privacy, and abuses such as the propagation of deepfakes and fake news. This study focuses attention on the importance of strengthening legal frameworks to answer the ethical issues and challenges linked to generative AI, such as deepfakes, content piracy, discriminatory impact, and outright breaches of privacy. It calls for the proper and sensitive use of generative AI through regulation, openness, and commonly agreed global guidelines. The paper emphasizes that innovations need to be balanced by a set of effective regulations to unleash the potential of generative AI and minimize potential threats.

  • Book Chapter
  • 10.4018/979-8-3693-1351-0.ch018
Transforming Education
  • Feb 7, 2024
  • Andreia Bem Machado + 4 more

Generative AI systems are increasingly present in our daily lives, helping us make crucial decisions. They use machine learning algorithms and tools, fed with millions of data collected from the web, producing entirely new information and generating variations. And this is not just limited to texts — it can produce images, audio, videos, even code, or new programming languages. There are several fields where generative AI can have a considerable impact in the coming years. In this context, the issues proposed in this chapter are: What is generative AI? What is prompt engineering? How to transform education using generative AI and prompt engineering in creating synthetic content? To respond to the research problem, the following objective will be achieved: Investigate how to transform education using generative AI and prompt engineering in the creation of synthetic content. It is concluded that generative AI tools can also help create more efficient exercises. Teachers and educators can use technology to create instructional materials and present summaries of concepts.

  • Research Article
  • Citations: 30
  • 10.3390/info15110697
Privacy-Preserving Techniques in Generative AI and Large Language Models: A Narrative Review
  • Nov 4, 2024
  • Information
  • Georgios Feretzakis + 3 more

Generative AI, including large language models (LLMs), has transformed the paradigm of data generation and creative content, but this progress raises critical privacy concerns, especially when models are trained on sensitive data. This review provides a comprehensive overview of privacy-preserving techniques aimed at safeguarding data privacy in generative AI, such as differential privacy (DP), federated learning (FL), homomorphic encryption (HE), and secure multi-party computation (SMPC). These techniques mitigate risks like model inversion, data leakage, and membership inference attacks, which are particularly relevant to LLMs. Additionally, the review explores emerging solutions, including privacy-enhancing technologies and post-quantum cryptography, as future directions for enhancing privacy in generative AI systems. Recognizing that achieving absolute privacy is mathematically impossible, the review emphasizes the necessity of aligning technical safeguards with legal and regulatory frameworks to ensure compliance with data protection laws. By discussing the ethical and legal implications of privacy risks in generative AI, the review underscores the need for a balanced approach that considers performance, scalability, and privacy preservation. The findings highlight the need for ongoing research and innovation to develop privacy-preserving techniques that keep pace with the scaling of generative AI, especially in large language models, while adhering to regulatory and ethical standards.
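The review names differential privacy among these safeguards. As a minimal illustrative sketch (not code from the review; the helper names `laplace_noise` and `private_count` are hypothetical), the classic Laplace mechanism adds noise calibrated to a query's sensitivity divided by the privacy budget ε, so a counting query with sensitivity 1 satisfies ε-differential privacy:

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) via the inverse-CDF transform of a uniform draw."""
    u = random.random() - 0.5  # uniform on [-0.5, 0.5)
    # Laplace is symmetric, so the sign convention here does not change the distribution.
    return -scale * math.copysign(math.log(1 - 2 * abs(u)), u)

def private_count(records, predicate, epsilon: float) -> float:
    """Differentially private count: a counting query has sensitivity 1,
    so Laplace noise with scale 1/epsilon yields epsilon-DP."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(scale=1.0 / epsilon)

# Toy example: how many records have age >= 40, released with epsilon = 0.5
ages = [23, 35, 41, 29, 52, 47]
noisy = private_count(ages, lambda a: a >= 40, epsilon=0.5)
```

A smaller ε means a larger noise scale and stronger privacy at the cost of accuracy, which is the performance/privacy trade-off the review emphasizes.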

  • Research Article
  • 10.1093/grurint/ikae082
Creation Is Not Like a Box of Chocolates: Why Is the First Judgment Recognizing Copyrightability of AI-Generated Content Wrong?
  • Jul 5, 2024
  • GRUR International
  • Qian Wang

The judgment of the Beijing Internet Court recognizing copyrightability of AI-generated images is flawed for three reasons. First, the judgment treats generative AI as a tool of creation akin to a brush, camera or Photoshop. But generative AI is not a passive means by which the author implements the act of creation and directly produces works; instead, it is actively involved in the decision-making process that determines the substance of the resulting content. Second, the judgment attaches much importance to the creative nature of the text prompts and other inputs of the user of generative AI, while it fails to make the analysis within the framework of the idea/expression dichotomy. Different generative AI systems, and even the same generative AI, may generate completely different images based on exactly the same ‘user’s inputs’. This fact shows that ‘user’s inputs’ are an unprotectable idea in relation to the outcome of the AI production, because a single creative and original idea may lead to a large number of expressions. Third, while acknowledging that the relationship between generative AI and its users is akin to the relationship between the commissioned party and the commissioner during the creation of a painting, the judgment wrongly attributes the user’s authorship of AI-generated content to AI’s lack of free will and legal personality.

  • Research Article
  • 10.7759/cureus.81313
Effects of Introducing Generative AI in Rehabilitation Clinical Documentation.
  • Mar 27, 2025
  • Cureus
  • Kyohei Omon + 4 more

Introduction Healthcare professionals reportedly spend a significant proportion of their working hours on documentation. Therefore, we developed a generative AI solution specialized in creating clinical documentation for rehabilitation. This study aimed to examine the impact of generative AI on clinical documentation tasks. Methods Twelve rehabilitation professionals (physical therapists, occupational therapists, and speech-language pathologists) participated in this study. We compared conventional clinical documentation (Period A) with clinical documentation using a generative AI system (Period B). Measures taken for both periods included time required to complete the clinical documentation (documentation time), workload assessed using the National Aeronautics and Space Administration Task Load Index (NASA-TLX), and quality of the clinical documentation. Between-group comparisons of these measurements were performed. Additionally, we recorded the number of non-conversational voice memos (voice data inputs) in Period B. After the study, we assessed the participants' willingness to adopt generative AI (implementation intent) on a five-point scale. For statistical analysis, we compared documentation time, NASA-TLX scores, and documentation quality between the two periods. Time saved was determined by subtracting the documentation time of Period B from that of Period A, and a correlation analysis between the number of voice memos (voice data input) and the willingness to adopt the technology was conducted. Analyses were performed using R version 4.2.3 (R Core Team, Durham, NC), with the level of significance set at 0.05. Results No significant difference was observed in the time required to prepare clinical documentation between Periods A and B. However, in Period B, the NASA-TLX time pressure score was significantly lower, while the quality of clinical documentation was significantly higher.
Additionally, a strong positive correlation was observed between the reduction in documentation time and the number of voice memos (r = 0.71, p < 0.01), as well as a significant positive correlation with the willingness to adopt the system (r = 0.67, p < 0.05) during clinical documentation in Period B. Conclusion Our findings indicate that using generative AI for clinical documentation tasks can reduce time pressure and improve documentation quality. Moreover, the reduction in documentation time was associated with the frequency of voice memos and the degree of participants' willingness to adopt the system. These results suggest that, to achieve further reductions in workload and costs, considering the motivation and cooperative framework of healthcare professionals when introducing generative AI solutions is essential.
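The correlation analysis reported above was run in R; as a rough illustration of the statistic involved (Pearson's r, computed here on made-up numbers, not the study's data), a minimal stdlib-Python sketch looks like:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: minutes saved per document vs. voice memos recorded
time_saved = [2, 5, 4, 9, 7, 12]
voice_memos = [1, 3, 2, 6, 5, 8]
r = pearson_r(time_saved, voice_memos)
```

A value near +1, as in the study's r = 0.71, indicates that larger time savings co-occurred with more voice memos; the significance tests (p-values) reported in the abstract require an additional step not shown here.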

  • Book Chapter
  • Citations: 3
  • 10.4018/979-8-3693-1565-1.ch010
Navigating the Legal and Ethical Framework for Generative AI
  • Apr 19, 2024
  • Anuttama Ghose + 2 more

Generative AI systems have gained an incredible ability to independently produce a wide variety of content types, including textual, visual, and more. Complex issues with copyright protection and intellectual property rights have arisen as a result of this change. With a focus on fostering responsible global governance, this research delves into the complex legal and ethical considerations underlying Generative AI. The goal of this chapter is to examine the complicated legal issues that arise from Generative AI's ability to generate material on its own. This chapter analyzes the current legal documents, legislation, and international treaties, focusing on ethical concerns. Ultimately, the authors want to have a positive impact on efforts to build responsible and efficient international frameworks for regulating Generative AI. This study provides an exhaustive case for the implementation of legal frameworks that can efficiently tackle the intricate legal and ethical quandaries posed by Generative AI, while simultaneously encouraging the progress of innovation and creativity.
