Finding Equilibrium: An Integrative Approach to Balancing Human and Artificial Intelligence in Legal Research
The legal profession is changing as generative artificial intelligence (AI) tools become embedded in legal research and practice. While AI tools promise increased efficiency, they also pose cognitive risks, including overload, overreliance, decision fatigue, and the erosion of metacognitive habits. This article applies cognitive psychology frameworks to examine how AI-assisted research affects legal reasoning and learning. It introduces the concept of intelligence equilibrium, a state in which AI complements rather than replaces human analysis. Drawing on these frameworks, the article proposes instructional strategies to help educators, particularly law librarians, support students in developing adaptable, critical, and ethically grounded AI-augmented research practices. By focusing on how technology interacts with cognitive processes, this article reframes legal research instruction around sustaining critical legal reasoning skills.
- Research Article
28
- 10.5204/mcj.3004
- Oct 2, 2023
- M/C Journal
- Research Article
7
- 10.9734/ajrcos/2024/v17i7491
- Jul 30, 2024
- Asian Journal of Research in Computer Science
With the increasing use of Generative Artificial Intelligence (AI) tools like ChatGPT and Bard, universities face challenges in maintaining academic integrity. This research investigates the impact of these tools on learning outcomes (factual knowledge, comprehension, critical thinking) in selected universities of Ghana's Upper East Region during the 2023-2024 academic year. The study specifically analyzes changes in student comprehension and academic integrity concerns when using Generative AI for content generation, research assistance, and summarizing complex topics. A mixed-methods approach was employed, combining qualitative data from interviews and open-ended questions with quantitative analysis of survey data and academic records. The research focuses on three institutions: C. K. Tedam University of Technology and Applied Sciences, Bolgatanga Technical University, and Regentropfen University College. A purposive sampling technique recruited 150 participants (50 from each university) who had used Generative AI tools. Key findings show that 72% of students reported improved understanding of course material through Generative AI use, yet 75% cited academic integrity as a primary concern. Quantitative analysis revealed a weak to moderate positive correlation (r = 0.45) between AI tool usage and improved grades, with variations depending on the specific AI tasks performed. Qualitative data highlighted concerns about overreliance on AI and its impact on critical thinking skills. This research contributes to the ongoing debate on AI's role in education by providing valuable insights for educators and policymakers worldwide. The findings suggest that while AI tools can enhance comprehension, ethical considerations and potential drawbacks related to critical thinking require careful attention. 
The study concludes with recommendations for integrating AI literacy programs, developing ethical guidelines, and implementing advanced plagiarism detection systems to harness the benefits of Generative AI while mitigating risks to academic integrity. Although specific to the Upper East Region of Ghana, these insights may be applicable to other educational systems with similar characteristics.
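The weak-to-moderate association reported above (r = 0.45) is a Pearson correlation between AI tool usage and grade improvement. As a minimal sketch of how such a coefficient is computed, the following uses hypothetical usage and grade figures, not the study's data:

```python
from math import sqrt

def pearson_r(x, y):
    """Sample Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))  # covariance numerator
    sx = sqrt(sum((a - mx) ** 2 for a in x))              # spread of x
    sy = sqrt(sum((b - my) ** 2 for b in y))              # spread of y
    return cov / (sx * sy)

# Hypothetical data: weekly AI-tool sessions vs. grade change (illustrative only)
usage = [1, 3, 5, 2, 8, 4, 6, 7]
grade_change = [0.2, 1.1, 0.8, -0.3, 1.5, 0.9, 0.4, 1.2]
r = pearson_r(usage, grade_change)  # falls in [-1, 1]; ~0.45 would be weak-to-moderate
```

A value near 0.45 indicates that higher AI tool usage tends to accompany better grades, but with considerable unexplained variation, which is consistent with the study's note that the effect varied by the specific AI tasks performed.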
- Research Article
- 10.55041/ijsrem30862
- Apr 17, 2024
- International Journal of Scientific Research in Engineering and Management
Human beings are endowed with a natural curiosity and creativity that motivate them to learn from their interactions with the world. Human learning has long involved exploration and experimentation, through which humans have discovered new facts and principles and invented new artifacts and systems. It has also shaped human evolution, both genetically and culturally, as humans have adapted to different situations and demands in their environments. Today, however, human learning is increasingly mediated by artificial intelligence (AI) tools: programs that can perform tasks usually requiring human intelligence, such as comprehension, reasoning, problem-solving, and communication. AI tools can support human learning by providing access to enormous amounts of information and by delivering customized, interactive assistance and feedback. They can also amplify human creativity and innovation by generating novel and diverse content, such as code, poems, essays, and songs. But what are the effects of this dependence on AI tools for human learning and evolution? Does it boost or diminish human curiosity and creativity? Does it enable or limit human autonomy and agency? Does it foster or hamper human diversity and collaboration? These are some of the questions this study explores by weighing the pros and cons of using AI tools for human learning and the ethical and social issues that arise from this phenomenon. [28] Looking around us today, advances in technology have brought considerable comfort to our lives, whether in travel, education, or enjoying content virtually. [29] Technology has become so accessible that we can learn almost anything through e-learning, and many once only imagined having an AI that would make everyday life easier.
The latest AI development to gain wide acceptance around the globe is the generative AI chatbot, exemplified by ChatGPT, Gemini, and Copilot. These tools support decision making and shorten the search for answers, whether for lengthy tasks such as writing a summary or for questions that are easy to solve but hard to look up. About a quarter (27%) of Americans say they interact with artificial intelligence almost constantly or several times a day. AI is used in a variety of ways, including online product recommendations, facial recognition software, and chatbots. One in six (17%) adults reported that they can often or always recognise when they are using AI, one in two (50%) that they can recognise it some of the time or occasionally, and one in three (33%) that they can hardly ever or never recognise it. [26] In this project we test dependence on recently emerged AI tools such as ChatGPT, Google Bard, and Bing. Our aim is to find out whether people use these powerful tools only for academics or other tasks, or whether they also take advice from them in their financial planning.
- Research Article
- 10.6087/kcse.352
- Feb 5, 2025
- Science Editing
Purpose: This analysis aims to propose guidelines for artificial intelligence (AI) research ethics in scientific publications, intending to inform publishers and academic institutional policies in order to guide them toward a coherent and consistent approach to AI research ethics. Methods: A literature-based thematic analysis was conducted. The study reviewed the publication policies of the top 10 journal publishers addressing the use of AI in scholarly publications as of October 2024. Thematic analysis using Atlas.ti identified themes and subthemes across the documents, which were consolidated into proposed research ethics guidelines for using generative AI and AI-assisted tools in scholarly publications. Results: The analysis revealed inconsistencies among publishers' policies on AI use in research and publications. AI-assisted tools for grammar and formatting are generally accepted, but positions vary regarding generative AI tools used in pre-writing and research methods. Key themes identified include author accountability, human oversight, recognized and unrecognized uses of AI tools, and the necessity for transparency in disclosing AI usage. All publishers agree that AI tools cannot be listed as authors. Concerns involve biases, quality and reliability issues, compliance with intellectual property rights, and limitations of AI detection tools. Conclusion: The article highlights the significant knowledge gap and inconsistencies in guidelines for AI use in scientific research. There is an urgent need for unified ethical standards, and guidelines are proposed for distinguishing between the accepted use of AI-assisted tools and the cautious use of generative AI tools.
- Discussion
6
- 10.1016/j.ebiom.2023.104672
- Jul 1, 2023
- eBioMedicine
Response to M. Trengove and colleagues regarding "Attention is not all you need: the complicated case of ethically using large language models in healthcare and medicine".
- Research Article
- 10.34190/ecie.19.1.2468
- Sep 20, 2024
- European Conference on Innovation and Entrepreneurship
Marketing scientists and practitioners alike believe that artificial intelligence (AI) holds the promise of productivity gains for organizations, yet there has been little scientific research into these claims. This study investigates the role of AI in enhancing marketing productivity, drawing insights from a case study conducted with the marketing team of an industrial software start-up. Building on Case Study Analysis by Yin (2018) and Participatory Action Research by Kemmis and McTaggart (2007), the study employs a combination of survey interviews, AI tool research, and AI tool testing. Key findings indicate that productivity gains are more likely than productivity impairments with the use of marketing AI tools. This effect is even stronger when knowledge workers possess high levels of AI skills and utilize AI tools with suitable capabilities. Of the six marketing disciplines analyzed closely, SEO/content and design in particular demonstrated significant productivity gains, both with generative AI (GAI) tools the team had already subscribed to, such as ChatGPT 4 and Canva, and with new AI solutions. While an AI tool's level of integration showed only a weak positive productivity impact, future studies are encouraged to investigate this variable further by comparing less advanced but more accessible tools, such as generative AI, with highly advanced but less accessible business AI. Having navigated the vast and dynamic landscape of AI tools, the insights further emphasize the importance of AI experience sharing and informed decision-making, including knowledge of one's own user rights and staying up to date on AI advancements. Zooming out from the process level, the literature review also highlights the role of environmental and organizational AI enablers, such as budget allocation, fostering AI trust and mindset, and implementing AI routines and responsibilities.
Overall, this research underscores the imperative for companies, especially startups and SMEs, to explore AI technology as a means to enhance productivity and gain a competitive edge.
- Research Article
- 10.1007/s44163-025-00316-7
- May 30, 2025
- Discover Artificial Intelligence
The use of Generative Artificial Intelligence (AI) tools in international commercial arbitration intersects in complex ways with the risk of confidential data breaches. Adopting a doctrinal research approach, this article analyses the legal and regulatory framework applicable to ensuring responsible and ethical uses of AI so as to protect confidentiality in international arbitration. It argues that AI has brought a new age of efficiency and accuracy to international arbitration, but also raises concerns about the protection of confidentiality, as third-party-owned AI tools and systems are prone to breaches and confidentiality violations affecting the volumes of data stored within them. The guidelines and principles on the use of AI in international arbitration, as well as emerging AI regulations and laws, take varied approaches that are either discretionary or merely advisory with respect to protecting confidential information in international arbitration. Ultimately, this article recommends that upcoming versions of institutional arbitration rules strengthen the confidentiality obligations in arbitration proceedings, with a focus on the integration of AI tools. Alternatively, through confidentiality orders, arbitration participants must ensure that appropriate safeguards are in place so that confidentiality is a core consideration from the initial stages of deploying AI tools. Confidentiality by design could also be applied to the generative AI tools used by law firms, arbitral tribunals, or institutions.
- Conference Article
1
- 10.54941/ahfe1004957
- Jan 1, 2024
In the dynamic field of programming education, integrating artificial intelligence (AI) tools has started to play a significant role in enhancing learning experiences. This paper presents a case study conducted during a foundational programming course for first-year students in higher education, where students were encouraged to use generative AI programming copilot extensions in their IDE, alongside browser-based generative AI tools, as supportive aids. The primary objective was to observe the impact of AI on the learning curve and the overall educational experience. Key findings suggest that the introduction of AI tools significantly altered the learning experience for students. Many who initially struggled to grasp elementary programming concepts found that AI support made these concepts much easier to understand, enhancing their confidence and skills. This was particularly evident in the reduced levels of anxiety typically associated with early programming learning, as the AI copilot provided a non-judgmental, always-available source for clarifying doubts, including queries that students might hesitate to raise in a traditional classroom setting. Notably, some students leveraged the AI to generate similar exercise problems, reinforcing their understanding and skills. The AI's capability to address basic queries also freed up the instructor's time, allowing for more personalized guidance on more advanced problems. This shift in the instructional dynamic further contributed to a learning environment where students felt more comfortable engaging with complex topics, reducing the psychological barriers often linked to early-stage programming education. The course's structure, enriched by AI, enabled students to delve into more complex programming constructs earlier than traditional curricula would allow.
For instance, students were tasked with simulating basic e-commerce operations, such as user registration, product browsing, and cart functionality. These practical challenges naturally introduced advanced concepts like external data storage, unit testing, and user interface design, which are typically reserved for more advanced courses. With the help of generative AI programming copilot tools, students at any skill level were able to develop nearly functional complex structures. Interestingly, even when their projects were not fully functional, students remained motivated: instead of feeling discouraged by imperfect outcomes, they showed resilience and a keen interest in understanding and improving their code. This reaction marks a significant shift from traditional learning settings, where unfinished or flawed projects often lead to increased anxiety or a drop in motivation. Furthermore, the AI's proactive suggestions inspired students to explore beyond the curriculum. Advanced learners delved into databases, cryptography libraries in Python, and more advanced user interface design, ensuring that they remained engaged and challenged. This elementary course, enhanced by generative AI tools, also inspired students to learn other programming languages, as they discovered that individual learning is more accessible with the aid of generative AI. In conclusion, the integration of AI in programming education offers a promising avenue for enhancing both the learning experience and its outcomes. This case study underscores the potential of AI to revolutionize traditional teaching methodologies, fostering a more dynamic, responsive, and inclusive learning environment. The paper presents the results, possibilities, and challenges of AI-empowered programming education, along with practical examples and future research perspectives.
- Research Article
- 10.20853/39-3-6272
- Jan 1, 2025
- South African Journal of Higher Education
Generative artificial intelligence (AI) tools have sparked debates in the education sector, prompting researchers to explore their desirability and potential in education. This paper acknowledges generative AI's potential to support the delivery of teaching, learning, and research in higher education, emphasising its ability to improve student writing quality as well as academic productivity, success rates, and independence. However, responsible use of these AI tools to support research is also crucial, and the challenges associated with AI tool use, especially accessibility and usage in the African context, must be recognised. These include ethical challenges arising from the (mis)use of AI where policy regulations are absent or inadequate, as well as technical and structural challenges relating to connectivity, power outages, device access, and technical know-how. This paper therefore aims to identify the opportunities and challenges associated with using AI tools to support research in African higher education classrooms. For the study, a qualitative systematic literature review was applied, using thematic analysis, to two articles drawn from a final selection of 29 articles. Findings indicated that generative AI tools could enhance student writing skills and increase productivity, and could lead to research autonomy, improved writing proficiency, quality, and academic throughput. Shortcomings included AI misuse, knowledge deficiencies, and infrastructural challenges preventing AI access, along with inadequate regulations on using generative AI tools for learning and teaching. It is essential to address the ethical concerns, invest in skills development, and promote equitable digital access, especially in Africa, where such access is limited. In addition, the capability approach revealed how the digital divide limits the adoption of generative AI tools in Africa.
- Research Article
- 10.1108/ics-04-2025-0164
- Aug 12, 2025
- Information & Computer Security
Purpose The persistent shortage of cybersecurity professionals, coupled with the consistent increase and complexity of cyberattacks, requires a novel examination of the processes and tasks those professionals perform to cope with their workloads. Research shows that artificial intelligence (AI) tools often target technical rather than managerial tasks, highlighting the need for continued human involvement in cybersecurity management. This study examines how using Generative AI (GenAI) for cybersecurity managerial tasks can help reduce human error and handle repetitive tasks, thereby reducing cybersecurity managers' workloads and allowing them to focus on more strategic aspects of their work. Design/methodology/approach This experimental study used five GenAI platforms: ChatGPT, CoPilot, Gemini, MetaAI and Claude. Each GenAI platform generated a real-life scenario and guidelines for cybersecurity managers associated with a managerial cybersecurity task, and then cross-evaluated the scenarios and guidelines against predetermined metric measures of (1) relevancy, (2) accuracy and reliability, (3) completeness and (4) clarity. Scores were generated by each of the five GenAI platforms on the four metric measures, ranging from 1 = very low to 10 = very high, and were then averaged across all measures and all five platforms for an overall metric score from 1 to 10. Analysis of variance was conducted to test for mean differences. Findings The experimental results indicated a statistically significant mean difference in the scores received across scenarios (F = 7.841, df = 4, p < 0.001). Specifically, the scenario generated by Claude achieved the highest overall average score (9.3), followed by Gemini (9.0), MetaAI (8.9), ChatGPT (8.7) and CoPilot (8.5). In general, the scenario generated by Claude consistently performed well across all metrics as rated by all five GenAI platforms.
Practical implications The rapid integration of GenAI capabilities into everyday activity suggests that cybersecurity managers should now be trained to use AI tools in their daily operations to alleviate their workloads. That said, the ethical issues and risks of using GenAI for cybersecurity managerial tasks must be further studied. Social implications The cybersecurity workforce shortage was reported to exceed 4 million workers worldwide in 2024 and is estimated to exceed 5 million by the end of 2025. Thus, it is important to further understand the role of AI in improving the efficiency of managerial cybersecurity tasks. Originality/value The value of this research lies in three facets: first, the demonstration of using GenAI to perform managerial cybersecurity tasks; second, the novel methodology, in which the GenAI platforms assess the outputs by self- and cross-evaluating them; and finally, the development of novel metrics for assessing managerial tasks, which can be of great value for researchers and industry.
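The scoring scheme described above (four metrics per scenario, rated by five platforms, averaged into an overall score, then compared with an analysis of variance) can be sketched in a few lines. The platform names come from the abstract; the individual scores and the helper function are hypothetical illustrations, not the study's data or code.

```python
from statistics import mean

def one_way_anova_f(groups):
    """One-way ANOVA F statistic across lists of scores (one list per scenario)."""
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand = mean(x for g in groups for x in g)
    ssb = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)  # between-group SS
    ssw = sum((x - mean(g)) ** 2 for g in groups for x in g)    # within-group SS
    return (ssb / (k - 1)) / (ssw / (n_total - k))

# Hypothetical 1-10 scores per scenario: one rating per evaluating platform,
# each already averaged over the four metrics (relevancy, accuracy/reliability,
# completeness, clarity). Chosen so the overall means match the abstract.
scores = {
    "Claude":  [9.5, 9.2, 9.4, 9.1, 9.3],
    "Gemini":  [9.1, 8.9, 9.0, 9.0, 9.0],
    "MetaAI":  [8.8, 9.0, 8.9, 8.8, 9.0],
    "ChatGPT": [8.6, 8.8, 8.7, 8.6, 8.8],
    "CoPilot": [8.4, 8.6, 8.5, 8.4, 8.6],
}
overall = {name: round(mean(vals), 1) for name, vals in scores.items()}
f_stat = one_way_anova_f(list(scores.values()))  # df between = 5 - 1 = 4
```

With five scenario groups, the between-group degrees of freedom are 4, matching the df reported in the abstract; the F value itself depends on the raw scores, which the abstract does not provide.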
- Research Article
22
- 10.62411/jcta.9447
- Feb 26, 2024
- Journal of Computing Theories and Applications
Generative artificial intelligence tools have recently attracted a great deal of attention because of their substantial advantages, including ease of use, quick generation of answers to requests, and their human-like intelligence. This paper presents a vivid comparative analysis of the top nine generative artificial intelligence (AI) tools, namely ChatGPT, Perplexity AI, YouChat, ChatSonic, Google's Bard, Microsoft Bing Assistant, HuggingChat, Jasper AI, and Quora's Poe, paying attention to the pros and cons each tool presents. The analysis shows that the generative AI tools' pros outweigh their cons. Further, we explore the transformative impact of generative AI in Natural Language Processing (NLP), focusing on its integration with search engines, privacy concerns, and ethical implications. A comparative analysis categorizes generative AI tools by popularity and evaluates challenges in development, including data limitations and computational costs. The study highlights ethical considerations such as technology misuse and regulatory challenges. Additionally, we delve into AI planning techniques in NLP, covering classical planning, probabilistic planning, hierarchical planning, temporal planning, knowledge-driven planning, and neural planning models; these planning approaches are vital in achieving specific goals in NLP tasks. In conclusion, we provide a concise overview of the current state of generative AI, including its challenges, ethical considerations, and potential applications, contributing to the academic discourse on human-computer interaction.
- Research Article
6
- 10.1057/s41599-024-03968-5
- Jan 6, 2025
- Humanities and Social Sciences Communications
This study aims to understand how widely used Artificial Intelligence (AI) tools reflect the cultural context through the built environment. This research explores how outputs obtained with ChatGPT-4o, Midjourney’s bot on Discord and Google Maps represent the cultural context of Stockholm, Sweden. Cultural context is important because it shapes people’s identity, behaviour, and power dynamics. AI-generated recommendations and images of Stockholm’s cultural context were compared with real photographs, GIS demographic data and socio-economic information about the city. Results show how outputs written with ChatGPT-4o mostly listed museums and other venues popular among visitors, while Midjourney’s bot mostly represented cafes, streets, and furniture, reflecting a cultural context heavily shaped by buildings, consumption and commercial interests. Google Maps shows commercial sites while also enabling users to directly add information about places, like opinions, photographs and the main features of a business. These AI perspectives on cultural context can broaden the understanding of the urban environment and facilitate a deeper insight into the prevailing ideas behind the data that train these algorithms. Results suggest that the generative AI systems analysed convey a narrow view of the cultural context, prioritising buildings and a sense of cultural context that is curated, exhibited and commercialised. Generative AI tools could jeopardise cultural diversity by prioritising some ideas and places as “cultural”, exacerbating power relationships and even aggravating segregation. Consequently, public institutions should promote further discussion and research on AI tools, and help users combine AI tools with other forms of knowledge. The providers of AI systems should ensure more inclusivity in AI training data, facilitate users’ writing of prompts and disclose the limitations of their data sources. 
Despite the current potential reduction of diversity of the cultural context, AI providers have a unique opportunity to produce more nuanced outputs, which promote more societal diversity and equality.
- Research Article
- 10.1002/pra2.1406
- Oct 1, 2025
- Proceedings of the Association for Information Science and Technology
The intersection of technology and the legal profession has evolved significantly, with legal practitioners using tools like Westlaw, LexisNexis, and Google Search for legal research. More recently, artificial intelligence (AI) tools, such as ChatGPT, have been integrated into legal practice, offering both promise and challenges. In particular, incidents like the Mata v. Avianca case, where attorneys faced sanctions for submitting fictitious citations generated by ChatGPT, highlight the risks of relying on AI in legal work. While courts have issued guidelines for the responsible use of AI, there remains a clear need for due diligence and a thorough understanding of its limitations. This study evaluates the effectiveness of AI tools in legal research by presenting two complex legal questions, one involving U.S. asylum law and the other concerning Colombian legal principles. The responses from ChatGPT and Google's AI were analyzed for accuracy and consistency, revealing significant gaps in both tools' performance, particularly in providing complete and current legal information. While they can assist in research, they are not yet reliable enough to replace expert legal analysis. The findings suggest that legal professionals should approach AI-generated legal information with caution and verify results with human expertise.
- Research Article
8
- 10.1177/02734753241302459
- Dec 23, 2024
- Journal of Marketing Education
The integration of generative artificial intelligence (AI) tools is a paradigm shift in enhanced learning methodologies and assessment techniques. This study explores the adoption of generative AI tools in higher education assessments by examining the perceptions of 353 students through a survey and 17 in-depth interviews. Anchored in the Unified Theory of Acceptance and Use of Technology (UTAUT), this study investigates the roles of perceived risk and tech-savviness in the use of AI tools. Perceived risk emerged as a significant deterrent, while trust and tech-savviness were pivotal in shaping student engagement with AI. Tech-savviness not only influenced adoption but also moderated the effect of performance expectancy on AI use. These insights extend UTAUT’s application, highlighting the importance of considering perceived risks and individual proficiency with technology. The findings suggest educators and policymakers need to tailor AI integration strategies to accommodate students’ personal characteristics and diverse needs, harnessing generative AI’s opportunities and mitigating its challenges.
- Research Article
3
- 10.1108/lhtn-08-2024-0131
- Sep 17, 2024
- Library Hi Tech News
Purpose The purpose of the paper is to explore the rapidly evolving landscape of artificial intelligence (AI) tools in academic research, highlighting their potential to transform various stages of the research process. AI tools are transforming academic research, offering numerous benefits and challenges. Design/methodology/approach Academic research is undergoing a significant transformation with the emergence of AI tools, which have the potential to revolutionize various aspects of research, from literature review to writing and proofreading. The paper gives an overview of AI applications in literature review, data analysis, writing, and proofreading, discussing their benefits and limitations. A comprehensive review of existing literature on AI applications in academic research was conducted, focusing on tools and platforms used at various stages of the research process; AI was itself used in some of the searches for AI applications in use. Findings The analysis reveals that AI tools can enhance research efficiency, accuracy, and quality, but also raise important ethical and methodological considerations. AI tools have the potential to significantly enhance academic research, but their adoption requires careful consideration of methodological and ethical implications, and their integration raises questions about authorship, accountability, and the role of human researchers. The authors conclude by outlining future directions for AI integration in academic research and emphasizing the need for responsible adoption. Originality/value As AI continues to evolve, it is essential for researchers, institutions, and policymakers to address the ethical and methodological implications of AI adoption, ensuring responsible integration and harnessing the full potential of AI tools to advance academic research. This is the paper's contribution to knowledge.
- Research Article
- 10.1080/0270319x.2025.2534230
- Jul 25, 2025
- Legal Reference Services Quarterly
- Research Article
- 10.1080/0270319x.2025.2534228
- Jul 24, 2025
- Legal Reference Services Quarterly
- Research Article
- 10.1080/0270319x.2025.2536920
- Jul 24, 2025
- Legal Reference Services Quarterly
- Front Matter
- 10.1080/0270319x.2025.2536918
- Jul 23, 2025
- Legal Reference Services Quarterly
- Research Article
- 10.1080/0270319x.2025.2534229
- Jul 19, 2025
- Legal Reference Services Quarterly
- Front Matter
- 10.1080/0270319x.2025.2497190
- Apr 3, 2025
- Legal Reference Services Quarterly
- Research Article
- 10.1080/0270319x.2025.2495979
- Apr 3, 2025
- Legal Reference Services Quarterly
- Research Article
- 10.1080/0270319x.2025.2488092
- Apr 3, 2025
- Legal Reference Services Quarterly
- Research Article
- 10.1080/0270319x.2025.2491280
- Apr 3, 2025
- Legal Reference Services Quarterly
- Research Article
- 10.1080/0270319x.2025.2452717
- Jan 2, 2025
- Legal Reference Services Quarterly