Language and Generative AI: A New Paradigm of Organizational Research
Abstract: Language is not merely a medium of communication but a constitutive force in organization management. Three decades after the first “linguistic turn” in organization studies, generative artificial intelligence (GenAI) and large language models (LLMs) are provoking a second, data-intensive turn that reconfigures the relationship between language, technology, and management. LLMs now operate as discursive actors that simulate, generate, and transform organizational communication. This paper advances algorithmic discourse research as a new paradigm for studying language in organizations. It reframes methodological rigor as pluralistic and reflexive, combining computational scale with interpretive depth. It retains traditional standards of evidence while extending them to encompass ethical and contextual reflexivity, acknowledging that meaning, data, and validity are co-constructed. An integrated multilevel framework links micro-linguistic forms (lexical, metaphorical, modal), meso-level routines and narratives, and macro-level outcomes such as innovation, trust, and performance. The new paradigm expands the methodological and epistemological foundations of organizational research by positioning language as both data and process, and LLMs as analytic partners in the study of sensemaking. In doing so, it marks a shift from observing discourse to co-engaging with algorithmic language, opening new avenues for understanding how organizations think, communicate, and act in the age of AI.
- Research Article
- 10.1287/ijds.2023.0007
- Apr 1, 2023
- INFORMS Journal on Data Science
How Can IJDS Authors, Reviewers, and Editors Use (and Misuse) Generative AI?
- Research Article
- 10.55632/pwvas.v96i1.1063
- Apr 18, 2024
- Proceedings of the West Virginia Academy of Science
CAMERON VU, DARIA PANOVA, JOSIAH KOWALSKI, Dr. W. LIAO (Faculty Advisor), and Dr. O. Guzide (Faculty Advisor), Dept of Computer Science and Math & ENGR, Shepherd University, Shepherdstown, WV, 25443. Smart Parking Space Detection with Generative Artificial Intelligence and Large Language Models. The increasing relevance of generative AI and large language models is reshaping various sectors of modern society. These advancements have spurred notable progress in fields such as healthcare, finance, and education. Yet, the application of AI extends beyond expert domains, offering simplified solutions to everyday tasks for the general populace. This project harnesses the power of generative artificial intelligence and large language models to develop a practical application: smart parking space detection. By leveraging these technologies, individuals can effortlessly ascertain the availability of parking spots in monitored lots via camera or photographic monitoring, facilitated by a straightforward algorithm. The overarching objective is twofold: to engineer a user-friendly system utilizing generative AI principles and to demonstrate the potential for such technologies to enhance the daily experiences of ordinary individuals.
- Research Article
- 10.55041/isjem03936
- Jun 3, 2025
- International Scientific Journal of Engineering and Management
The advent of large language models and generative artificial intelligence has completely changed the way we generate and understand language, marking the beginning of a new phase in AI-driven applications. This review paper surveys the advancements that have occurred over time, providing a thorough assessment of generative artificial intelligence and large language models while also examining their transformative potential across different areas. The first section of the review focuses on the evolution of large language models and generative AI, with attention to developments in models such as GPT-4. These models have repeatedly demonstrated their capability across applications in various sectors, from automated content generation to accurate conversational agents, and are characterized by their ability to produce text that is both coherent and contextually appropriate. However, despite these strengths, generative artificial intelligence and large language models face critical ethical, technological, and societal issues. A central concern arises from the biases present in training data, which can lead to social inequalities. We examine the causes of these biases and their implications, stressing the need for comprehensive frameworks to identify and mitigate them. Keywords: backpropagation, BERT, diffusion models, explainable AI (XAI), generative AI, image synthesis, long short-term memory (LSTM), natural language processing (NLP), neural network, recurrent neural network (RNN), small language model (SLM), and transformer model.
- Research Article
- 10.1016/j.compbiolchem.2025.108611
- Feb 1, 2026
- Computational biology and chemistry
Generative artificial intelligence and large language models in smart healthcare applications: Current status and future perspectives.
- Conference Article
- 10.2118/222046-ms
- Nov 4, 2024
In today's dynamic and competitive oil and gas industry, the integration of Artificial Intelligence (AI) has emerged as a game-changer, offering unparalleled opportunities for optimization, cost reduction, and operational excellence. The main objective of autonomous operations is to minimize manual interactions and maximize self-directed plant operations. ADNOC Onshore has implemented generative AI agents in daily maintenance and production operations to boost workforce productivity in the journey of achieving autonomous operations. This paper explains the use cases, challenges, AI architecture, and data security in deployment. Natural language understanding comprises a wide range of diverse tasks such as textual entailment, question answering, semantic similarity assessment, and document classification. GPT-4 Turbo is a large multimodal model (accepting text or image inputs and generating text) that can solve difficult problems with greater accuracy and advanced reasoning capabilities. The scope includes empowering reliability, maintenance, and operations professionals to draw insights from equipment manuals, asset operating manuals and operating procedures, maintenance records, and safety & integrity manuals. This in-house solution provides support across structured and unstructured data, an LLM-agnostic architecture, deterministic responses with source references, and granular access controls. The solution has been integrated with the ERP SAP system and the sensor time-series PI system data historian for integrated context. A unique automated contextualization engine, based on oil-and-gas-specific vocabulary, brings context to operations. A conversational interactive agent has been built for user interactions. The maintenance and operations engineer can receive suggestions on the proper steps to identify the root cause based on OEM product manuals, previous events, and current performance.
This Generative AI solution accelerates time to insight for operators by equipping teams to streamline maintenance operations and investigate maintenance records with generative AI to troubleshoot operational challenges more efficiently. An internal study showed that operational productivity increased by 20% after the solution's implementation. For the model to understand industrial environments, it would require retraining on industrial data. Using existing models on uncontextualized, unstructured industrial data significantly increases the risk of incorrect and untrustworthy answers, referred to as AI hallucinations. Another significant challenge lies in the dependence on the quality and quantity of available training data: AI models require extensive and representative datasets to produce accurate and reliable predictions. Large language models are a type of artificial intelligence (AI) model designed to understand and generate human language. These models are built upon deep learning architectures, particularly transformer architectures. Generative AI can play a significant role in oil and gas asset operations towards the goal of achieving autonomous operations.
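The "deterministic responses with source references" pattern described in this abstract can be sketched in a deliberately simplified form: retrieval that always returns the provenance of the text it answers from. The document names and contents below are hypothetical illustrations, not ADNOC's actual system.

```python
# Toy sketch of source-referenced retrieval: the answer to a query is the
# best-matching document chunk, returned together with the manual it came
# from, so every response is traceable to a source.

def tokenize(text):
    return set(text.lower().split())

def answer_with_source(query, corpus):
    """Return the chunk with the largest token overlap, plus its source."""
    q = tokenize(query)
    best = max(corpus, key=lambda chunk: len(q & tokenize(chunk["text"])))
    return {"answer": best["text"], "source": best["source"]}

# Hypothetical document chunks standing in for OEM and safety manuals.
corpus = [
    {"source": "pump_oem_manual.pdf",
     "text": "High vibration on the pump usually indicates bearing wear."},
    {"source": "safety_manual.pdf",
     "text": "Isolate the equipment before any maintenance intervention."},
]

result = answer_with_source("why is the pump showing high vibration", corpus)
# result["source"] identifies the manual backing the answer.
```

A production system would replace the keyword overlap with embedding search and pass the retrieved chunk to an LLM, but the design point is the same: the source reference travels with the answer.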
- Research Article
- 10.1038/s41746-025-01565-7
- Mar 28, 2025
- npj Digital Medicine
Medication-related harm has a significant impact on global healthcare costs and patient outcomes. Generative artificial intelligence (GenAI) and large language models (LLMs) have emerged as promising tools for mitigating risks of medication-related harm. This review evaluates the scope and effectiveness of GenAI and LLMs in reducing medication-related harm. We screened 4 databases for literature published from 1st January 2012 to 15th October 2024. A total of 3988 articles were identified, and 30 met the criteria for inclusion in the final review. Generative AI and LLMs were applied in three key areas: drug-drug interaction identification and prediction, clinical decision support, and pharmacovigilance. While the performance and utility of these models varied, they generally showed promise in early identification and classification of adverse drug events and in supporting decision-making for medication management. However, no studies tested these models prospectively, suggesting a need for further investigation into integration and real-world application.
- Research Article
- 10.2196/75452
- Jan 12, 2026
- JMIR Medical Education
Background: In recent years, generative artificial intelligence and large language models (LLMs) have rapidly advanced, offering significant potential to transform medical education. Several studies have evaluated the performance of chatbots on multiple-choice medical examinations. Objective: The study aims to assess the performance of two LLMs, GPT-4o and OpenAI o1, on the Médico Interno Residente (MIR) 2024 examination, the Spanish national medical test that determines eligibility for competitive medical specialist training positions. Methods: A total of 176 questions from the MIR 2024 examination were analyzed. Each question was presented individually to the chatbots to ensure independence and prevent memory retention bias. No additional prompts were introduced to minimize potential bias. For each LLM, response consistency under verification prompting was assessed by systematically asking, “Are you sure?” after each response. Accuracy was defined as the percentage of correct responses compared to the official answers provided by the Spanish Ministry of Health. It was assessed for GPT-4o, OpenAI o1, and, as a benchmark, for a consensus of medical specialists and for the average MIR candidate. Subanalyses included performance across different medical subjects, question difficulty (quintiles based on the percentage of examinees correctly answering each question), and question types (clinical cases vs theoretical questions; positive vs negative questions). Results: Overall accuracy was 89.8% (158/176) for GPT-4o and 90.9% (160/176) after verification prompting, 92.6% (163/176) for OpenAI o1 and 93.2% (164/176) after verification prompting, 94.3% (166/176) for the consensus of medical specialists, and 56.8% (100/176) for the average MIR candidate. Both LLMs and the consensus of medical specialists outperformed the average MIR candidate across all 20 medical subjects analyzed, with ≥80% LLM accuracy in most domains.
A performance gradient was observed: LLMs’ accuracy gradually declined as question difficulty increased. Slightly higher accuracy was observed for clinical cases compared to theoretical questions, as well as for positive questions compared to negative ones. Both models demonstrated high response consistency, with near-perfect agreement between initial responses and those after the verification prompting. Conclusions: These findings highlight the excellent performance of GPT-4o and OpenAI o1 on the MIR 2024 examination, demonstrating consistent accuracy across medical subjects and question types. The integration of LLMs into medical education presents promising opportunities and is likely to reshape how students prepare for licensing examinations and change our understanding of medical education. Further research should explore how the wording, language, prompting techniques, and image-based questions can influence LLMs’ accuracy, as well as evaluate the performance of emerging artificial intelligence models in similar assessments.
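The verification-prompting protocol this study describes (answer each question, re-ask "Are you sure?", then score both accuracy and initial/verified agreement) can be sketched as follows; `ask_model` is a hypothetical stub standing in for a real chatbot API call, and the questions and answers are invented.

```python
# Sketch of the verification-prompting evaluation loop: score initial
# accuracy, accuracy after an "Are you sure?" follow-up, and the
# consistency between the two responses.

def ask_model(question, verify=False):
    # Stub: a real implementation would call an LLM API here and, when
    # verify=True, send the "Are you sure?" follow-up in the same chat.
    canned = {"Q1": "A", "Q2": "B", "Q3": "C"}
    return canned[question]

def evaluate(questions, official_answers):
    correct_initial = correct_verified = agree = 0
    for q, truth in zip(questions, official_answers):
        first = ask_model(q)
        second = ask_model(q, verify=True)  # response after "Are you sure?"
        correct_initial += first == truth
        correct_verified += second == truth
        agree += first == second
    n = len(questions)
    return {"accuracy": correct_initial / n,
            "verified_accuracy": correct_verified / n,
            "consistency": agree / n}

scores = evaluate(["Q1", "Q2", "Q3"], ["A", "B", "D"])
```

Presenting each question in a fresh session, as the study does, avoids the memory-retention bias the protocol is designed to prevent.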
- Conference Article
- 10.2118/221883-ms
- Nov 4, 2024
In the dynamic landscape of oil and gas drilling, Generative Artificial Intelligence (Generative AI) emerges as an indispensable ally, leveraging historical drilling data to revolutionize operational efficiency, mitigate risks, and empower informed decision-making. Existing Generative AI methods and tools, such as Large Language Models (LLMs) and agents, require tuning and customization for the oil and gas drilling sector. Applying Generative AI in drilling confronts hurdles such as ensuring data quality and navigating the complexity of operations. A methodology integrating Generative AI into drilling must therefore be comprehensive and interdisciplinary. The agile strategy revolves around constructing a network of specialized LLM agents, meticulously crafted to understand industry-specific terminology and intricate operational relationships rooted in drilling domain expertise. Every agent is linked to manuals, standards, and a specific operational drilling data source, and has unique instructions optimizing computational efficiency and driving cost savings. Moreover, to ensure cost-effectiveness, LLMs are selectively employed, while repetitive user inquiries are addressed through data retrieval from aggregated storage. Consistent responses to user queries are provided through text and graphs revealing insights from drilling operations, standards, manuals, practices, and lessons learned. The applied methodology efficiently navigates the pre-processed user database, relying on the custom agents developed. Communication with the user takes the form of a chat within a web application, and queries on a database covering hundreds of wells are answered in less than a minute. The methodology can analyze data and graphs by comparing Key Performance Indicators (KPIs).
A wide range of graph output is represented by bar charts, scatter plots, and maps, including self-explaining charts such as the Time versus Depth (TVD) curve with Non-Productive Time (NPT) events marked with details underneath. Understanding the data content, data preparation steps, and user needs is fundamental to successful application of the methodology. The proposed Generative AI methodology is not just a tool for data interpretation but a catalyst for real-time decision-making in complex drilling environments. Its integration into oil and gas drilling operations signifies a pivotal advancement, showcasing its transformative potential in revolutionizing the industry's landscape. This approach leads to notable cost reductions, improved resource utilization, and increased productivity, paving the way for a new era in drilling operations. A method driven by selective, cost-effective, and domain-specific LLM agents stands poised to revolutionize drilling operations, seamlessly integrating generative AI to amplify efficiency and propel informed decision-making within the oil and gas drilling sector.
- Research Article
- 10.1007/s40368-025-01012-x
- Feb 22, 2025
- European Archives of Paediatric Dentistry
Purpose: The use of large language models (LLMs) in generative artificial intelligence (AI) is rapidly increasing in dentistry. However, their reliability is yet to be fully established. This study aims to evaluate the diagnostic accuracy, clinical applicability, and patient education potential of LLMs in paediatric dentistry by evaluating the responses of six LLMs: Google AI’s Gemini and Gemini Advanced, OpenAI’s ChatGPT-3.5, -4o and -4, and Microsoft’s Copilot. Methods: Ten open-type clinical questions relevant to paediatric dentistry were posed to the LLMs. The responses were graded by two independent evaluators from 0 to 10 using a detailed rubric. After 4 weeks, answers were reevaluated to assess intra-evaluator reliability. Statistical comparisons used Friedman’s, Wilcoxon’s, and Kruskal–Wallis tests to determine which model provided the most comprehensive, accurate, explicit, and relevant answers. Results: Variations in results were noted. ChatGPT-4 answers were scored as the best (average score 8.08), followed by the answers of Gemini Advanced (8.06), ChatGPT-4o (8.01), ChatGPT-3.5 (7.61), Gemini (7.32), and Copilot (5.41). Statistical analysis revealed that ChatGPT-4 outperformed all other LLMs, and the difference was statistically significant. Despite variations and different responses to the same queries, remarkable similarities were observed. Except for Copilot, all chatbots managed to achieve a score above 6.5 on all queries. Conclusion: This study demonstrates the potential use of LLMs in supporting evidence-based paediatric dentistry. Nevertheless, they cannot be regarded as completely trustworthy. Dental professionals should critically use AI models as supportive tools and not as a substitute for overall scientific knowledge and critical thinking.
- Research Article
- 10.1016/j.compchemeng.2024.108723
- May 9, 2024
- Computers and Chemical Engineering
Generative AI and process systems engineering: The next frontier
- Research Article
- Feb 1, 2025
- Radiologic technology
To compare the performance of multiple large language models (LLMs) on a practice radiography certification exam. Using an exploratory, nonexperimental approach, 200 multiple-choice question stems and options (correct answers and distractors) from a practice radiography certification exam were entered into 5 LLMs: ChatGPT (OpenAI), Claude (Anthropic), Copilot (Microsoft), Gemini (Google), and Perplexity (Perplexity AI). Responses were recorded as correct or incorrect, and overall accuracy rates were calculated for each LLM. McNemar tests determined if there were significant differences between accuracy rates. Performance also was evaluated and aggregated by content categories and subcategories. ChatGPT had the highest overall accuracy of 83.5%, followed by Perplexity (78.9%), Copilot (78.0%), Gemini (75.0%), and Claude (71.0%). ChatGPT had a significantly higher accuracy rate than did Claude (P < .001) and Gemini (P = .02). Regarding content categories, ChatGPT was the only LLM to correctly answer all 38 patient care questions. In addition, ChatGPT had the highest number of correct responses in the areas of safety (38/48, 79.2%) and procedures (50/59, 84.7%). Copilot had the highest number of correct responses in the area of image production (43/55, 78.2%). ChatGPT also achieved superior accuracy in 4 of the 8 subcategories. Findings from this study provide valuable insights into the performance of multiple LLMs in answering practice radiography certification exam questions. Although ChatGPT emerged as the most accurate LLM for this practice exam, caution should be exercised when using generative artificial intelligence (AI) models. Because LLMs can generate false and incorrect information, responses must be checked for accuracy, and the models should be corrected when inaccurate responses are given. Among the 5 LLMs compared in this study, ChatGPT was the most accurate model.
As interest in generative AI continues to increase and new language applications become readily available, users should understand the limitations of LLMs and check responses for accuracy. Future research could include additional practice exams in other primary pathways, including magnetic resonance imaging, nuclear medicine technology, radiation therapy, and sonography.
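The McNemar test used in this study pairs the two models' answers question by question and tests whether the discordant counts (questions only one model answered correctly) are symmetric. A minimal exact version, applied here to made-up correctness vectors rather than the study's data, can be written with the binomial distribution:

```python
# Exact (binomial) McNemar test on paired correctness vectors: under the
# null hypothesis, each discordant question is equally likely to favor
# either model, so the smaller discordant count follows Binomial(n, 0.5).
from math import comb

def mcnemar_exact(model_a_correct, model_b_correct):
    """Two-sided exact McNemar p-value from paired correctness vectors."""
    b = sum(a and not x for a, x in zip(model_a_correct, model_b_correct))
    c = sum(x and not a for a, x in zip(model_a_correct, model_b_correct))
    n = b + c                      # total discordant questions
    if n == 0:
        return 1.0                 # no disagreement, no evidence either way
    k = min(b, c)
    tail = sum(comb(n, i) for i in range(k + 1)) * 0.5 ** n
    return min(1.0, 2 * tail)      # double the one-sided tail, cap at 1

# Illustrative example: models disagree on 10 questions, 9 favoring A.
p = mcnemar_exact([True] * 9 + [False], [False] * 9 + [True])
```

For the large samples in the study, the chi-square approximation would give similar results; the exact form avoids that approximation on small discordant counts.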
- Research Article
- 10.1016/j.imu.2024.101533
- Jan 1, 2024
- Informatics in Medicine Unlocked
Generative AI and large language models: A new frontier in reverse vaccinology
- Research Article
- 10.1016/j.ridd.2025.104970
- May 1, 2025
- Research in developmental disabilities
Attributional patterns toward students with and without learning disabilities: Artificial intelligence models vs. trainee teachers.
- Research Article
- 10.69554/kzrs2422
- Sep 1, 2023
- Journal of AI, Robotics & Workplace Automation
This paper introduces a new field of AI research called machine unlearning and examines the challenges and approaches to extending machine unlearning to generative AI (GenAI). Machine unlearning is a model-driven approach to making an existing artificial intelligence (AI) model unlearn a set of data from its training. Machine unlearning is becoming important for businesses to comply with privacy laws such as the General Data Protection Regulation (GDPR) customer’s right to be forgotten, to manage security, and to remove bias that AI models learn from their training data, as it is expensive to retrain and redeploy models without the biased or security- or privacy-compromising data. This paper presents the state of the art in machine unlearning approaches such as exact unlearning, approximate unlearning, zero-shot learning (ZSL), and fast and efficient unlearning. The paper highlights the challenges in applying machine unlearning to GenAI, which is built on a transformer architecture of neural networks and adds more opaqueness to how large language models (LLMs) learn in pre-training, fine-tuning, transfer learning to more languages, and inference. The paper elaborates on how models retain learning in a neural network, to guide the various machine unlearning approaches for GenAI, which the authors hope can be built upon. The paper suggests possible future directions of research to create transparency in LLMs, and particularly looks at hallucinations in LLMs when they are extended to do machine translation for new languages beyond their training with ZSL, to shed light on how a model stores its learning of newer languages in its memory and how it draws upon it during inference in GenAI applications. Finally, the paper calls for collaborations for future research in machine unlearning for GenAI, particularly LLMs, to add transparency and inclusivity to language AI.
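Exact unlearning, the first approach this paper surveys, can be illustrated with a trivially retrainable toy model: deleting a record and retraining from scratch provably yields the same model as never having seen the record. The mean predictor below is a deliberately simple stand-in for the expensive neural-network retraining the paper describes, not a method from the paper itself.

```python
# Toy illustration of exact unlearning: the "model" is just the mean of
# its training data, so retraining after deletion is cheap and the
# unlearning guarantee can be checked directly.

def train(data):
    # Stand-in model: predicts the mean of its training data.
    return sum(data) / len(data)

def exact_unlearn(data, forget):
    """Remove the forgotten records and retrain from scratch."""
    remaining = [x for x in data if x not in set(forget)]
    return train(remaining), remaining

data = [1.0, 2.0, 3.0, 10.0]
model_after_unlearning, remaining = exact_unlearn(data, forget=[10.0])
model_never_trained_on_it = train([1.0, 2.0, 3.0])
# Exact unlearning guarantees the two models coincide.
```

For LLMs this retrain-from-scratch guarantee is exactly what is too expensive, which is why the paper's approximate, zero-shot, and fast unlearning approaches trade the guarantee for efficiency.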