Abstract
Large Language Models (LLMs) such as the Generative Pre-trained Transformer (GPT) and Large Language Model Meta AI (LLaMA) have attracted much attention. There is strong evidence that these models perform remarkably well on various natural language processing tasks. However, how to leverage them in domain-specific use cases and drive value remains an open question. Focusing on digital transformation in the pharmaceutical manufacturing space, we propose that leveraging an organization's historical records of manufacturing deviations, a mostly unstructured data source, can benefit productivity, efficiency, quality, and compliance, specifically for addressing and closing new cases, or for de-risking new manufacturing campaigns by identifying common themes and occurrences. Herein, by constructing an industrially relevant dataset, the ability of generative LLMs (e.g., GPT and Claude) and text embedding models to perform tasks related to pharmaceutical manufacturing deviations is studied. Generative models are evaluated for automating knowledge extraction from deviation reports in a mature organization, while embedding models are evaluated for the identification of similar incidents from a large body of historical records using similarity analysis in vector space. Results show highly accurate outcomes for entity extraction tasks, especially with larger models, strong reasoning capabilities, as well as an interplay between the apparent reasoning and hallucination behavior of LLMs. Results also show the ability of embedding models to capture semantics in certain deviation categories. Overall, these findings suggest significant potential for enhancing workflows in pharmaceutical manufacturing through AI-driven tools, while also highlighting important questions that necessitate further research.
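The vector-space similarity analysis described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the record IDs and three-dimensional vectors are hypothetical stand-ins for the embeddings a real text embedding model would produce, and the ranking is a plain cosine-similarity search over historical records.

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def rank_similar(query_vec, records):
    # records: list of (record_id, embedding) pairs.
    # Returns (record_id, score) pairs sorted by decreasing similarity.
    scored = [(rid, cosine_similarity(query_vec, vec)) for rid, vec in records]
    return sorted(scored, key=lambda t: t[1], reverse=True)

# Toy 3-dimensional "embeddings" standing in for model outputs;
# the DEV-* identifiers are hypothetical deviation record IDs.
history = [
    ("DEV-001", [0.9, 0.1, 0.0]),
    ("DEV-002", [0.0, 1.0, 0.2]),
    ("DEV-003", [0.8, 0.2, 0.1]),
]
new_case = [1.0, 0.0, 0.0]  # embedding of a newly reported deviation
ranking = rank_similar(new_case, history)
# Most similar historical record comes first: DEV-001
```

In a production setting the toy vectors would be replaced by high-dimensional embeddings of the deviation report text, and the linear scan by an approximate nearest-neighbor index for large record collections.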