Abstract
Background: Large language models (LLMs) have recently driven significant advances in natural language processing (NLP) owing to their capabilities in analyzing text. These transformer-based models prove robust in classification tasks where texts need to be analyzed and categorized. Objectives: In this paper, we explore the power of base LLMs such as Generative Pre-trained Transformer 2 (GPT-2), Bidirectional Encoder Representations from Transformers (BERT), DistilBERT, and TinyBERT in diagnosing acute inflammations of the urinary bladder and nephritis of the renal pelvis. Materials and Methods: The LLMs were trained and tested using supervised fine-tuning (SFT) on a dataset of 120 examples that include symptoms that may indicate the occurrence of these two conditions. Results: By employing supervised fine-tuning and carefully crafted prompts to present the data, we demonstrate the feasibility of using minimal training data to achieve reasonable diagnostic performance, with overall testing accuracies of 100%, 100%, 94%, and 79% for GPT-2, BERT, DistilBERT, and TinyBERT, respectively.
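The abstract mentions presenting the tabular symptom data to the models through carefully crafted prompts. A minimal sketch of what such a serialization step might look like is shown below; the feature names and wording are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch: turning one patient's symptom record into a
# natural-language prompt suitable for supervised fine-tuning.
# Feature names below are illustrative assumptions, not the paper's schema.

def build_prompt(record: dict) -> str:
    """Serialize a symptom record into a prompt ending in a diagnosis cue."""
    parts = []
    for name, value in record.items():
        if isinstance(value, bool):
            parts.append(f"{name}: {'yes' if value else 'no'}")
        else:
            parts.append(f"{name}: {value}")
    return f"Patient presents with {', '.join(parts)}. Diagnosis:"

# Example usage with assumed feature names:
record = {"temperature (C)": 38.1, "nausea": True, "lumbar pain": False}
prompt = build_prompt(record)
```

During fine-tuning, each such prompt would be paired with the corresponding diagnostic label so the model learns to complete the "Diagnosis:" cue.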