Abstract
Social media offers a rich source of real-time health data, including potential vaccine reactions. However, extracting meaningful insights is challenging because social media content is noisy. This paper explores the use of large language models (LLMs) and prompt engineering to detect personal mentions of vaccine reactions. Different prompting strategies were evaluated on two LLMs (GPT-3.5 and GPT-4) using Reddit data focused on shingles (zoster) vaccines. Zero-shot and few-shot learning approaches with both standard and chain-of-thought prompts were compared. The findings demonstrate that GPT-based models with carefully crafted chain-of-thought prompts can identify the relevant social media posts. Few-shot learning helped GPT-4 identify more of the marginal cases, although less precisely. Comparing the LLMs against lightweight supervised pretrained language models (PLMs) showed that the PLMs outperform the LLMs as classifiers. However, the LLMs showed potential value in helping to identify records for training PLMs, especially in eliminating false negatives, and they could serve as classifiers when insufficient data exists to train a PLM.
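To make the prompting setup concrete, the sketch below shows one way a few-shot chain-of-thought classification prompt of the kind described in the abstract could be issued through the OpenAI Python SDK. The prompt wording, the few-shot examples, and the label-parsing convention are illustrative assumptions, not the exact prompts or data used in the study.

```python
# Hypothetical sketch: few-shot chain-of-thought classification of a Reddit post
# as a personal shingles (zoster) vaccine reaction mention.
# Prompt text, example posts, and labels are illustrative assumptions only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You classify Reddit posts. Decide whether the author personally reports "
    "a reaction to a shingles (zoster) vaccine. Think step by step, then end "
    "with a final line 'Label: YES' or 'Label: NO'."
)

# Few-shot examples (made up for illustration, not from the study's annotated data).
FEW_SHOT = [
    {"role": "user",
     "content": "Post: 'Got the shingles shot yesterday, my arm is sore and I had chills all night.'"},
    {"role": "assistant",
     "content": "The author describes their own symptoms after the vaccine. Label: YES"},
    {"role": "user",
     "content": "Post: 'Does anyone know if the shingles vaccine is covered by insurance?'"},
    {"role": "assistant",
     "content": "The post asks about coverage and reports no personal reaction. Label: NO"},
]

def classify(post_text: str, model: str = "gpt-4") -> str:
    """Return 'YES' or 'NO' for a single post."""
    messages = (
        [{"role": "system", "content": SYSTEM_PROMPT}]
        + FEW_SHOT
        + [{"role": "user", "content": f"Post: {post_text!r}"}]
    )
    response = client.chat.completions.create(
        model=model, messages=messages, temperature=0
    )
    reply = response.choices[0].message.content
    # The model is asked to finish with "Label: YES" or "Label: NO".
    return "YES" if reply.strip().upper().endswith("YES") else "NO"
```

Dropping the `FEW_SHOT` messages from the message list would yield the corresponding zero-shot variant; removing the "think step by step" instruction would yield a standard (non-chain-of-thought) prompt.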