Abstract

This research examines the capabilities of large language models (LLMs) by combining insights from two distinct studies. The first study, "Linking Microblogging Sentiments to Stock Price Movement: An Application of GPT-4," explores the efficacy of the GPT-4 large language model compared to BERT in modeling same-day daily stock price movements for Apple and Tesla in 2017. The study leverages sentiment analysis of microblogging messages from the Stocktwits platform and employs a novel method for prompt engineering, emphasizing the contextual abilities of GPT-4. Logistic regression is used to evaluate the correspondence between extracted message contents and stock price movements, revealing GPT-4's substantial accuracy: it outperforms BERT in five out of six months. However, practical considerations, including deployment costs and the need to fine-tune prompts, are also acknowledged. The second study, "Do We Still Need BERT in the Age of GPT? Comparing the Benefits of Domain-Adaptation and In-Context-Learning Approaches to Using LLMs for Political Science Research," investigates the choices researchers face when employing LLMs in political science tasks. The study establishes benchmarks for various natural language processing (NLP) tasks within political science and compares two common approaches: domain-adapting smaller LLMs such as BERT with unsupervised pre-training and supervised fine-tuning, and querying larger LLMs such as GPT-3 without additional training. Preliminary results suggest that, when labeled data is available, a fine-tuning-focused approach remains superior for text classification. By synthesizing these studies, this research contributes to the broader understanding of LLMs' capabilities and their applicability in diverse domains. It emphasizes the significance of prompt engineering in unlocking the contextual abilities of modern LLMs, providing insights into financial sentiment analysis and political text classification.
The findings underscore the nuanced choices researchers must make in selecting appropriate LLMs, adapting them to specific domains, and designing effective prompts for optimal performance in different research contexts.
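To make the first study's evaluation step concrete, the sketch below fits a logistic regression linking a daily sentiment score (a stand-in for LLM-extracted Stocktwits sentiment) to a binary same-day price-movement label. This is an illustrative reconstruction under assumed, synthetic data, not the authors' actual pipeline; all variable names and the data-generating process are hypothetical.

```python
import math
import random

random.seed(0)

# Hypothetical data: daily mean sentiment in [-1, 1] (stand-in for
# LLM-extracted message sentiment) and a binary label for same-day
# price movement (1 = up). Labels correlate positively with sentiment.
sentiments = [random.uniform(-1, 1) for _ in range(300)]
labels = [1 if s + random.gauss(0, 0.5) > 0 else 0 for s in sentiments]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Fit a one-feature logistic regression by plain gradient descent.
w, b = 0.0, 0.0
lr = 0.1
for _ in range(500):
    gw = gb = 0.0
    for s, y in zip(sentiments, labels):
        p = sigmoid(w * s + b)
        gw += (p - y) * s
        gb += (p - y)
    w -= lr * gw / len(sentiments)
    b -= lr * gb / len(sentiments)

# In-sample accuracy: how well sentiment alone predicts the movement label.
preds = [1 if sigmoid(w * s + b) >= 0.5 else 0 for s in sentiments]
accuracy = sum(p == y for p, y in zip(preds, labels)) / len(labels)
print(f"in-sample accuracy: {accuracy:.2f}")
```

In the study's setting, the accuracy of such a fit (computed per month and per model) is what allows GPT-4's and BERT's sentiment extractions to be compared on equal footing.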
