Abstract

In this study we test ChatGPT-4's ability to provide accurate information about the origins and evolution of SWOT analysis, perhaps the most widely used strategy tool in practice worldwide. ChatGPT-4 is tested for historical accuracy and hallucinations. The API is prompted via a Python script with a series of structured questions drawn from an Excel file; the responses are recorded in another Excel file and rated on a binary scale. Our findings present a nuanced view of ChatGPT-4's capabilities. We observe that while ChatGPT-4 demonstrates a high level of proficiency in describing and outlining the general concept of SWOT analysis, there are notable discrepancies when it comes to detailing its origins and evolution. These inaccuracies range from minor factual errors to more serious hallucinations that deviate from the evidence in scholarly publications. However, we also find that ChatGPT-4 spontaneously produces historically accurate facts. Our interpretation of these results is that ChatGPT-4 has been trained largely on easily available websites and only to a very limited extent on scholarly publications on SWOT analysis, especially those behind a paywall. We conclude with four propositions for future research.
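
The pipeline described above (prompting the API from a Python script, reading questions from one Excel file and writing responses to another) can be sketched roughly as follows. This is a minimal illustration only, assuming the OpenAI Python client and pandas; the file names, column names, and model identifier are assumptions for illustration and are not taken from the paper.

```python
# Minimal sketch of the prompting pipeline described in the abstract.
# File names, column names, and the model identifier are illustrative
# assumptions, not details reported by the authors.
import pandas as pd
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask(question: str) -> str:
    """Send one structured question to the chat completions API."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": question}],
        temperature=0,
    )
    return response.choices[0].message.content


# Read the structured questions, query the model, and save the answers
# so they can later be rated on a binary (accurate / inaccurate) scale.
questions = pd.read_excel("questions.xlsx")        # expects a 'question' column
questions["answer"] = questions["question"].apply(ask)
questions["accurate"] = ""                         # to be filled in by human raters
questions.to_excel("answers.xlsx", index=False)
```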
