Abstract

Researchers in ‘hard’ science disciplines are exploring the transformative potential of Artificial Intelligence (AI) for advancing research in their fields. Their colleagues in the ‘soft’ sciences, however, have thus far produced only a limited number of articles on the subject. This paper addresses that gap. Our main hypothesis is that existing Large Language Models (LLMs) can closely align with human expert assessments in specialized social science surveys. To test it, we compare data from a multi-country expert survey with data collected from two powerful LLMs created by OpenAI and Google. The statistical difference between the two sets of data is minimal in most cases, supporting our hypothesis, albeit with certain limitations and within specific parameters. The tested language models demonstrate domain-agnostic algorithmic accuracy, indicating an inherent ability to incorporate human knowledge and independently replicate human judgment across various subfields without specific training. We refer to this property as the ‘implicit intelligence’ of Artificial Intelligence, a highly promising advancement for social science research.
