This paper explores two important ways in which close reading differs from natural language processing, the use of computer programs to decode, process, and replicate messages in a human language. It does so in order to highlight distinctive features of close reading that natural language processing does not reproduce. The first point of distinction concerns the nature of the meaning generated in each case. Whereas natural language processing proceeds on the principle that a text’s meaning can be deciphered by applying the rules governing the language in which the text is written, close reading is premised on the idea that this meaning lies in the interplay that the text prompts within its readers. Although the semantic theory of meaning on which natural language processing programs are based is often taken for granted today, I draw on phenomenological and hermeneutic theories, particularly those of Wolfgang Iser and Hans-Georg Gadamer, to explain why a different theory of meaning is needed to account for the meaning generated by close reading. Second, whereas natural language processing programs are considered successful when they generate what epistemologists call true beliefs about a text, I argue that close reading aims first and foremost at the development not of true belief but of understanding. To develop this distinction, I draw on recent scholarship on the epistemology of education, including the work of Duncan Pritchard, to explain how understanding differs from true belief and why attaining the latter is less educationally significant than attaining the former.