Abstract

This chapter examines material published in the field of digital humanities in 2019. Key work published this year has grappled with longstanding conflicts at the heart of the field: whether and how computational methods should be applied to humanities data, and who should validate such methodologies. The chapter begins with new work by Ted Underwood, who makes the case for hypothesis-driven methods and the modelling of humanities data. It discusses how recent work in computational literary studies had appeared to avoid the trap into which much previous work had fallen, namely being perceived through the binaries of distant vs. close reading, computation vs. engagement, and objectivity vs. subjectivity. The continued friction over the appropriateness of certain computational methodological approaches was amplified by new work that called into question the statistical methods of a number of key works in the field in recent years. Nan Z. Da’s critique of computational literary studies through the lens of statistical rigour imploded the uneasy truce between computational methods and the more traditional questions and methods at the heart of literary studies. Da’s article reopens the debate about how digital humanities scholars use statistical methods, and about how greater reliance on such methods may demand greater cross-disciplinary oversight to ensure that they are used in a way that is both robust and appropriate. Her contribution is examined alongside the rash of responses to it from key scholars in the field, which together produced an important snapshot of the fractures and fundamentals of data-driven literary studies. I then turn to new and timely work by James E. Dobson, which argues for a third way, a Critical Digital Humanities that engages critically with computational as well as humanistic scholarship.

I survey important contributions on the impact of mass digitization, on historicism and the archive, and on how to study history in the age of digital archives and the historic web. Ian Milligan’s work provides a much-needed introduction to the potentials and pitfalls of studying recent history through the digital traces left behind, and self-consciously identifies areas in which greater cross-disciplinary scholarship and critical engagement will be needed as this area of study matures. Discussion then turns to work by Nanna Bonde Thylstrup on digital waste, which shows how connecting new media theory to waste studies can provide an important frame through which to examine issues of data toxicity and pollution. This work sets the stage for two landmark books on sex and race, which implore us to take a more careful look at the toxic technologies we build and the questions we ask of them. Both Caroline Criado Perez and Ruha Benjamin examine the damage done by data systems’ reliance on a ‘default’, frequently a white male, which forces us to see anything that departs from this norm as deviant. These works make a powerful case for reinventing the systems we increasingly rely on, questioning the underlying prejudice that created them, and rethinking the modes of meaning-making ascribed to them, especially when the prevailing narrative so often assumes a benign neutrality.
Finally, I examine these works alongside a new volume of essays on digital humanities and intersectionality edited by Barbara Bordalejo and Roopika Risam, which serves to amplify and contextualize the need for the approaches taken by Criado Perez and Benjamin, showing how deeply these power structures are enmeshed within the field.
