Abstract

As the use of data mining and machine learning methods in the humanities becomes more common, it will be increasingly important to examine the implicit biases, assumptions, and limitations these methods bring with them. This article makes explicit some of the foundational assumptions of machine learning methods, and presents a series of experiments as a case study and object lesson in the potential pitfalls in the use of data mining methods for hypothesis testing in literary scholarship. The worst dangers may lie in the humanist's ability to interpret nearly any result, projecting his or her own biases into the outcome of an experiment, perhaps all the more unwittingly due to the superficial objectivity of computational methods. We argue that in the digital humanities, the standards for the initial production of evidence should be even more rigorous than in the empirical sciences because of the subjective nature of the work that follows. Thus, we conclude with a discussion of recommended best practices for making results from data mining in the humanities domain as meaningful as possible. These include methods for keeping the boundary between computational results and subsequent interpretation as clearly delineated as possible.
