Abstract

Purpose
This study aims to report on an automatic deidentification system for labeling and obfuscating personally identifiable information (PII) in student-generated text.

Design/methodology/approach
The authors evaluate the performance of their deidentification system on two data sets of student-generated text. Each data set was human-annotated for PII. The authors evaluate using two approaches: per-token PII classification accuracy and a simulated reidentification attack design. In the reidentification attack, two reviewers attempted to recover student identities from the data after PII was obfuscated by the authors' system. In both cases, results are reported in terms of recall and precision.

Findings
The authors' deidentification system recalled 84% of student name tokens in their first data set (96% of full names). On the second data set, it achieved a recall of 74% for student name tokens (91% of full names) and 75% for all direct identifiers. After the second data set was obfuscated by the authors' system, two reviewers attempted to recover the identities of students from the obfuscated data. They performed below chance, indicating that the obfuscated data presents a low identity disclosure risk.

Research limitations/implications
The two data sets used in this study are not representative of all forms of student-generated text, so further work is needed to evaluate performance on more data.

Practical implications
This paper presents an open-source and automatic deidentification system appropriate for student-generated text, with technical explanations and evaluations of performance.

Originality/value
Previous work on text deidentification has shown success in the medical domain. This paper builds on those approaches and applies them to text in the educational domain.
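The per-token evaluation described above can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' system: it treats each token as labeled either "PII" or "O" (non-PII) and computes recall and precision for the PII class. The tokens, labels, and function name are hypothetical.

```python
# Minimal sketch of token-level recall and precision for PII labeling.
# Each token carries a gold label and a predicted label: "PII" or "O".

def token_metrics(gold, predicted):
    """Return (recall, precision) for the positive class 'PII'."""
    tp = sum(1 for g, p in zip(gold, predicted) if g == "PII" and p == "PII")
    fn = sum(1 for g, p in zip(gold, predicted) if g == "PII" and p != "PII")
    fp = sum(1 for g, p in zip(gold, predicted) if g != "PII" and p == "PII")
    recall = tp / (tp + fn) if tp + fn else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    return recall, precision

# Hypothetical annotations for the sentence "My name is Jane Doe";
# the simulated system catches the first name but misses the surname.
gold      = ["O", "O", "O", "PII", "PII"]
predicted = ["O", "O", "O", "PII", "O"]

recall, precision = token_metrics(gold, predicted)
print(recall, precision)  # → 0.5 1.0
```

Name-level metrics (the "full names" figures reported in the Findings) would additionally require grouping adjacent name tokens into spans before scoring, which this sketch omits.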
