Abstract

Purpose: The purpose of this paper is to investigate the impact of web robots on usage statistics collected by Open Access (OA) institutional repositories (IRs), and techniques for mitigating that impact.

Design/methodology/approach: A close review of the literature provides a comprehensive list of web robot detection techniques. Reviews of system documentation and open source code, along with personal interviews, provide a comparison of the robot detection techniques used in the major IR platforms. An empirical test based on a simple random sample of downloads, with 96.20 per cent certainty, is undertaken to measure the accuracy of web robot detection in the IR of a large Irish university.

Findings: While web robot detection is not ignored in IRs, there are areas where the two main systems could be improved. The technique tested here successfully detected 94.18 per cent of the web robots visiting the site over a two-year period (recall), with a precision of 98.92 per cent. Because of the high level of robot activity in repositories, correctly labelling more robots has an exponential effect on the accuracy of usage statistics.

Research limitations/implications: This study is performed on one repository using a single system. Future studies across multiple sites and platforms are needed to determine the accuracy of web robot detection in OA repositories generally.

Originality/value: This is the only study to date to have investigated web robot detection in IRs. It puts forward the first empirical benchmarking of accuracy in IR usage statistics.
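For readers unfamiliar with the terms, the recall and precision figures reported in the findings follow their standard information-retrieval definitions. In the restatement below, the TP/FP/FN notation is introduced here for clarity and is not taken from the paper: TP is the number of robot downloads correctly flagged, FP the number of human downloads wrongly flagged, and FN the number of robot downloads missed.

\text{recall} = \frac{TP}{TP + FN} = 94.18\%, \qquad \text{precision} = \frac{TP}{TP + FP} = 98.92\%

In other words, the tested technique missed roughly 5.8 per cent of robot downloads, while roughly 1.1 per cent of the downloads it flagged as robots were in fact human.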
