Nowadays, web robots are predominantly used for auto-accessing web content, accounting for almost one-third of total web traffic and often posing threats to the security, privacy, and performance of various web applications. Detecting these robots is essential, and both online and offline methods are employed. One popular offline method is weblog feature-based automated learning. However, this method alone cannot accurately identify web robots that continuously evolve and camouflage themselves. Web content features combined with weblog features are used to detect such robots, based on the assumption that human users exhibit specific interests while robots navigate web pages randomly. State-of-the-art web content-based feature methods lack the ability to generate coherent topics, which can degrade the performance of classification models. Therefore, we propose a new content semantic feature extraction method that uses the LDA2Vec topic model, combining the strengths of LDA and the Word2Vec model to produce more semantically coherent topics by exploiting website content for a web session. To detect web robots effectively, the proposed approach combines web resource content semantic features with log-based features. The proposed approach is evaluated on access logs and content data from an e-commerce website. The F-score, balanced accuracy, G-mean, and Jaccard similarity are used as performance measures, and the coherence score metric is used to determine the number of topics for a session. Experimental results demonstrate that a combination of weblog and content semantic features is effective in web robot detection.
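To make the pipeline described above concrete, the following is a minimal sketch of the feature-combination idea, not the paper's implementation: it uses standard gensim LDA as a stand-in for the LDA2Vec model, selects the number of topics per session corpus by c_v coherence score, and concatenates the resulting topic features with weblog features for a classifier. The names `session_docs`, `log_features`, and `labels` are hypothetical placeholders, not artifacts from the paper.

```python
# Sketch only: gensim LDA stands in for the paper's LDA2Vec model;
# `session_docs` (tokenized page content per web session), `log_features`
# (weblog feature matrix), and `labels` are assumed, hypothetical inputs.
import numpy as np
from gensim.corpora import Dictionary
from gensim.models import LdaModel, CoherenceModel
from sklearn.ensemble import RandomForestClassifier


def select_num_topics(texts, candidate_ks=(5, 10, 15, 20)):
    """Pick the topic count with the highest c_v coherence score."""
    dictionary = Dictionary(texts)
    corpus = [dictionary.doc2bow(t) for t in texts]
    best_k, best_score, best_model = None, -np.inf, None
    for k in candidate_ks:
        lda = LdaModel(corpus=corpus, id2word=dictionary,
                       num_topics=k, random_state=42, passes=5)
        score = CoherenceModel(model=lda, texts=texts,
                               dictionary=dictionary,
                               coherence="c_v").get_coherence()
        if score > best_score:
            best_k, best_score, best_model = k, score, lda
    return best_k, best_model, dictionary


def session_topic_features(lda, dictionary, texts, num_topics):
    """Represent each session as its topic-probability vector."""
    feats = np.zeros((len(texts), num_topics))
    for i, tokens in enumerate(texts):
        bow = dictionary.doc2bow(tokens)
        for topic_id, prob in lda.get_document_topics(bow, minimum_probability=0.0):
            feats[i, topic_id] = prob
    return feats


# Combine content semantic features with log-based features and classify:
# k, lda, dictionary = select_num_topics(session_docs)
# content_feats = session_topic_features(lda, dictionary, session_docs, k)
# X = np.hstack([log_features, content_feats])
# clf = RandomForestClassifier().fit(X, labels)
```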