Abstract

Concept drift is a challenge in click fraud detection: frequent changes in publishers' ground-truth labels complicate the identification of fraudulent behavior. Transfer learning, which carries knowledge from previously learned domains into newer ones, can make these shifts more tractable while saving training time and improving model performance. However, the lack of other publicly available user-click datasets complicates the use of transfer learning. Therefore, to apply transfer learning to predicting publishers' conduct under changing labels, this work transforms 1D user-click non-image features into 2D graphical images. It proposes a deep convolutional neural network-based transfer learning (DCNNTr) framework that employs different pre-trained Deep Convolutional Neural Network (DCNN) models as powerful feature extractors, leveraging prior learning to avoid training from scratch. The robust features extracted from the 2D graphical images are then fed to machine learning models that classify publishers as fraudulent or non-fraudulent. By using their weighted layers for feature extraction, DCNN models exploit their computational efficiency and local focus. We evaluated the designed model on the FDMA 2012 user-click dataset using precision, recall, F1-score, and AUC. A distinctive contribution of this work is the transformation of time-series user-click non-image data into image form. Experimental results show that features extracted with DenseNet121, followed by a gradient tree boosting (GTB) classifier, identify fraudulent publishers with an average precision of 79.8%.
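The pipeline the abstract outlines can be sketched as follows. This is a minimal illustration on synthetic data, not the paper's implementation: the feature count, the 8x8 reshape, and the flatten-as-extractor placeholder are all assumptions, and in the actual framework a pre-trained DCNN such as DenseNet121 would replace the flattening step as the feature extractor.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for per-publisher 1D user-click features
# (hypothetical: 64 features per publisher, e.g. binned click counts).
n_publishers, n_features = 200, 64
X_1d = rng.random((n_publishers, n_features))
y = rng.integers(0, 2, n_publishers)  # 1 = fraudulent, 0 = legitimate (synthetic labels)

# Step 1: reshape each 1D feature vector into a 2D "image" (here an 8x8 grid),
# mirroring the paper's 1D-to-2D transformation (the exact mapping is not given here).
X_img = X_1d.reshape(n_publishers, 8, 8)

# Step 2: in the paper, a pre-trained DCNN (e.g. DenseNet121) extracts features
# from these images. As a runnable placeholder we simply flatten the images;
# a real extractor would replace this single line.
X_feat = X_img.reshape(n_publishers, -1)

# Step 3: classify publishers with gradient tree boosting (GTB).
X_tr, X_te, y_tr, y_te = train_test_split(X_feat, y, random_state=0)
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
preds = clf.predict(X_te)
```

Because the DCNN is used only as a frozen feature extractor, swapping the placeholder in Step 2 for a pre-trained network's pooled activations leaves Steps 1 and 3 unchanged.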
