Abstract

The freedom of the Deep Web offers a safe place where people can express themselves anonymously, but it also allows them to conduct illegal activities. In this paper, we present and make publicly available a new dataset of active Darknet domains, which we call "Darknet Usage Text Addresses" (DUTA). We built DUTA by sampling the Tor network for two months and manually labeling each address into one of 26 classes. Using DUTA, we compared two well-known text representation techniques crossed with three different supervised classifiers to categorize Tor hidden services. We also fixed the pipeline elements and identified the aspects that have a critical influence on the classification results. We found that the combination of TF-IDF word representation with a Logistic Regression classifier achieves a 10-fold cross-validation accuracy of 96.6% and a macro F1 score of 93.7% when classifying a subset of illegal activities from DUTA. The strong performance of the classifier could support tools that help the authorities detect these activities.

Highlights

  • If we think about the web as an ocean of data, the Surface Web is no more than the slight waves that float on the top

  • While manually labeling "Darknet Usage Text Addresses" (DUTA), we realized that some forums on Hidden Services (HS) contain numerous web pages that all belong to a single class; e.g., we found a forum about child pornography with more than 800 pages of textual content, so we split it into individual samples, each representing a single forum page, and added them to the dataset

  • We can see that the pipeline of Term Frequency-Inverse Document Frequency (TF-IDF) with Logistic Regression (LR) achieves the best results, with a macro F1 score of 93.7% and the highest cross-validation accuracy of 96.6%
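
The best-performing pipeline reported above (TF-IDF word features fed to a Logistic Regression classifier, evaluated with 10-fold cross-validation) can be sketched with scikit-learn. This is a minimal illustration, not the authors' code: the toy corpus and class names below are invented stand-ins for the DUTA samples.

```python
# Sketch of a TF-IDF + Logistic Regression text-classification pipeline,
# evaluated with 10-fold cross-validation as described in the paper.
# The documents and labels are toy placeholders, not DUTA data.
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical two-class toy corpus (10 samples per class so that
# stratified 10-fold cross-validation is possible).
docs = ["buy bitcoin wallet escrow market"] * 10 + \
       ["forum discussion board community thread"] * 10
labels = ["marketplace"] * 10 + ["forum"] * 10

pipeline = Pipeline([
    ("tfidf", TfidfVectorizer(lowercase=True)),   # TF-IDF word representation
    ("clf", LogisticRegression(max_iter=1000)),   # Logistic Regression classifier
])

# 10-fold cross-validation accuracy, one score per fold.
scores = cross_val_score(pipeline, docs, labels, cv=10, scoring="accuracy")
print(scores.mean())
```

On real data one would also tune the vectorizer (n-gram range, vocabulary size) and the regularization strength of the classifier, which the paper identifies as aspects with a critical influence on the results.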


Summary

Introduction

If we think about the web as an ocean of data, the Surface Web is no more than the slight waves that float on the top. The Surface Web is the portion of the web that can be crawled and indexed by standard search engines, such as Google or Bing. Despite their existence, an enormous part of the web remains unindexed due to its vast size and the lack of hyperlinks, i.e. it is not referenced by other web pages. This part, which cannot be found using a search engine, is known as the Deep Web (Noor et al., 2011; Boswell, 2016). Its content might be locked and require human interaction to access, e.g. solving a CAPTCHA or entering log-in credentials. This type of web page is referred to as a "database-driven" website. The Tor community refers to Darknet websites as "Hidden Services" (HS), which can be accessed via a special browser called the Tor Browser.

