Abstract

With the exponential growth in the size and complexity of the data and documents to be investigated, existing network forensic methods fall short in accuracy and detection ratio. Current techniques for network forensic analysis exhibit inherent limitations when processing data of high volume, variety, and velocity, making network forensics a time- and resource-consuming task. To balance time taken against output delivered, these techniques cap the amount of data under analysis, which results in polynomial time complexity. To mitigate these issues, this paper proposes an effective framework for handling large volume, variety, and velocity of data. The proposed architecture runs the MapReduce framework on top of the Hadoop Distributed File System (HDFS) and demonstrates its capability to address the storage and processing challenges of big data using cloud computing. Within this framework, a supervised machine learning algorithm (a random forest of decision trees) is implemented to achieve better sensitivity. The model is trained and validated on a publicly available CAIDA data set, and university network traffic samples of increasing size are used in the experiments. The results confirm the superiority of the proposed framework for network forensics, with an average accuracy of 99.34% in classifying malicious and non-malicious traffic.
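The classification step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the feature columns and labels here are synthetic placeholders, whereas the paper trains on CAIDA and university traffic captures, and the full system would run feature extraction as a MapReduce job over HDFS.

```python
# Hedged sketch of random-forest traffic classification, assuming
# synthetic flow features in place of the paper's CAIDA-derived data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Placeholder flow features (e.g., packet count, byte count, duration);
# label 1 = malicious, 0 = non-malicious, derived from a simple rule
# so the example is self-contained and deterministic.
n = 1000
X = rng.normal(size=(n, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# A random forest of decision trees, as in the proposed framework.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

acc = accuracy_score(y_test, clf.predict(X_test))
print(f"held-out accuracy: {acc:.2f}")
```

In a distributed deployment, the training data would be sharded across HDFS and per-shard feature extraction parallelized via map tasks, with the reduce stage aggregating features before a classifier such as the one above is fit.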
