Abstract
Encrypted network traffic is known to leak information about its underlying content through side channels. Traffic fingerprinting attacks exploit this leakage, using machine learning to identify user activities such as website visits, videos streamed, and messenger app usage, thereby threatening user privacy. Although state-of-the-art traffic fingerprinting attacks perform well, even undermining the latest defenses, most are developed under the closed-set assumption. To deploy them in practical settings, they must be adapted to the open-set scenario, in which the attacker identifies target content while rejecting all other background traffic. Moreover, in practice, these models need to run on in-network devices such as programmable switches, which have limited memory and computation power. Model weight quantization can reduce the memory footprint of deep learning models while also allowing inference to be performed with integer operations rather than floating-point operations. Open-set classification for traffic fingerprinting has received little attention in prior work, and no prior work has examined the effect of quantization on the open-set performance of such models. In this work, we propose a framework for robust open-set classification of encrypted traffic based on three key ideas. First, we show that a well-regularized deep learning model improves open-set classification. Second, we propose a novel open-set classification method with three variants that perform consistently across multiple datasets. Third, we show that traffic fingerprinting models can be quantized without a significant drop in either closed-set or open-set accuracy, and can therefore be readily deployed on in-network computing devices.
Finally, we show that when these three components are combined, the resulting open-set classifier outperforms all other open-set classification methods evaluated across five datasets, with increases in F1-score ranging from 8.9% to 77.3%.
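As a rough illustration of the weight-quantization idea discussed above, and not the paper's actual scheme, the following is a minimal sketch of symmetric 8-bit weight quantization: weights are mapped to integers in [-127, 127] with a single per-tensor scale, so inference can use integer multiply-accumulates and rescale only once at the end. All names here are hypothetical.

```python
# Hypothetical sketch: symmetric per-tensor int8 weight quantization.
# This is an illustration of the general technique, not the paper's method.

def quantize_weights(weights, num_bits=8):
    """Map float weights to signed integers with one shared scale."""
    qmax = 2 ** (num_bits - 1) - 1          # 127 for int8
    scale = max(abs(w) for w in weights) / qmax
    q = [round(w / scale) for w in weights]  # each value now fits in int8
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the integer codes."""
    return [x * scale for x in q]

def int_dot(q_weights, q_inputs, w_scale, x_scale):
    """Integer multiply-accumulate; a single float rescale at the end."""
    acc = sum(qw * qx for qw, qx in zip(q_weights, q_inputs))
    return acc * w_scale * x_scale

weights = [0.42, -1.27, 0.08, 0.9]
q, scale = quantize_weights(weights)
approx = dequantize(q, scale)
```

The memory saving comes from storing each weight as one int8 code plus a single shared float scale, instead of a 32-bit float per weight; the accumulation in `int_dot` stays in integer arithmetic until the final rescale.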