Abstract
Learned Bloom Filters (LBFs) have recently been proposed as an alternative to traditional Bloom filters that can reduce the amount of memory needed to achieve a target false positive probability when representing a given set of elements. LBFs combine a Machine Learning model with a traditional Bloom filter. However, if LBFs are going to be used as an alternative to Bloom filters, their security must also be considered. In this paper, the security of LBFs is studied for the first time and a vulnerability different from those of traditional Bloom filters is uncovered. In more detail, an attacker can easily construct a set of elements that are not in the filter but that trigger a much larger false positive probability than the target for which the filter was designed. The constructed attack set can then be used, for example, to launch a denial-of-service attack against the system that uses the LBF. A malicious URL case study is used to illustrate the proposed attacks and show their effectiveness in increasing the false positive probability of LBFs. The dataset under consideration includes nearly 485K URLs, of which 16.47% are malicious. Unfortunately, it seems that mitigating this vulnerability is not straightforward.
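To make the attacked structure concrete, the sketch below shows a minimal LBF (a learned model that pre-screens queries, backed by a traditional Bloom filter for the positives the model misses) and the kind of probing attack the abstract describes: querying candidate non-members and keeping those the filter accepts. All names, parameters, and the scoring function here are illustrative assumptions, not the paper's actual implementation or dataset.

```python
# Minimal LBF sketch plus the probing attack described above.
# Assumptions: the model exposes a score in [0, 1], a fixed threshold,
# and a backup Bloom filter; none of these details come from the paper.
import hashlib

class BloomFilter:
    """Plain Bloom filter used as the LBF's backup filter."""
    def __init__(self, m: int, k: int):
        self.m, self.k = m, k
        self.bits = [False] * m

    def _positions(self, item: str):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item: str) -> None:
        for p in self._positions(item):
            self.bits[p] = True

    def contains(self, item: str) -> bool:
        return all(self.bits[p] for p in self._positions(item))

class LearnedBloomFilter:
    """LBF: queries scoring at or above the threshold are reported as
    positives by the model; the rest fall through to the backup filter."""
    def __init__(self, model_score, threshold: float, backup: BloomFilter):
        self.score = model_score      # callable: element -> score in [0, 1]
        self.threshold = threshold
        self.backup = backup

    def contains(self, item: str) -> bool:
        if self.score(item) >= self.threshold:
            return True               # model positive (possibly a false positive)
        return self.backup.contains(item)

def build_attack_set(lbf, candidates, true_members, size):
    """Probing attack sketch: query candidate non-members and keep the
    ones the filter accepts. Every kept element is a guaranteed false
    positive, so the assembled set has a false positive rate near 1,
    far above the filter's design target."""
    attack = []
    for c in candidates:
        if c not in true_members and lbf.contains(c):
            attack.append(c)
            if len(attack) >= size:
                break
    return attack
```

The key design point the sketch illustrates is that, unlike a traditional Bloom filter, the learned model gives correlated false positives: elements similar to those the model was trained to accept are systematically misclassified, so an attacker who can query the filter (or approximate the model) can collect false positives far more efficiently than by random guessing.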