Abstract

Slow-hash algorithms are proposed to defend against traditional offline password recovery by making the hash function very slow to compute. In this paper, we study the problem of slow-hash recovery on a large scale. We attack the problem by proposing a novel concurrent model that guesses the target password hash by leveraging known passwords from the largest-ever password corpus. Previously proposed password-reuse learning models are specifically designed for targeted online guessing against a single hash and thus cannot be efficiently parallelized for massive-scale offline recovery, which is demanded by modern hash-cracking tasks. In particular, because the size of a probabilistic context-free grammar (PCFG for short) model is non-trivial and keeping track of the next most probable password to guess across all accounts is difficult, we choose clever data structures and expand transformations only as needed to make the attack computationally tractable. Our adoption of a max-min heap, which globally ranks weak accounts for both expanding and guessing according to unified PCFGs and allows for concurrent global ranking, significantly increases the number of hashes that can be recovered within a limited time. For example, 59.1% of the accounts in one of our target password lists can be found in our source corpus, allowing our solution to recover 20.1% of the accounts within one week at an average speed of 7,200 non-identical passwords cracked per hour. Previous solutions such as oclHashcat (using the default configuration) crack at an average speed of 28 and would need months to recover the same number of accounts with equal computing resources, which is infeasible for a real-world attacker who must weigh the gain against the cracking cost. This implies an underestimated threat to slow-hash-protected password dumps. Our method provides organizations with a better model of offline attackers, helping them decide the hashing costs of slow-hash algorithms and detect potentially vulnerable credentials before attackers do.
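
For readers unfamiliar with PCFG-based guessing, the following minimal sketch shows how such a grammar assigns probabilities to concrete password guesses, which is what any global ranking of candidates must be built on. The structures, terminal tables, and the `guess_probability` function are illustrative assumptions for exposition, not the paper's implementation.

```python
# Minimal, illustrative PCFG probability model for password guesses.
# Grammar, probabilities, and names are assumptions, not the paper's code.

# P(base structure): e.g. "L6D2" = 6 letters followed by 2 digits.
BASE_STRUCTURES = {"L6D2": 0.4, "L8": 0.35, "D6": 0.25}

# P(terminal | segment label), learned from a source password corpus.
TERMINALS = {
    "L6": {"monkey": 0.02, "dragon": 0.015},
    "L8": {"password": 0.03, "iloveyou": 0.02},
    "D2": {"12": 0.05, "99": 0.03},
    "D6": {"123456": 0.08},
}

def guess_probability(structure: str, segments: list[str]) -> float:
    """Probability of a concrete guess = P(structure) * product of P(terminal)."""
    p = BASE_STRUCTURES.get(structure, 0.0)
    # Split the structure string into segment labels, e.g. "L6D2" -> ["L6", "D2"].
    labels, i = [], 0
    while i < len(structure):
        j = i + 1
        while j < len(structure) and structure[j].isdigit():
            j += 1
        labels.append(structure[i:j])
        i = j
    for label, terminal in zip(labels, segments):
        p *= TERMINALS.get(label, {}).get(terminal, 0.0)
    return p

# Example: probability of guessing "monkey12" under structure L6D2.
print(guess_probability("L6D2", ["monkey", "12"]))  # 0.4 * 0.02 * 0.05 = 0.0004
```

Ranking candidates by such probabilities, rather than enumerating a fixed rule list per account, is what allows the attack to always spend its next slow hash computation on the globally most promising (account, guess) pair.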

Highlights

  • Slow-hash algorithms are regarded as safe protections of low-entropy passwords without secret keys

  • We identify a less-studied issue that degrades the efficiency of massive-scale slow-hash recovery: weak accounts are blocked by stronger accounts during expanding and guessing

  • We study the problem of offline password recovery in this paper


Summary

Introduction

Slow-hash algorithms are regarded as safe protections of low-entropy passwords without secret keys. Existing cross-site recovery models have to try a constant number of guesses for each target account (e.g., 1000) because, unlike trawling models, which generate a fixed sequence of guesses (i.e., transformation rules) for all accounts, cross-site models have varied source passwords for different accounts, which makes the subsequent expansions less deterministic than those in trawling models. This means a long time is spent on a difficult account (e.g., 20 s in total, assuming that the hashing process costs 0.02 s for each of the 1000 guesses) before the attack has a chance to recover the remaining accounts, many of which could be easier to crack. We identify a less-studied issue that degrades the efficiency of massive-scale slow-hash recovery: weak accounts are blocked by stronger accounts during expanding and guessing. We solve this by proposing concurrent global prioritization, and we overcome two key shortcomings introduced by the use of a huge global heap.
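
To make the prioritization concrete, here is a simplified, single-threaded sketch of cross-account global ranking using a standard binary heap. The actual system uses a concurrent max-min heap over unified PCFGs with on-demand expansion; the `candidate_guesses` generator, its probabilities, and the account tuples below are illustrative assumptions.

```python
# Simplified, single-threaded sketch of global prioritization across accounts.
# The real system uses a concurrent max-min heap and on-demand PCFG expansion;
# the candidate generator and its probabilities are illustrative assumptions.
import heapq
import bcrypt

def candidate_guesses(source_password):
    """Yield (probability, guess) pairs in decreasing probability, e.g.
    transformations of a password the user reused on another site."""
    yield 0.30, source_password
    yield 0.10, source_password + "1"
    yield 0.05, source_password.capitalize() + "!"

def crack(accounts):
    """accounts: list of (account_id, source_password, bcrypt_hash_bytes)."""
    heap, streams, cracked = [], {}, {}
    for acct, source, target_hash in accounts:
        gen = candidate_guesses(source)
        prob, guess = next(gen)
        streams[acct] = (gen, target_hash)
        # heapq is a min-heap, so negate probability to pop the best guess first.
        heapq.heappush(heap, (-prob, acct, guess))

    while heap:
        neg_prob, acct, guess = heapq.heappop(heap)
        gen, target_hash = streams[acct]
        if bcrypt.checkpw(guess.encode(), target_hash):   # the slow-hash trial
            cracked[acct] = guess
            continue                                      # stop expanding this account
        try:
            prob, nxt = next(gen)                         # expand only as needed
            heapq.heappush(heap, (-prob, acct, nxt))
        except StopIteration:
            pass                                          # candidates exhausted
    return cracked
```

Because the heap is popped globally, a hard account's low-probability candidates naturally sink below other accounts' high-probability ones, so cheap-to-crack accounts are no longer blocked behind a fixed per-account guessing budget.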

Offline Password Guessing
Password Reuse
Bcrypt Recovery
Recovery Metric
Our Solution
PCFG Expansion
Global Ranking
Bcrypt Trial
Experimental Evaluation
Experimental Setting
Methods compared:
Comparison of Various Approaches
Impact of Parallelization
Findings
Conclusions