Abstract

Evaluating the effectiveness of defense technologies requires including a human element, especially when those technologies target human cognition and emotions. One of the biggest challenges facing researchers in behavioral cybersecurity is participant recruitment. Researchers often rely on college students, the general public, real-world hackers, or hard-to-reach populations (e.g., professional red teamers) to test the effectiveness of cybersecurity defense techniques. However, recruiting from these populations has drawbacks, including high cost, time constraints, and manageability and accessibility issues. This research explored the feasibility of using two popular crowdsourcing platforms, Amazon Mechanical Turk and Prolific, to conduct web hacking experiments. Our study is the first to use crowdsourcing platforms to run hacking experiments for scientific purposes. While recruitment is challenging, the paradigm of existing crowdsourcing platforms can be useful for understanding adversarial behavior, as it facilitates access to a diverse set of participants and allows researchers to conduct longitudinal and cross-cultural assessments. In particular, crowdsourcing platforms offer cybersecurity researchers a valuable opportunity to investigate Oppositional Human Factors (OHFs) in a manageable and flexible way.
