Abstract

Security analysts rely on scenarios to assess vulnerabilities, project attacks, and decide on security requirements that mitigate threats. However, eliciting natural language scenarios from stakeholders can be an ad hoc process, subject to ambiguity and incompleteness. In this article, we examine systematic scenario elicitation by introducing a method based on user stories that uses a simplified process model of iterative scenario refinement. The process consists of three steps: 1) eliciting an interaction statement that describes a critical action performed by a user or system process; 2) eliciting one or more descriptive statements about a technology that enables the interaction; and 3) refining the technology into technical variants that correspond to design alternatives. We empirically evaluated our method by implementing our prototype in a user study that collected 30 security scenarios from participants. Our analysis shows the proposed method to be effective: participants had a 100 percent task completion rate, with 57 percent of participants achieving complete task success and the remaining 43 percent achieving partial task success. We also show the effect of security domain knowledge and the benefit of using structure when collecting security requirements in natural language text. Finally, we present lessons learned and future research directions.
