Abstract

The Confidentiality-Integrity-Availability (CIA) triad is a time-honored warhorse of security analysis. Qualitative assessment of security requirements based on the CIA triad is an important step in many standard procedures for selecting and deploying security controls. However, little attention has been devoted to monitoring how the CIA triad is used in practice, and how reliable the expert assessments that rely on it are. In this paper, a panel of 20 security experts was asked to apply the CIA triad to 45 practical security scenarios involving UAV-to-ground transmission of control and information data. The experts' responses were analyzed using Fleiss' kappa, a statistical test of inter-rater reliability. Results show agreement to be low (from 13.8% to 20.1% depending on the scenario), but higher on scenarios where the majority of experts judge tight security to be needed. A low number of polled experts is found to affect inter-rater reliability negatively; however, increasing this number beyond ten does not provide additional reliability. A bias toward giving a specific rating could be identified for 14 of the 20 experts. The six unbiased experts showed higher inter-rater agreement. These findings suggest that (i) there is no guaranteed "safety in numbers" when recruiting security expert panels and (ii) expert selection for security rating processes should include verifying the level of agreement on toy problems for all subsets of the panel, so as to highlight subsets with high inter-rater agreement.
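As a rough illustration of the reliability analysis mentioned above, the sketch below computes Fleiss' kappa for a small, hypothetical rating matrix; the experts, scenarios, and rating levels are invented for illustration only (they are not the paper's data), and the statsmodels library is assumed to be available.

# Illustrative sketch (hypothetical data): Fleiss' kappa for a panel of
# experts rating security scenarios, using statsmodels.
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Hypothetical rating matrix: rows = scenarios, columns = experts,
# entries = chosen security level (0 = low, 1 = medium, 2 = high).
ratings = np.array([
    [2, 2, 1, 2, 2],
    [0, 1, 0, 0, 2],
    [1, 1, 2, 1, 1],
    [0, 0, 0, 1, 0],
])

# aggregate_raters converts subject-by-rater data into subject-by-category counts.
counts, _categories = aggregate_raters(ratings)
kappa = fleiss_kappa(counts, method='fleiss')
print(f"Fleiss' kappa: {kappa:.3f}")

Kappa values near 1 indicate strong agreement beyond chance, values near 0 indicate agreement no better than chance, which is the kind of outcome the low percentages reported in the abstract point to.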
