Abstract

Capture the flag (CTF) is a competitive game used as an educational tool: students improve their cybersecurity knowledge and skills by solving challenges. Because system vulnerabilities are the leading cause of cyberattacks on critical systems, cybersecurity personnel who can detect such vulnerabilities are increasingly in demand, and challenges built around system vulnerabilities, such as pwnable challenges, are becoming more prominent in CTF competitions. Unlike other CTF challenges, solving a pwnable challenge requires considerable knowledge and skill, yet the traditional pass-or-non-pass evaluation used in CTF provides limited feedback about where a student's knowledge and skill gaps lie. To investigate this issue, we analyzed the results of the CTF competitions held by our research team in 2017, 2018, and 2020. The analysis revealed the need for a new evaluation system that provides students with detailed feedback while reducing the grading burden on educators. We therefore propose Pwnable-Sherpa, a cybersecurity training platform that defines three detailed evaluation points for a given pwnable challenge. The platform is designed around a multi-container architecture and an LLVM dummy pass, so the detailed assessments are graded simultaneously rather than sequentially, saving evaluation time.
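The abstract names an LLVM dummy pass as one building block of the grading pipeline without describing it. As a rough, non-authoritative illustration, a minimal dummy pass written against LLVM's new pass manager plugin interface could look like the sketch below; the pass name dummy-pass, the plugin name, and the diagnostic output are illustrative assumptions rather than details of Pwnable-Sherpa.

    #include "llvm/IR/PassManager.h"
    #include "llvm/Passes/PassBuilder.h"
    #include "llvm/Passes/PassPlugin.h"
    #include "llvm/Support/raw_ostream.h"

    using namespace llvm;

    namespace {
    // A "dummy" pass: it performs no transformation and only reports each
    // function it visits during compilation.
    struct DummyPass : PassInfoMixin<DummyPass> {
      PreservedAnalyses run(Function &F, FunctionAnalysisManager &) {
        errs() << "dummy-pass: visited " << F.getName() << "\n";
        return PreservedAnalyses::all();  // nothing was modified
      }
    };
    } // namespace

    // Plugin entry point: makes the pass available as "-passes=dummy-pass".
    extern "C" LLVM_ATTRIBUTE_WEAK PassPluginLibraryInfo llvmGetPassPluginInfo() {
      return {LLVM_PLUGIN_API_VERSION, "DummyPass", LLVM_VERSION_STRING,
              [](PassBuilder &PB) {
                PB.registerPipelineParsingCallback(
                    [](StringRef Name, FunctionPassManager &FPM,
                       ArrayRef<PassBuilder::PipelineElement>) {
                      if (Name == "dummy-pass") {
                        FPM.addPass(DummyPass());
                        return true;
                      }
                      return false;
                    });
              }};
    }

Built as a shared library, a pass like this can be exercised with opt, for example: opt -load-pass-plugin=./DummyPass.so -passes=dummy-pass -disable-output input.ll.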
