Abstract

Role-based access control (RBAC) is widely adopted in the information and communication technology domain for authorization purposes. However, because organizational access control (AC) systems contain a very large number of entities, static RBAC management can be inefficient and costly, and it can expose the system to cybersecurity threats. In this article, a novel hybrid RBAC model is proposed, based on the principles of offline deep reinforcement learning (RL) and Bayesian belief networks. The proposed framework employs a fully offline RL agent that models the behavioral history of users as a Bayesian belief-based trust indicator. The initial static RBAC policy is thus improved dynamically through off-policy learning, while compliance of internal users with the security rules of the system is guaranteed. By deploying our implementation in the smart grid domain, specifically within a Distributed Energy Resources (DER) ecosystem, we provide an end-to-end proof of concept of the model. Finally, a detailed analysis and evaluation of the offline training phase of the RL agent is provided, while the online deployment of the hybrid RL-based RBAC model in the DER ecosystem highlights its key operational features and its benefits over traditional RBAC models.
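The abstract does not detail the implementation, but the core idea of combining a static role-permission map with a Bayesian belief-based trust indicator can be illustrated with a minimal sketch. The example below assumes a Beta-Bernoulli belief over each user's rule compliance and uses hypothetical names (TrustBelief, HybridRBAC, required_trust); the paper's offline RL policy-improvement step is not reproduced here.

```python
# Illustrative sketch only (not the authors' implementation): a Beta-Bernoulli
# posterior over rule compliance serves as the Bayesian trust indicator that
# gates permissions granted by a static RBAC role-permission map.
from dataclasses import dataclass, field

@dataclass
class TrustBelief:
    """Beta posterior over the probability that a user complies with security rules."""
    alpha: float = 1.0  # pseudo-count of observed compliant actions (uniform prior)
    beta: float = 1.0   # pseudo-count of observed violations (uniform prior)

    def update(self, compliant: bool) -> None:
        # Conjugate Bayesian update from a single observed action
        if compliant:
            self.alpha += 1.0
        else:
            self.beta += 1.0

    @property
    def trust(self) -> float:
        # Posterior mean compliance probability
        return self.alpha / (self.alpha + self.beta)

@dataclass
class HybridRBAC:
    """Static role-to-permission map augmented with a per-user trust gate."""
    role_permissions: dict[str, set[str]]
    required_trust: float = 0.7  # hypothetical threshold; would be tuned offline
    beliefs: dict[str, TrustBelief] = field(default_factory=dict)

    def record(self, user: str, compliant: bool) -> None:
        # Fold one behavioral observation into the user's trust belief
        self.beliefs.setdefault(user, TrustBelief()).update(compliant)

    def allow(self, user: str, role: str, permission: str) -> bool:
        # Access requires both the static RBAC grant and sufficient trust
        static_ok = permission in self.role_permissions.get(role, set())
        trust = self.beliefs.get(user, TrustBelief()).trust
        return static_ok and trust >= self.required_trust

# Example: a DER operator whose repeated violations dynamically revoke access
rbac = HybridRBAC({"der_operator": {"read_telemetry", "set_power_limit"}})
for _ in range(3):
    rbac.record("alice", compliant=True)
print(rbac.allow("alice", "der_operator", "set_power_limit"))  # True  (trust = 0.80)
for _ in range(5):
    rbac.record("alice", compliant=False)
print(rbac.allow("alice", "der_operator", "set_power_limit"))  # False (trust = 0.40)
```

In this sketch the conjugate Beta update stands in for the behavioral-history model: repeated violations shrink the posterior mean compliance below the threshold, so a permission granted by the static role mapping is revoked dynamically.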
