Abstract

A central challenge for trust and reputation systems (TRSs) is evaluating their ability to withstand attacks, since these systems are vulnerable to many attack types. Simulation methods have been used to evaluate TRSs, but they cannot detect new attacks and do not guarantee overall robustness. Verification methods address this limitation by examining the entire state space and detecting all possible attacks; however, they are often impractical for large models and real environments because they suffer from the state space explosion problem. To tackle this issue, we propose a deep reinforcement learning approach for evaluating the robustness of TRSs, in which an agent learns how to attack a system and find the best attack plan without prior knowledge. Because our method uses a deep Q-network rather than storing and examining the entire state space, it avoids the state space explosion problem. We tested the proposed method on five well-known reputation models under various attack goals, including selfishness, maliciousness, competition, and slandering. The results show that the method identifies the best attack plan and executes it successfully, demonstrating its effectiveness in evaluating the robustness of TRSs.
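The core idea, an attacker agent that learns a policy against a reputation system while a Q-function approximator stands in for any explicit enumeration of the state space, can be illustrated with a minimal sketch. Everything below is an illustrative assumption, not the authors' implementation: the toy reputation dynamics in `step`, the action names, and the linear approximator (a stand-in for the paper's deep Q-network) are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a reputation system: the "state" is the
# attacker's reputation score in [0, 1]; the action names are illustrative.
ACTIONS = ["behave_honestly", "badmouth_competitor", "act_maliciously"]

def step(rep, action):
    """Toy dynamics (an assumption): honest behaviour raises reputation,
    while attacks pay off only once reputation is high enough."""
    if action == 0:                       # behave honestly
        return min(1.0, rep + 0.1), 0.0
    if rep > 0.5:                         # attack from high reputation succeeds
        return max(0.0, rep - 0.2), 1.0
    return max(0.0, rep - 0.2), -1.0      # attack from low reputation is caught

def features(rep):
    # Small feature vector; a deep Q-network would learn features instead.
    return np.array([1.0, rep, rep * rep])

W = np.zeros((len(ACTIONS), 3))           # one weight row per action
alpha, gamma, eps = 0.05, 0.9, 0.2        # step size, discount, exploration

for episode in range(500):
    rep = 0.5
    for t in range(20):
        phi = features(rep)
        q = W @ phi
        # Epsilon-greedy action selection over the approximated Q-values.
        a = int(rng.integers(len(ACTIONS))) if rng.random() < eps else int(np.argmax(q))
        next_rep, r = step(rep, a)
        # Semi-gradient Q-learning update: no table over states is ever stored.
        target = r + gamma * np.max(W @ features(next_rep))
        W[a] += alpha * (target - q[a]) * phi
        rep = next_rep

# Greedy "attack plan" read off the learned Q-function at two reputations.
plan = [ACTIONS[int(np.argmax(W @ features(r)))] for r in (0.1, 0.9)]
print(plan)
```

In this toy setting the agent learns that attacks are only worthwhile from a high-reputation state, i.e. it discovers a build-reputation-then-attack plan on its own, which mirrors how the proposed method searches for an attack plan without prior knowledge of the system.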

