Abstract

The purpose of this research is to explore and develop Deep Reinforcement Learning and Q-Learning algorithms to improve Ethereum cybersecurity with respect to smart contract vulnerabilities, the smart contract market, and research leadership in the area. Deep Reinforcement Learning (Deep RL) is gaining popularity among AI researchers because it can handle complex, dynamic, and particularly high-dimensional cyber protection problems. The hallmark of RL is goal-oriented behavior that maximizes rewards, minimizes penalties or losses, and relies on real-time interaction between an agent and its environment. The paper examines the three major cryptocurrencies (Bitcoin, Litecoin, and Ethereum) and the role played by cyber-attacks. The Design Science Research paradigm, as applied in Information Systems research, was used because it is grounded in the idea that knowledge and understanding of a design problem and its solution are attained in the crafting of an artefact. The proposed constructs took the form of Deep Reinforcement Learning and Q-Learning algorithms designed to improve Ethereum cybersecurity. Smart contracts on the Ethereum blockchain can automatically enforce agreements made between two mutually unknown parties. Blockchain (BC) and artificial intelligence (AI) are used together so that each strengthens and complements the other. Consensus algorithms (CAs) of BC and deep reinforcement learning (DRL) in ETS were thoroughly reviewed. To integrate many DCRs and provide grid services, this article also proposes an effective incentive-based autonomous DCR control and management framework. The framework simultaneously adjusts the grid's active power with accuracy, optimizes DCR allocations, and increases profits for all prosumers and system operators. A model-free deep deterministic policy gradient-based strategy was used to find the best incentives in a continuous action space for persuading prosumers to reduce their energy consumption. Extensive experiments were carried out using real-world data to demonstrate the framework's efficacy.
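Because the abstract names Q-Learning as one of the proposed constructs, a minimal sketch of the underlying update rule is included below for orientation. The toy environment (audit "stages" as states, hypothetical audit checks as actions, a +1 reward for flagging a seeded vulnerability), the state/action counts, and all parameter values are illustrative assumptions, not the paper's actual design.

    # Minimal tabular Q-learning sketch; the "contract audit" environment is hypothetical.
    import random

    N_STATES, N_ACTIONS = 5, 3             # toy audit stages and audit actions (assumed)
    ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2  # learning rate, discount, exploration rate

    Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

    def step(state, action):
        # Hypothetical dynamics: audit action 1 in state 3 "detects" the seeded
        # vulnerability and ends the episode with a reward of +1.
        if state == 3 and action == 1:
            return 0, 1.0, True
        return (state + 1) % N_STATES, -0.01, False  # small cost per audit step

    for episode in range(500):
        state = 0
        for _ in range(100):  # cap the steps per episode
            # epsilon-greedy action selection
            if random.random() < EPSILON:
                action = random.randrange(N_ACTIONS)
            else:
                action = max(range(N_ACTIONS), key=lambda a: Q[state][a])
            next_state, reward, done = step(state, action)
            # Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
            target = reward + (0.0 if done else GAMMA * max(Q[next_state]))
            Q[state][action] += ALPHA * (target - Q[state][action])
            state = next_state
            if done:
                break

    print(Q[3])  # the learned values at state 3 should favour action 1

Replacing the Q-table with a neural-network approximator gives the Deep RL variants in the same family as the deep deterministic policy gradient approach the abstract mentions for continuous incentive actions.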
