Abstract

Reasonable control over the storage of power materials allows enterprises to reduce costs and make better use of their resources. The traditional model cannot fully exploit the large volume of acquired data and information, which results in poor timeliness and low efficiency. To address these issues, this research presents a reinforcement-learning-based material storage control model for the power grid. Consumables are tracked in real time to achieve zero storage; materials for major disasters are stored in coordination with manufacturers; and emergency supplies are dynamically stocked and distributed using the reinforcement learning approach. Validation against specific data shows that the proposed three-tier dynamic storage approach based on reinforcement learning is highly efficient and significantly reduces the operating costs of power grid enterprises.
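The abstract does not specify the algorithm used for dynamic stocking, but the core idea of learning a reorder policy from cost feedback can be sketched with tabular Q-learning. The state is the current stock level, the action is the reorder quantity, and the reward penalizes holding and shortage costs. All parameters here (stock capacity, demand distribution, cost coefficients) are illustrative assumptions, not values from the paper.

```python
import random

random.seed(0)

# Illustrative Q-learning sketch of dynamic stocking (assumed parameters):
# state = current stock level, action = units to reorder.
MAX_STOCK = 10
ACTIONS = range(MAX_STOCK + 1)       # possible reorder quantities
HOLD_COST, SHORT_COST = 1.0, 5.0     # hypothetical per-unit costs
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1    # learning rate, discount, exploration

Q = {(s, a): 0.0 for s in range(MAX_STOCK + 1) for a in ACTIONS}

def step(stock, order):
    """Apply an order, sample demand, and return (next_stock, reward)."""
    stock = min(stock + order, MAX_STOCK)
    demand = random.randint(0, 5)    # assumed demand distribution
    shortage = max(demand - stock, 0)
    next_stock = max(stock - demand, 0)
    # Negative cost: penalize both held inventory and unmet demand.
    reward = -(HOLD_COST * next_stock + SHORT_COST * shortage)
    return next_stock, reward

stock = 0
for _ in range(5000):
    # Epsilon-greedy action selection.
    if random.random() < EPS:
        action = random.choice(list(ACTIONS))
    else:
        action = max(ACTIONS, key=lambda a: Q[(stock, a)])
    next_stock, reward = step(stock, action)
    best_next = max(Q[(next_stock, a)] for a in ACTIONS)
    Q[(stock, action)] += ALPHA * (reward + GAMMA * best_next - Q[(stock, action)])
    stock = next_stock

# Greedy policy: reorder quantity chosen at each stock level.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(MAX_STOCK + 1)}
print(policy)
```

In a real deployment the reward would come from observed storage and distribution costs rather than a simulated demand model, and a function approximator would replace the table for larger state spaces.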
