Abstract

Efficient energy management in wireless sensor networks (WSNs) is pivotal for prolonging network lifetime and sustaining performance. This research introduces a novel approach to energy optimization that integrates an ARIMA-driven feature selection process with an Actor-Critic reinforcement learning model. The Intel Lab dataset, comprising readings from 54 nodes collected over a month, serves as the basis for experimentation; its realism grounds the validity of the study, and the temperature records gathered from strategically positioned nodes capture the intricacies of WSN dynamics. The ARIMA-driven feature selection refines the dataset by capturing the temporal dependencies critical for energy prediction. The Actor-Critic model, a hybrid of policy-based and value-based reinforcement learning, dynamically adapts energy allocation strategies based on learned policies, offering a responsive solution to the challenges posed by WSNs' ever-changing environments. Comparative analysis with existing methods shows lower energy consumption (0.32 mJ), an extended network lifetime (1501 rounds), and higher prediction accuracy (98%). The study's implications extend beyond the specific algorithms, suggesting a shift toward adaptive learning models in WSNs. The findings open avenues for future research on integrating machine learning models into sustainable and efficient WSN deployments, underscoring the growing importance of dynamic adaptation in sensor networks. Limitations, such as simulation realism and computational complexity, are acknowledged and motivate directions for future work.
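To make the two components concrete, the sketches below illustrate the general techniques named in the abstract; they are minimal, hypothetical reconstructions under stated assumptions, not the authors' implementation. The first assumes a per-node ARIMA model fitted to a temperature trace, with the fitted coefficients and a one-step forecast used as features that capture temporal dependencies; the order (1, 1, 1) and the synthetic series are illustrative choices, not the paper's configuration.

```python
# Minimal sketch of ARIMA-driven feature extraction for one node.
# Assumption: the pipeline fits an ARIMA model per node and uses its
# fitted parameters and one-step forecast as energy-prediction features.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
# Stand-in for one node's temperature trace from the Intel Lab dataset.
temps = 20 + np.cumsum(rng.normal(0, 0.1, 500))

result = ARIMA(temps, order=(1, 1, 1)).fit()
features = {
    "ar1": result.arparams[0],          # temporal dependency (AR term)
    "ma1": result.maparams[0],          # short-memory noise structure (MA term)
    "forecast": result.forecast(1)[0],  # one-step-ahead temperature prediction
}
print(features)
```

The second sketches a textbook Actor-Critic update on a toy energy-allocation task: at each round a node picks one of three candidate transmit-energy budgets, and the reward trades delivery success against energy spent. The environment, energy levels, and learning rates are hypothetical stand-ins, not values from the paper.

```python
# Minimal Actor-Critic sketch for adaptive energy allocation (toy setting).
import numpy as np

rng = np.random.default_rng(1)
energies = np.array([0.1, 0.2, 0.32])  # candidate energy budgets (mJ), illustrative
prefs = np.zeros(3)                    # actor: action preferences (softmax policy)
value = 0.0                            # critic: baseline value of the single state
alpha_actor, alpha_critic = 0.05, 0.1

def reward(i):
    # Higher energy -> higher delivery probability, minus the energy cost.
    delivered = rng.random() < 0.5 + 2.0 * energies[i]
    return float(delivered) - energies[i]

for _ in range(5000):
    pi = np.exp(prefs - prefs.max()); pi /= pi.sum()
    a = rng.choice(3, p=pi)
    td_error = reward(a) - value             # critic evaluates the outcome
    value += alpha_critic * td_error         # value-based update
    grad = -pi; grad[a] += 1.0               # gradient of log-softmax policy
    prefs += alpha_actor * td_error * grad   # policy-based update

print("learned allocation policy:", np.round(np.exp(prefs) / np.exp(prefs).sum(), 3))
```

The critic's temporal-difference error drives both updates, which is the hybrid of policy-based and value-based learning the abstract describes.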
