Abstract

Power allocation algorithms are used to mitigate interference in spectrum-sharing networks. Deep Reinforcement Learning-based models have recently been applied in unpredictable environments that require fast decision-making. When the network dynamics change, additional training is needed to adapt the pre-trained model to the new conditions. However, during this retraining the previously acquired knowledge is overwritten and forgotten as it is replaced by data from the current network dynamics. This letter proposes an Experienced-Instance Transfer strategy that exploits both historical and current knowledge to improve the network sum rate in a multi-cell network. The results show that exploiting this knowledge through experience replay improves the Deep Q-Network model's learning capability and reduces task degradation.
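
The abstract does not spell out the mechanism, but one natural reading is that retained historical transitions are replayed alongside current ones when retraining the DQN. The following minimal Python sketch illustrates that idea under this assumption; the class name MixedReplayBuffer, the historical_ratio parameter, and the transition format are hypothetical illustrations, not the paper's implementation.

```python
import random
from collections import deque

class MixedReplayBuffer:
    """Replay buffer that mixes retained historical (source-task) transitions
    with current (target-task) transitions when sampling DQN minibatches.

    historical_ratio sets the fraction of each minibatch drawn from the
    historical experiences; the rest comes from transitions collected
    under the current network dynamics. (Hypothetical design, not the
    letter's actual algorithm.)
    """

    def __init__(self, capacity=50_000, historical_ratio=0.25):
        self.historical = deque(maxlen=capacity)  # frozen pre-training experiences
        self.current = deque(maxlen=capacity)     # experiences from the new dynamics
        self.historical_ratio = historical_ratio

    def store_historical(self, transition):
        self.historical.append(transition)

    def store_current(self, transition):
        self.current.append(transition)

    def sample(self, batch_size):
        # Draw from both pools, falling back gracefully if either is small.
        n_hist = min(int(batch_size * self.historical_ratio), len(self.historical))
        n_curr = min(batch_size - n_hist, len(self.current))
        batch = (random.sample(self.historical, n_hist)
                 + random.sample(self.current, n_curr))
        random.shuffle(batch)  # avoid ordering bias within the minibatch
        return batch

# Usage: transitions here are (state, action, reward, next_state, done) tuples.
buffer = MixedReplayBuffer(historical_ratio=0.25)
buffer.store_historical((0.1, 3, 1.2, 0.2, False))  # from the pre-trained task
buffer.store_current((0.2, 1, 0.8, 0.3, False))     # from the current dynamics
minibatch = buffer.sample(2)
```

Keeping the historical pool frozen while the current pool fills with fresh transitions is one way to prevent new data from simply overwriting old knowledge, which matches the forgetting problem the abstract describes.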
