Abstract

Deep learning practice, including in wireless communications, often relies on trial and error to optimize neural network (NN) structures and their hyperparameters. We show that Reservoir Computing, in particular the Echo State Network (ESN), is an ideal learning-based equalizer for a general fading channel. For an ESN equalizing a channel with known statistics, we theoretically derive the optimum reservoir weights, which in the state of the art are randomly initialized and lack interpretability. The theoretical results are validated with simulations. In contrast to existing literature, this work analytically adapts the NN structure to the problem at hand, guaranteeing optimum equalization under known channel statistics.
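For context, below is a minimal sketch of the conventional ESN equalizer the abstract contrasts against: the reservoir weights are randomly initialized and fixed, and only the linear readout is trained by ridge regression. The two-tap ISI channel, noise level, and all parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Hypothetical channel: BPSK symbols through a 2-tap ISI channel ---
n = 2000
symbols = rng.choice([-1.0, 1.0], size=n)         # transmitted BPSK symbols
h = np.array([1.0, 0.4])                          # assumed channel taps
received = np.convolve(symbols, h)[:n]
received += 0.05 * rng.standard_normal(n)         # additive noise

# --- Standard ESN: random fixed reservoir; only the readout is trained ---
n_res = 50
W_in = rng.uniform(-0.5, 0.5, size=n_res)         # random input weights
W = rng.uniform(-0.5, 0.5, size=(n_res, n_res))   # random recurrent weights
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius < 1 for echo state property

states = np.zeros((n, n_res))
x = np.zeros(n_res)
for t in range(n):
    x = np.tanh(W @ x + W_in * received[t])       # reservoir state update
    states[t] = x

# Ridge-regression readout mapping reservoir states to transmitted symbols
lam = 1e-2
W_out = np.linalg.solve(states.T @ states + lam * np.eye(n_res),
                        states.T @ symbols)

equalized = np.sign(states @ W_out)               # hard-decision equalizer output
print(f"bit error rate on training sequence: {np.mean(equalized != symbols):.4f}")
```

In this standard setup, `W_in` and `W` are drawn at random; the paper's contribution is to replace that random initialization with analytically derived optimum reservoir weights when the channel statistics are known.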
