Abstract

In recent years, incoherent approaches to the generation, transport, and detection of millimeter-wave Radio-over-Fiber signals have attracted considerable attention owing to their technological simplicity and cost-effectiveness, which come at the expense of additional phase-induced noise at the receiver output. Deep learning, a subset of machine learning, has recently proven effective at improving the performance of communication blocks, particularly for signal compression, signal detection, and end-to-end communications. In this article, we propose and demonstrate a new receiver architecture that incorporates deep learning at the receiver. The proposed receiver is demonstrated on an unlocked-heterodyning Radio-over-Fiber link. Results show that the proposed deep-learning-based receiver exhibits greater tolerance to phase-induced noise, with a bit error rate improvement from $10^{-1}$ to $10^{-5}$. In addition, the proposed deep-learning-based receiver outperforms the conventional self-homodyning-based approach, in terms of bit error rate, when the frequency spacing between the reference tone and the main data signal is small.
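
To make the idea concrete, the sketch below illustrates one way a learned detector can tolerate the uncorrelated phase drift of an unlocked heterodyne beat: a small fully connected network classifies each QPSK symbol from a short window of surrounding I/Q samples, letting it implicitly track the slowly varying phase. All choices here (QPSK modulation, Wiener phase-noise model, window size, layer widths, training setup) are illustrative assumptions, not the architecture or link parameters reported in the article.

```python
# Hypothetical minimal sketch of a deep-learning-based symbol detector under
# phase noise; it is NOT the authors' published receiver design.
import numpy as np
import torch
import torch.nn as nn

rng = np.random.default_rng(0)
W = 5  # number of consecutive received samples fed to the network (assumed)

def qpsk_with_phase_noise(n_sym, phase_step=0.05, snr_db=15):
    """QPSK symbols corrupted by a Wiener (random-walk) phase-noise process,
    mimicking an unlocked heterodyne beat, plus AWGN."""
    labels = rng.integers(0, 4, n_sym)                        # symbol indices 0..3
    symbols = np.exp(1j * (np.pi / 4 + np.pi / 2 * labels))   # unit-energy QPSK
    phase = np.cumsum(rng.normal(0, phase_step, n_sym))       # drifting phase
    noise_std = 10 ** (-snr_db / 20) / np.sqrt(2)
    awgn = noise_std * (rng.normal(size=n_sym) + 1j * rng.normal(size=n_sym))
    return symbols * np.exp(1j * phase) + awgn, labels

def windows(rx, labels, w=W):
    """Stack w consecutive I/Q samples as features for the centre symbol."""
    half = w // 2
    feats, targs = [], []
    for k in range(half, len(rx) - half):
        seg = rx[k - half:k + half + 1]
        feats.append(np.concatenate([seg.real, seg.imag]))
        targs.append(labels[k])
    return (torch.tensor(np.array(feats), dtype=torch.float32),
            torch.tensor(np.array(targs), dtype=torch.long))

# Small fully connected classifier: 2*W real inputs -> 4 QPSK classes.
model = nn.Sequential(nn.Linear(2 * W, 64), nn.ReLU(),
                      nn.Linear(64, 64), nn.ReLU(),
                      nn.Linear(64, 4))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

rx, labels = qpsk_with_phase_noise(50_000)
X, y = windows(rx, labels)

for epoch in range(200):                   # short full-batch training loop
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

rx_test, labels_test = qpsk_with_phase_noise(10_000)
Xt, yt = windows(rx_test, labels_test)
with torch.no_grad():
    ser = (model(Xt).argmax(1) != yt).float().mean().item()
print(f"symbol error rate after training: {ser:.4f}")
```

The windowed input is the key design choice in this sketch: a conventional memoryless detector sees only a rotated constellation point, whereas the network can exploit neighboring samples to estimate and compensate the phase drift.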
