End-to-end learning enables communication systems to achieve better overall performance than conventional blockwise designs. By modeling the channel with neural networks and training the transmitter and receiver over this differentiable channel model, the whole system can be jointly optimized. However, the channel modeling methods used in existing schemes, such as generative adversarial networks and long short-term memory networks, have complex architectures and cannot track channel variations, which limits the effectiveness of end-to-end learning. In addition, the complexity of the neural networks deployed at the transmitter and receiver is too high for practical applications. In this work, we propose an efficient, low-complexity end-to-end deep learning framework and experimentally validate it on a 100G passive optical network. The framework uses a noise adaptation network to model the channel response and noise distribution, and it combines offline pretraining with online tracking training to improve the efficiency and accuracy of channel modeling. The transmitter consists of a pattern-dependent look-up table (PDLUT) implemented as a neural network (NN-PDLUT) with a single convolutional layer, and the receiver is likewise a neural network with a single convolutional layer, so the end-to-end signal processing is kept very simple. The experimental results show that end-to-end learning improves the receiver sensitivity by 0.85 and 1.59 dB compared with receiver-only equalization based on Volterra nonlinear equalization (VNLE) and with joint equalization based on a PDLUT and a feed-forward equalizer, respectively. Moreover, the number of multiply-accumulate operations consumed by the transmitter and receiver in the end-to-end learning scheme is reduced by 75.7% relative to VNLE-based receiver-only equalization.
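
To make the described structure concrete, the following is a minimal sketch of such an end-to-end pipeline, assuming a PyTorch implementation. The layer sizes, tap counts, modulation format, and the surrogate channel used here are illustrative placeholders, not the configuration reported in the paper: a single-convolutional-layer transmitter (standing in for the NN-PDLUT) and a single-convolutional-layer receiver are jointly optimized through a frozen, differentiable channel model.

```python
# Illustrative sketch only: assumes PyTorch; all sizes and the channel model
# are hypothetical placeholders, not the authors' experimental configuration.
import torch
import torch.nn as nn


class SingleConvTx(nn.Module):
    """Transmitter: one 1-D convolutional layer over the symbol sequence,
    playing the role of an NN-based pattern-dependent look-up table."""
    def __init__(self, taps=15):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size=taps, padding=taps // 2)

    def forward(self, symbols):          # symbols: (batch, 1, length)
        return self.conv(symbols)


class SurrogateChannel(nn.Module):
    """Differentiable channel surrogate: a small network pretrained offline
    (and updated online) to mimic the channel response and noise. Here it is
    a placeholder conv layer plus additive Gaussian noise."""
    def __init__(self, taps=31):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size=taps, padding=taps // 2)

    def forward(self, x):
        return self.conv(x) + 0.01 * torch.randn_like(x)


class SingleConvRx(nn.Module):
    """Receiver: again a single 1-D convolutional layer acting as equalizer."""
    def __init__(self, taps=15):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size=taps, padding=taps // 2)

    def forward(self, x):
        return self.conv(x)


tx, channel, rx = SingleConvTx(), SurrogateChannel(), SingleConvRx()

# The channel surrogate would be fitted to measured data first, then frozen
# while the transmitter and receiver are optimized through it.
for p in channel.parameters():
    p.requires_grad_(False)

optimizer = torch.optim.Adam(list(tx.parameters()) + list(rx.parameters()), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(200):
    # Random PAM-4 symbol sequences as stand-in training data.
    symbols = torch.randint(0, 4, (32, 1, 256)).float() * 2 - 3
    recovered = rx(channel(tx(symbols)))
    loss = loss_fn(recovered, symbols)   # gradients flow Rx -> channel -> Tx
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Because the surrogate channel is differentiable, gradients from the receiver loss propagate back through it to the transmitter, which is what allows the two single-layer networks to be trained jointly rather than optimized block by block.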