The dynamism envisioned for future high-capacity gridless optical networks raises several distortion-mitigation challenges, such as suppressing interchannel interference (ICI) effects in any optical channel without information about its adjacent channels. Machine learning (ML)-based techniques have been proposed in recent works to estimate and mitigate various optical impairments, with promising results. We propose and evaluate two training strategies for supervised learning algorithms aimed at minimizing ICI effects in a gridless 3×16-Gbaud 16-quadrature amplitude modulation (QAM) Nyquist wavelength-division multiplexing (WDM) system. One strategy, called the updating strategy, is based on a training symbol sequence; the other, called the characterization strategy, relies on offline training using a prior system characterization. Artificial neural network (ANN), support vector machine (SVM), K-nearest neighbors (KNN), and extreme learning machine (ELM) algorithms are explored for both training strategies. Experimental results showed a bit error rate (BER) improvement at short training lengths for both strategies; for instance, gains of up to ∼4 dB in optical signal-to-noise ratio (OSNR) were achieved in a back-to-back scenario. In addition, the KNN and ELM algorithms showed significant BER reductions after transmission over 250 km of optical fiber. We also carried out a brief computational complexity analysis in which ELM required only 1.9% of the ANN processing time. Hence, ML-based techniques could enhance the performance of gridless optical networks and consequently help fulfill future traffic demands.
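As a rough illustration of why ELM training is so much cheaper than ANN training (the 1.9% processing-time figure above), the sketch below shows a minimal ELM symbol classifier: the hidden layer is random and fixed, so training reduces to a single closed-form least-squares solve rather than iterative backpropagation. This is not the implementation used in the paper; the class name, hidden-layer size, activation, and noisy 16-QAM toy data are all assumptions chosen for illustration.

```python
import numpy as np

class ELMClassifier:
    """Minimal extreme learning machine: random fixed hidden layer,
    output weights obtained in closed form via the pseudoinverse."""

    def __init__(self, n_hidden=64, rng=None):
        self.n_hidden = n_hidden
        self.rng = rng or np.random.default_rng(0)

    def fit(self, X, y):
        n_features = X.shape[1]
        n_classes = int(y.max()) + 1
        # Hidden-layer weights and biases are drawn at random and never trained.
        self.W = self.rng.standard_normal((n_features, self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        H = np.tanh(X @ self.W + self.b)   # hidden-layer activations
        T = np.eye(n_classes)[y]           # one-hot target matrix
        # One least-squares solve replaces iterative backpropagation,
        # which is the source of ELM's training-time advantage over an ANN.
        self.beta = np.linalg.pinv(H) @ T
        return self

    def predict(self, X):
        H = np.tanh(X @ self.W + self.b)
        return np.argmax(H @ self.beta, axis=1)

# Toy usage: decide 16-QAM symbol classes from noisy I/Q samples,
# where the additive noise stands in for ICI-like distortion.
rng = np.random.default_rng(1)
levels = np.array([-3.0, -1.0, 1.0, 3.0])
ideal = np.array([(i, q) for i in levels for q in levels])   # 16 ideal points
labels = rng.integers(0, 16, size=4000)
X = ideal[labels] + 0.3 * rng.standard_normal((4000, 2))
elm = ELMClassifier(n_hidden=128).fit(X[:3000], labels[:3000])
acc = (elm.predict(X[3000:]) == labels[3000:]).mean()
print(f"symbol decision accuracy: {acc:.3f}")
```

Under either training strategy described above, only the `fit` step would differ: the updating strategy would fit on a received training symbol sequence, while the characterization strategy would fit offline on data from a prior system characterization.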