Received signal strength (RSS) has been one of the most widely used observables for location purposes owing to its availability on almost every wireless device. However, the volatile nature of RSS tends to yield unreliable location solutions. IEEE 802.11mc enabled the use of the round-trip time (RTT) for positioning, which is expected to be a more consistent observable for location purposes. This approach has gained support from several companies, such as Google, which introduced the feature in the Android OS. As a result, RTT estimation is now available in several recent off-the-shelf devices, opening a wide range of new approaches for computing location. However, RTT has traditionally been applied to multilateration solutions, and few works assess its feasibility as an accurate feature in positioning methods based on classification algorithms. This paper attempts to fill that gap by investigating the performance of several classification models in terms of accuracy and positioning error. Performance is assessed using different AP layouts, distinct AP vendors, and different frequency bands. The accuracy and precision of the RTT-based position estimates are better than those obtained with RSS in all the studied scenarios, especially when few APs are available. In addition, all the considered ML algorithms perform well, so it is not necessary to resort to more complex solutions (e.g., SVM) when simpler ones (e.g., nearest-neighbor classifiers) achieve similar results in terms of both accuracy and location error.
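To make the classification-based formulation concrete, the following is a minimal sketch (not the paper's method or data): each reference point is treated as a class, the feature vector holds RTT-derived ranges to a set of APs, and a plain k-nearest-neighbor classifier predicts the reference point. The AP layout, noise model, and all numbers are hypothetical assumptions for illustration only.

```python
# Hypothetical sketch of fingerprint positioning as classification:
# classes = reference points, features = noisy RTT-derived AP ranges.
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)

# Assumed 2-D layout: 4 reference points (classes) and 3 APs.
ref_points = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0], [5.0, 5.0]])
aps = np.array([[0.0, 2.5], [5.0, 2.5], [2.5, 5.0]])

def rtt_ranges(pos, noise_std):
    """True AP distances plus Gaussian noise, mimicking RTT ranging error."""
    d = np.linalg.norm(aps - pos, axis=1)
    return d + rng.normal(0.0, noise_std, size=d.shape)

# Training set: several noisy RTT fingerprints per reference point.
X_train, y_train = [], []
for label, rp in enumerate(ref_points):
    for _ in range(50):
        X_train.append(rtt_ranges(rp, noise_std=0.5))
        y_train.append(label)
X_train = np.array(X_train)
y_train = np.array(y_train)

def knn_predict(x, k=5):
    """k-nearest-neighbor classification in the RTT feature space."""
    idx = np.argsort(np.linalg.norm(X_train - x, axis=1))[:k]
    return Counter(y_train[idx]).most_common(1)[0][0]

# A query fingerprint measured near reference point 3 (at [5, 5]).
query = rtt_ranges(np.array([4.8, 5.1]), noise_std=0.5)
print(knn_predict(query))
```

The simplicity of the classifier reflects the abstract's conclusion: with well-separated reference points, a nearest-neighbor rule over RTT features already performs competitively, without requiring heavier models such as SVMs.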