Terrestrial laser scanners (TLS) are contact-free measuring sensors that record dense point clouds of objects or scenes by acquiring coordinates and an intensity value for each point. The resulting point clouds are scattered and noisy. Approximating the point cloud with a mathematical surface, rather than working on the points directly, is an efficient way to reduce data storage and to structure the point cloud, turning “data” into “information”. Applications include rigorous statistical testing for deformation analysis in the context of landslide monitoring. To reach an optimal approximation, classification and segmentation algorithms can identify and remove inhomogeneous structures, such as trees or bushes, so that a smooth and accurate mathematical surface of the ground can be obtained. In this contribution, we compare methods for classifying TLS point clouds with the aim of guiding the reader through the existing algorithms. Besides traditional point cloud filtering methods, we analyze machine learning classification algorithms based on manually extracted point cloud features, as well as PointNet++, a deep learning approach that extracts features automatically. We have intentionally chosen strategies that are easy to implement and understand, so that our results are reproducible for similar point clouds. We show that each method has advantages and drawbacks, depending on user criteria such as the computational time, the classification accuracy required, whether features must be extracted manually, and whether prior information is needed. We highlight that filtering methods are advantageous for the application at hand and, as an illustration, perform a mathematical surface approximation. For this purpose, we have chosen locally refined B-splines, which have been shown to provide an optimal and computationally manageable approximation of TLS point clouds.
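As a hedged illustration of the feature-based machine learning route mentioned above, the sketch below computes simple eigenvalue-based geometric features (linearity, planarity, sphericity, height) from local point neighbourhoods and trains a random forest on a synthetic ground/vegetation example. This is not the comparison pipeline of the paper: the feature set, the neighbourhood size k, the use of scikit-learn, and the synthetic data are all illustrative assumptions.

```python
# Minimal sketch (not the authors' implementation): covariance-based geometric
# features from local k-neighbourhoods, fed to a random forest classifier.
# Feature choices, k, and the synthetic point cloud are illustrative assumptions.
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.ensemble import RandomForestClassifier

def geometric_features(points, k=20):
    """Per-point eigenvalue features: linearity, planarity, sphericity, height."""
    nn = NearestNeighbors(n_neighbors=k).fit(points)
    _, idx = nn.kneighbors(points)
    feats = np.empty((len(points), 4))
    for i, neigh in enumerate(idx):
        cov = np.cov(points[neigh].T)                # 3x3 local covariance
        w = np.sort(np.linalg.eigvalsh(cov))[::-1]   # eigenvalues, descending
        w = np.maximum(w, 1e-12)
        feats[i] = [(w[0] - w[1]) / w[0],            # linearity
                    (w[1] - w[2]) / w[0],            # planarity
                    w[2] / w[0],                     # sphericity
                    points[i, 2]]                    # height as an extra cue
    return feats

# Synthetic example: a planar "ground" patch and a scattered "vegetation" blob.
rng = np.random.default_rng(0)
ground = np.c_[rng.uniform(0, 10, (500, 2)), rng.normal(0, 0.02, 500)]
veg = rng.uniform(0, 10, (500, 3)) * [1, 1, 0.3] + [0, 0, 0.5]
X = geometric_features(np.vstack([ground, veg]))
y = np.r_[np.zeros(500), np.ones(500)]               # 0 = ground, 1 = vegetation

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```

In a real comparison such as the one described above, the features would be computed on the TLS scans themselves and the labels would come from manual annotation or a reference filtering, with held-out data used to assess the classification accuracy.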