Sand (2,000–50 µm), silt (50–2 µm), and clay (<2 µm) contents have traditionally been determined by sedimentation methods (i.e., the hydrometer and pipette methods), but laser diffraction and visible and near-infrared (vis-NIR) or mid-infrared (MIR) spectroscopy are increasingly used. Understanding the comparability and limitations of these methods is therefore necessary. We compared these methods using 121 soil samples from the Central Sand Plains of Wisconsin, for total sand content as well as five sand fractions (very coarse, coarse, medium, fine, and very fine). Spectroscopic methods are not hindered by assumptions of particle sphericity or constant particle density; yet these model-based methods require calibration against laboratory methods that do rely on those assumptions. Vis-NIR and MIR spectroscopic predictions of total sand content differed depending on whether the laboratory calibration data were obtained by the hydrometer or the pipette method. The laser method could not accurately measure the sand fractions, and the fractions were poorly to moderately predicted by vis-NIR (R² = 0.01 to 0.61) and slightly better by MIR (R² = 0.23 to 0.71). The hydrometer method overestimated total sand content relative to the pipette method, and the laser method was only moderately correlated with both. Spectroscopic modeling of total sand content was very good (R² = 0.91 to 0.94), but results were affected by the laboratory calibration method: models calibrated with reference hydrometer data overestimated total sand content. We conclude that MIR predictions of sand content provide the best results, but models must be calibrated with the pipette method for soils with high sand contents.