Synthetic aperture sonar (SAS) imagery is typically generated from data collected with unmanned underwater vehicles. The prohibitive cost of collecting underwater data, together with the need for well-controlled factors such as collection geometry and object configuration, has motivated the development of a benchtop in-air circular acoustic data collection framework. This setup makes it practical to explore a multitude of parameters, including waveform type, object shape, and material, that are difficult to vary in underwater measurement scenarios. It also makes it practical to explore various representations of the collected acoustic data that emphasize different aspects of the information embedded in the acoustic signal, which machine learning algorithms can exploit. Signal processing and feature organization are critical to improving the performance of machine learning algorithms. For example, the geometric scattering response of an object is well represented in spatial imagery, with sharp contrast in pixel intensity between the object and the surrounding environment, whereas the spatial spectrum of the complex SAS image better represents the aspect-dependent spectral response of the object, which helps discriminate objects of the same shape but different material. We will discuss the relationship between the choice of representation and discriminatory information through illustrative classification problems.
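
The spatial-spectrum representation mentioned above can be sketched minimally as the magnitude of the centered 2-D Fourier transform of a complex-valued image chip. This is only an illustrative assumption of the representation described, using a synthetic random chip in place of real SAS data; the function name `spatial_spectrum` and the chip dimensions are hypothetical.

```python
import numpy as np

def spatial_spectrum(complex_image: np.ndarray) -> np.ndarray:
    """Magnitude of the centered 2-D spatial spectrum of a complex image chip."""
    # 2-D FFT of the complex-valued image, shifted so DC sits at the center,
    # which places low spatial frequencies in the middle of the output array.
    spectrum = np.fft.fftshift(np.fft.fft2(complex_image))
    return np.abs(spectrum)

# Synthetic stand-in for a complex SAS image chip (not real sonar data).
rng = np.random.default_rng(0)
chip = rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64))

spec = spatial_spectrum(chip)
```

Under this sketch, a classifier could consume either the spatial chip (emphasizing geometric contrast) or `spec` (emphasizing spectral content), matching the two representations contrasted in the text.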