Artificial intelligence (AI) is a useful tool for extracting intelligence from remote sensing data, supporting the discovery and exploitation of synthetic aperture radar (SAR) imagery. The principal challenge of applying AI to SAR is obtaining sufficiently large, comprehensive sets of labeled training data, because SAR data varies significantly with sensor characteristics, processing parameters, and collection plans. This work evaluates the impact of such variations on classification accuracy by classifying pixels of SAR satellite imagery into land, water, and ship classes under varying conditions (area of interest, incidence angle, spatial resolution, etc.). Results showed that variations in the area of interest (AOI), incidence angle, and spatial resolution affected the classification accuracy obtained with an artificial neural network (ANN). The work also demonstrated that an ANN trained on SAR imagery collected under one set of conditions can be used to infer training labels for SAR imagery collected under different conditions, provided the change in conditions produces less than 5% classification error or increases class separation for some (or all) of the classes being discriminated.
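For a concrete picture of the classification step described above, the following minimal sketch illustrates per-pixel classification of SAR backscatter features into land, water, and ship classes with a small feed-forward network, followed by inference on a scene collected under different conditions. The feature choice (pixel intensity plus a local-mean statistic), the scikit-learn MLPClassifier, and all array names are illustrative assumptions; the paper does not specify the network architecture, features, or software used.

```python
# Minimal sketch of per-pixel SAR classification into land / water / ship.
# Assumptions (not from the paper): features are per-pixel backscatter intensity
# plus a local-mean texture statistic; the classifier is a small feed-forward
# network from scikit-learn. Labels: 0 = land, 1 = water, 2 = ship.
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler


def pixel_features(sar_image: np.ndarray, window: int = 5) -> np.ndarray:
    """Stack per-pixel intensity and a local mean into an (N, 2) feature matrix."""
    intensity = sar_image.astype(np.float32)
    local_mean = uniform_filter(intensity, size=window)
    return np.stack([intensity.ravel(), local_mean.ravel()], axis=1)


# Hypothetical labeled training scene from one collection condition (one AOI,
# incidence angle, and resolution); in practice these come from real SAR data.
sar_train = np.random.rand(256, 256).astype(np.float32)   # placeholder scene
labels_train = np.random.randint(0, 3, size=(256, 256))   # placeholder class map

clf = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=300, random_state=0),
)
clf.fit(pixel_features(sar_train), labels_train.ravel())

# Inference on a scene collected under different conditions (different AOI,
# incidence angle, or spatial resolution). The predicted map can serve as
# candidate training labels if the condition change keeps classification
# error low, as the abstract's 5% criterion suggests.
sar_new = np.random.rand(256, 256).astype(np.float32)     # placeholder scene
predicted_labels = clf.predict(pixel_features(sar_new)).reshape(sar_new.shape)
```

The pipeline standardizes features before the network, a common choice for per-pixel classifiers; any texture features, network depth, or preprocessing used in the actual study may differ.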