In the near future, NASA's Earth Observing System (EOS) platforms will produce enormous amounts of remote sensing image data that will be stored in the EOS Data Information System. For the past several years, the Intelligent Data Management group at Goddard's Information Science and Technology Office/935 has been researching techniques for automatically cataloging and characterizing image data (ADCC) from EOS into a distributed database (Cromp, Campbell, & Short, 1992). The purpose of this work is to enable scientists to retrieve data based upon the contents of the imagery. The ability to automatically classify imagery is key to the success of content-based search. We report results from experiments applying a novel machine-learning framework, based on Set Enumeration (SE) trees (Rymon, 1993), to the ADCC domain. Following the design of Chettri, Cromp, and Birmingham (1992), we experiment with two images: one taken from the Blackhills region in South Dakota, the other from the Washington, DC area. In a classical machine-learning experimental setup, an image's pixels are randomly partitioned into a training set (including ground truth or survey data) and a testing set. The prediction model is built using the pixels in the training set, and its performance is estimated using the testing set. With the Blackhills image, we perform various experiments, achieving an accuracy level of 83.2%, compared to 72.7% reported by Chettri et al. using a Back Propagation Neural Network (BPNN) and 65.3% using a Gaussian Maximum Likelihood Classifier (GMLC). With the Washington, DC image, however, we were only able to achieve 71.4%, compared with 67.7% reported for the BPNN model and 62.3% for the GMLC.
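The evaluation protocol described above (random pixel-wise partition into training and testing sets, then accuracy measured on the held-out pixels) can be sketched in Python. As an illustrative stand-in for the classifiers compared in the text, the sketch implements a minimal Gaussian Maximum Likelihood Classifier; the three-band synthetic "pixels" and two land-cover classes are hypothetical data, not the Blackhills or Washington, DC imagery:

```python
import numpy as np

def gmlc_fit(X, y):
    """Fit per-class mean and covariance for a minimal Gaussian
    Maximum Likelihood Classifier (equal class priors assumed)."""
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        mu = Xc.mean(axis=0)
        # Small ridge keeps the covariance invertible.
        cov = np.cov(Xc, rowvar=False) + 1e-6 * np.eye(X.shape[1])
        params[c] = (mu, np.linalg.inv(cov), np.log(np.linalg.det(cov)))
    return params

def gmlc_predict(params, X):
    """Assign each pixel to the class maximizing the Gaussian log-likelihood."""
    classes = sorted(params)
    scores = []
    for c in classes:
        mu, inv_cov, log_det = params[c]
        d = X - mu
        # Log-likelihood up to an additive constant:
        # -0.5 * (log|Sigma| + d^T Sigma^-1 d)
        ll = -0.5 * (log_det + np.einsum('ij,jk,ik->i', d, inv_cov, d))
        scores.append(ll)
    return np.array(classes)[np.argmax(np.stack(scores), axis=0)]

# Synthetic image pixels: 3 spectral bands, 2 classes (hypothetical).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (500, 3)),
               rng.normal(2.5, 1.0, (500, 3))])
y = np.repeat([0, 1], 500)

# Random pixel-wise partition into training and testing sets.
idx = rng.permutation(len(y))
train, test = idx[:700], idx[700:]

params = gmlc_fit(X[train], y[train])
acc = (gmlc_predict(params, X[test]) == y[test]).mean()
print(f"held-out accuracy: {acc:.3f}")
```

The SE-tree framework itself induces a symbolic classifier rather than a parametric one, but the surrounding train/test protocol is the same: only the model-fitting and prediction steps change.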