Applications involving very large data sets have become increasingly common over the past decade, yet many existing learning algorithms do not scale well to such data. Averaged n-Dependence Estimators (AnDE) supports flexible out-of-core learning by varying the value of $n$ (the number of super-parents), making it especially well suited to learning from large data. In this paper, we propose a sample-based attribute selection technique for AnDE. It requires only one additional pass through the training data, during which a multitude of approximate AnDE models are built and efficiently assessed by leave-one-out cross validation. Evaluating candidates on a sample rather than the full data reduces training time. Experiments on 15 large data sets demonstrate that the proposed technique significantly reduces AnDE's error at the cost of a modest increase in training time. This efficient and scalable out-of-core approach delivers performance superior or comparable to that of typical in-core Bayesian network classifiers.
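The key efficiency idea above — that count-based models can be assessed by leave-one-out cross validation almost for free, by decrementing the held-out instance's counts rather than retraining — can be sketched as follows. This is a minimal illustration, not the paper's implementation: naive Bayes (AnDE with $n = 0$) stands in for AnDE, the greedy backward-elimination search is an assumption, and all function names (`loocv_error`, `select_attributes`) are hypothetical.

```python
import numpy as np

def loocv_error(X, y, attrs, n_classes, n_vals):
    """Leave-one-out error of a count-based naive Bayes (AnDE with n=0)
    restricted to the attribute subset `attrs`. Because the model is a
    table of counts, each held-out instance is 'removed' by decrementing
    its counts, so no retraining is needed per fold."""
    N = X.shape[0]
    class_counts = np.zeros(n_classes)
    joint = {a: np.zeros((n_classes, n_vals[a])) for a in attrs}
    for i in range(N):                      # single counting pass
        class_counts[y[i]] += 1
        for a in attrs:
            joint[a][y[i], X[i, a]] += 1
    errors = 0
    for i in range(N):
        # leave instance i out by decrementing its counts
        class_counts[y[i]] -= 1
        for a in attrs:
            joint[a][y[i], X[i, a]] -= 1
        # Laplace-smoothed log posterior for each class
        log_post = np.log(class_counts + 1.0)
        for a in attrs:
            log_post += (np.log(joint[a][:, X[i, a]] + 1.0)
                         - np.log(class_counts + n_vals[a]))
        if np.argmax(log_post) != y[i]:
            errors += 1
        # restore the counts
        class_counts[y[i]] += 1
        for a in attrs:
            joint[a][y[i], X[i, a]] += 1
    return errors / N

def select_attributes(X, y, n_classes, n_vals):
    """Greedy backward elimination (an illustrative search strategy):
    drop an attribute whenever doing so lowers LOOCV error."""
    attrs = list(range(X.shape[1]))
    best = loocv_error(X, y, attrs, n_classes, n_vals)
    improved = True
    while improved and len(attrs) > 1:
        improved = False
        for a in list(attrs):
            trial = [b for b in attrs if b != a]
            err = loocv_error(X, y, trial, n_classes, n_vals)
            if err < best:
                best, attrs, improved = err, trial, True
                break
    return attrs, best
```

In this sketch, running the search on a subsample of the training data rather than the full set gives the training-time reduction described above, at the cost of a noisier error estimate for each candidate model.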