Sequential adaptive sampling strategies attempt to generate surrogate model (SM) training datasets efficiently and accurately by limiting the number of required function evaluations. Among these techniques, model-independent approaches have attracted attention during the last decade because they do not rely on a global SM to supervise the sampling process, relieving the user of the burden of selecting an appropriate SM formulation before sampling begins. In this study, we propose a new model-independent sequential adaptive sampling technique called nearest neighbors adaptive sampling (NNAS). The NNAS formulation introduces a refinement metric based on local linear models that leads to a quasilinear scaling of the algorithm's complexity with the number of samples. Additionally, the exploration-refinement balance is achieved by a stochastic Pareto-ranking-based selection criterion that attempts to maximize exploration and refinement simultaneously. NNAS is tested on ten two-dimensional and one ten-dimensional analytic functions, a five-dimensional engineering test case based on the XRotor solver (Drela and Youngren 2014), and a two-response two-dimensional analytic problem. The results show that NNAS is more computationally efficient than comparable methods without a significant increase in the number of required samples.
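To make the exploration-refinement mechanism concrete, the following Python sketch illustrates the general idea of combining a nearest-neighbor, local-linear refinement score with a distance-based exploration score through stochastic Pareto-front selection. It is an illustrative assumption, not the published NNAS formulation: the function names, the use of k-d trees, the unit-hypercube domain, and the specific scores are stand-ins chosen only to show the flow of such an algorithm.

```python
# Hypothetical sketch of Pareto-ranked exploration/refinement sampling.
# The scores below are simplified proxies, not the metrics defined in the paper.
import numpy as np
from scipy.spatial import cKDTree


def exploration_score(candidates, samples):
    """Distance from each candidate to its nearest existing sample (larger = more exploratory)."""
    tree = cKDTree(samples)
    dist, _ = tree.query(candidates, k=1)
    return dist


def refinement_score(candidates, samples, values, k=6):
    """RMS residual of a local linear fit through the k nearest samples.

    A large residual suggests local nonlinearity, so refining there is assumed
    worthwhile (an illustrative proxy only). k should exceed dim + 1."""
    tree = cKDTree(samples)
    _, idx = tree.query(candidates, k=k)
    scores = np.empty(len(candidates))
    for i, nb in enumerate(idx):
        A = np.hstack([samples[nb], np.ones((k, 1))])      # affine design matrix
        coef, *_ = np.linalg.lstsq(A, values[nb], rcond=None)
        resid = values[nb] - A @ coef
        scores[i] = np.sqrt(np.mean(resid ** 2))            # local RMS misfit
    return scores


def pareto_front(obj):
    """Indices of non-dominated rows of `obj` (both objectives maximized)."""
    n = len(obj)
    dominated = np.zeros(n, dtype=bool)
    for i in range(n):
        dominated[i] = np.any(np.all(obj >= obj[i], axis=1) &
                              np.any(obj > obj[i], axis=1))
    return np.flatnonzero(~dominated)


def next_sample(samples, values, n_candidates=200, rng=None):
    """Pick the next point stochastically from the Pareto front of
    (exploration, refinement) scores over random candidate locations."""
    rng = np.random.default_rng(rng)
    dim = samples.shape[1]
    candidates = rng.random((n_candidates, dim))            # unit-hypercube domain assumed
    obj = np.column_stack([exploration_score(candidates, samples),
                           refinement_score(candidates, samples, values)])
    front = pareto_front(obj)
    return candidates[rng.choice(front)]
```

Under these assumptions, starting from an initial design X (n × d, scaled to the unit hypercube) and responses y, repeated calls to next_sample(X, y) would grow the training dataset one point at a time, with the Pareto front supplying the stochastic trade-off between exploring sparse regions and refining regions where a local linear model fits poorly.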