Nonsystematic search algorithms seem, in general, to be well suited to large-scale problems with many solutions. However, they tend to perform badly on problems with few solutions and, being incomplete, cannot be used at all for insoluble problems. Here we present a new algorithm, learn-SAT, that, although based on nonsystematic search, is complete. Completeness is achieved through a process of no-good learning called learning-by-merging, which requires exponential space in the worst case. We show, nevertheless, that learn-SAT performs very well on certain SAT problems that are tightly constrained or insoluble: its performance generally approximates that of the best SAT algorithms, and it does much better at lower clause densities. Learn-SAT also retains much of the efficiency of nonsystematic search on large-scale problems with many solutions, at least relative to backtrack search algorithms. These results indicate that the memory burden imposed by no-good learning is not generally a problem for learn-SAT, which is perhaps surprising in view of previous work. Even more surprising is the scalability of learn-SAT: for some types of problem it scales very much better than the nearest competitive algorithm, although there are other types for which this is not the case. The performance profile of learn-SAT emerges from an experimental methodology related to the one outlined by Mammen and Hogg (1997).
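The abstract does not spell out the algorithm itself, so the following is only a minimal Python sketch of the general idea it names: making a nonsystematic (local) search complete by recording no-goods. The function name learn_sat, the parameter flips_per_try, and the use of full refuted assignments as no-goods are illustrative assumptions, not the paper's actual learning-by-merging scheme, which merges no-goods rather than storing raw assignments; the sketch does, however, make the exponential worst-case space requirement explicit.

```python
import random

def learn_sat(clauses, n_vars, flips_per_try=100):
    """Illustrative sketch: local search made complete by no-good learning.

    clauses: list of clauses, each a list of DIMACS-style literals
    (positive int v = variable v true, -v = variable v false).
    Returns a satisfying assignment as a tuple of bools, or None (UNSAT).
    NOTE: hypothetical sketch, not the paper's learning-by-merging.
    """
    def violated(assign):
        """Return some clause falsified by assign, or None if satisfied."""
        for cls in clauses:
            if not any(assign[abs(lit) - 1] == (lit > 0) for lit in cls):
                return cls
        return None

    # No-goods: assignments proven not to be solutions. Storing them
    # explicitly needs up to 2**n_vars entries, i.e. exponential space.
    nogoods = set()

    while len(nogoods) < 2 ** n_vars:   # some assignment remains unrefuted
        assign = [random.choice([True, False]) for _ in range(n_vars)]
        for _ in range(flips_per_try):  # ordinary WalkSAT-style descent
            bad = violated(assign)
            if bad is None:
                return tuple(assign)    # every clause satisfied: a model
            # Flip one variable occurring in the violated clause.
            i = abs(random.choice(bad)) - 1
            assign[i] = not assign[i]
        if violated(assign) is None:
            return tuple(assign)
        # Stuck at a point that falsifies some clause: learn it as a
        # no-good so it can no longer count as an unrefuted assignment.
        nogoods.add(tuple(assign))

    return None  # all 2**n_vars assignments refuted: instance insoluble
```

Because restarts are random, this sketch terminates with probability 1 rather than deterministically; the recorded no-goods supply the completeness the abstract claims, since the search can only answer "unsatisfiable" once every assignment has been refuted.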