The random grid search (RGS) is a simple but efficient stochastic algorithm for finding optimal cuts; it was developed in the context of the search for the top quark at Fermilab in the mid-1990s. The algorithm, and the associated code, have recently been enhanced with two new cut types, one of which has been used successfully in searches for supersymmetry at the Large Hadron Collider. We describe the RGS optimization algorithm along with the recent developments, which we illustrate with two examples from particle physics: one optimizes the selection of vector boson fusion events in the four-lepton decay mode of the Higgs boson, while the other optimizes searches for supersymmetry using boosted objects and the razor variables.

Program summary

Program title: Random Grid Search
Program Files doi: http://dx.doi.org/10.17632/mpcrnd7xb4.1
Licensing provisions: GNU General Public License 3 (GPL)
Programming language: C++, Python
Nature of problem: We address the problem of scanning a large number of thresholds (cuts) on discriminating variables in order to find the set that maximizes some measure of discrimination between classes of objects, for example between signal and background events at the Large Hadron Collider (LHC).
Solution method: The cuts to be scanned are determined by the distribution of the objects that are the focus of an analysis. For example, if one is searching for supersymmetric events at the LHC, the cuts are determined by the predicted distributions of the variables that discriminate between the supersymmetric signal and the standard model background. In effect, we search for cuts using importance sampling determined by the signal distribution, thereby mitigating the curse of dimensionality (see the first sketch following this summary).
Additional comments including restrictions and unusual features: For cases with exceptionally large numbers of events, the program may take several hours to run. However, the computation can be trivially parallelized, with no change to the program, by splitting a large file into N smaller files and running the same set of cuts over each of the N files; the counts associated with each cut are then summed over the N files (see the second sketch below).
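
To make the solution method concrete, the following is a minimal sketch in Python of the core RGS idea: candidate cut points are drawn from the signal sample itself, each candidate is applied as a one-sided threshold on every variable, and the candidate maximizing a simple significance measure is kept. The function name rgs, the figure of merit s/sqrt(s+b), and the use of one-sided "greater than" cuts throughout are illustrative assumptions, not the actual interface of the Random Grid Search program, which supports additional cut types.

import numpy as np

def rgs(signal, background, n_cuts=1000, rng=None):
    # Hypothetical sketch of the RGS idea, not the program's actual API.
    # Each randomly chosen signal event defines a cut point, so cut
    # points are sampled in proportion to the signal density
    # (importance sampling), which mitigates the curse of dimensionality.
    rng = np.random.default_rng() if rng is None else rng
    idx = rng.choice(len(signal), size=min(n_cuts, len(signal)), replace=False)
    cut_points = signal[idx]
    best = None
    for cut in cut_points:
        s = np.sum(np.all(signal > cut, axis=1))      # signal events passing
        b = np.sum(np.all(background > cut, axis=1))  # background events passing
        z = s / np.sqrt(s + b) if s + b > 0 else 0.0  # simple figure of merit
        if best is None or z > best[0]:
            best = (z, cut, s, b)
    return best

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    sig = rng.normal(1.0, 1.0, size=(5000, 2))    # toy signal, shifted upward
    bkg = rng.normal(0.0, 1.0, size=(50000, 2))   # toy background
    z, cut, s, b = rgs(sig, bkg, n_cuts=500, rng=rng)
    print(f"best cut at {cut}, Z = {z:.2f}, s = {s}, b = {b}")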
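
The trivial parallelization described under "Additional comments" can be sketched in the same spirit. The helper counts_for_cuts and the in-memory splitting via numpy are illustrative assumptions standing in for running the unchanged program over N separate files: because each cut's pass count is a simple sum, counts accumulated over the splits equal the counts from a single pass over the full sample.

import numpy as np

def counts_for_cuts(events, cut_points):
    # Hypothetical helper: number of events passing each cut point,
    # where passing means exceeding the threshold in every variable.
    return np.array([np.sum(np.all(events > c, axis=1)) for c in cut_points])

rng = np.random.default_rng(0)
background = rng.normal(0.0, 1.0, size=(100000, 2))  # stand-in for a large file
cut_points = rng.normal(1.0, 1.0, size=(200, 2))     # e.g. drawn from signal

# Split the large sample into N pieces (N files in practice), run the
# same set of cuts over each piece, and sum the per-cut counts.
chunks = np.array_split(background, 4)
total = sum(counts_for_cuts(chunk, cut_points) for chunk in chunks)

# The summed counts match a single pass over the full sample.
assert np.array_equal(total, counts_for_cuts(background, cut_points))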