The present study proposes a new software program that helps researchers identify rock paintings in digital images, rapidly producing high-quality documentation in a user-friendly way. The three RGB colour channels of the digital image are first decorrelated and then stretched, a well-known technique used by remote-sensing specialists for over thirty years. In contrast to the approaches previously developed specifically for rock art, several data-whitening algorithms are used at this step: (regular) principal component analysis, zero-phase component analysis, Cholesky decomposition, and independent component analysis. These transformations produce different arrangements of the colour information, which nevertheless share some important properties (e.g. the covariance matrix of the new channels equals the identity matrix). The decorrelated data, once stretched and scaled to fit the RGB space, are then converted into various colour spaces (selected from among the most popular): XYZ, HLS, HSV, LAB (CIELAB), Luv, CMY(K), YCrCb, and YUV. The most subtle colour variations are better perceived in some of these newly produced, contrasted, false-colour images. The researcher can then take advantage of supervised machine-learning algorithms to isolate painted figures. At this step, binary pixel classification is performed by logistic regression, support vector machines, or k-nearest neighbours, optionally combined with confident learning. No complex tuning is needed at any point in the procedure, which lasts a few minutes at most, and a posteriori cleaning of the produced document is minimal. The software, written in Python, is provided both as a stand-alone executable program for Windows, for broader dissemination, and as open-source code, which can therefore be adapted to the evolving needs of the community.
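The decorrelate-and-stretch step can be sketched as follows. This is a minimal NumPy illustration of PCA and ZCA whitening followed by a per-channel linear stretch, not the program's actual code; the function name `decorrelation_stretch` and the simple min-max rescaling are assumptions made for the example.

```python
import numpy as np

def decorrelation_stretch(img, mode="zca"):
    """Whiten the RGB channels of an image, then stretch each channel to [0, 255].

    img: (H, W, 3) uint8 array; mode: "pca" or "zca" whitening.
    (Illustrative sketch only; the published tool also offers Cholesky and ICA.)
    """
    x = img.reshape(-1, 3).astype(np.float64)
    xc = x - x.mean(axis=0)                      # centre the channels
    cov = np.cov(xc, rowvar=False)               # 3x3 channel covariance
    evals, evecs = np.linalg.eigh(cov)           # eigendecomposition
    d = np.diag(1.0 / np.sqrt(evals + 1e-12))
    if mode == "pca":
        w = d @ evecs.T                          # PCA whitening: rotate, rescale
    else:
        w = evecs @ d @ evecs.T                  # ZCA: rotate back towards RGB
    y = xc @ w.T                                 # whitened data, covariance = I
    # Stretch each decorrelated channel independently to fill 0..255
    lo, hi = y.min(axis=0), y.max(axis=0)
    scale = np.where(hi > lo, hi - lo, 1.0)
    y = (y - lo) / scale * 255.0
    return y.reshape(img.shape).astype(np.uint8)
```

Because the per-channel stretch is an affine map applied to already-whitened data, the output channels remain (empirically) uncorrelated, which is what makes faint pigment traces stand out after the false-colour conversion.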
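The subsequent conversion into alternative colour spaces might look like the following sketch, shown here only for HSV using the standard-library `colorsys` module. The per-pixel loop is kept deliberately simple for clarity; a real tool would use a vectorized converter such as OpenCV's `cv2.cvtColor`.

```python
import colorsys
import numpy as np

def to_hsv(img):
    """Convert an RGB uint8 image to HSV channels in [0, 1].

    Illustrative only: loops over pixels with colorsys.rgb_to_hsv;
    the published software supports many more colour spaces.
    """
    out = np.empty(img.shape, dtype=float)
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            out[r, c] = colorsys.rgb_to_hsv(*(img[r, c] / 255.0))
    return out
```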
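The supervised classification step could be sketched, for the logistic-regression case, roughly as follows. The interface is hypothetical: it assumes the user supplies a handful of pixel coordinates labelled as figure or background, and scikit-learn's `LogisticRegression` stands in for the classifiers named in the text.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def classify_pixels(img, paint_coords, bg_coords):
    """Binary pixel classification from a few user-labelled samples.

    img: (H, W, 3) uint8 array; paint_coords / bg_coords: lists of
    (row, col) pixels marked as painted figure / background.
    (Hypothetical interface; SVM or k-NN could be substituted here.)
    """
    h, w, _ = img.shape
    feats = img.reshape(-1, 3).astype(float) / 255.0   # RGB features in [0, 1]
    flat = lambda coords: [r * w + c for r, c in coords]
    X = np.vstack([feats[flat(paint_coords)], feats[flat(bg_coords)]])
    y = np.array([1] * len(paint_coords) + [0] * len(bg_coords))
    clf = LogisticRegression().fit(X, y)
    return clf.predict(feats).reshape(h, w)            # 1 = painted figure
```

The returned binary mask isolates the painted figures; in practice it would be trained on a decorrelation-stretched image rather than the raw RGB values, so that pigment and support are better separated in feature space.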