We present a method that, given two views of the same scene captured by two cameras with unknown settings and internal parameters, corrects the colors of one image so that it appears to have been captured under the settings of the other camera. Our method can handle any standard non-linearly encoded image (gamma-corrected, logarithmically encoded, or otherwise) without requiring prior knowledge of the encoding. To this end, it relies on two key observations: first, the camera imaging pipeline from RAW to sRGB can be well approximated by just a per-pixel shading term and a color transformation matrix; and second, to correct the images we only need to estimate a single matrix (which combines information from both original images) and an approximation of the shading term (which emulates the non-linearity). Our proposed method is fast, and its results are free of spurious artifacts. It outperforms the state of the art among methods that do not require knowledge of the encoding, and it competes with, and in some cases surpasses, methods that do use information about the image encoding.
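A minimal sketch of the two observations, using illustrative notation rather than the paper's own symbols: let $\mathbf{E}(p)$ denote the RAW triplet at pixel $p$, $\mathbf{I}(p)$ the rendered non-linear (e.g., sRGB) triplet, $\alpha(p)$ a per-pixel shading factor, and $M$ a $3\times3$ color matrix. The first observation states that the camera pipeline is approximately
\[
\mathbf{I}(p) \;\approx\; \alpha(p)\, M\, \mathbf{E}(p),
\]
and the second that one image can be corrected toward the other by a single matrix $H$ (combining the color transformations of both cameras) together with an estimate $\hat{\alpha}(p)$ of the shading term,
\[
\mathbf{I}_1^{\mathrm{corr}}(p) \;\approx\; \hat{\alpha}(p)\, H\, \mathbf{I}_1(p),
\]
where $H$ and $\hat{\alpha}$ here are placeholder names for the quantities the abstract says must be estimated.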