In this study, a novel approach is introduced for face recognition that is robust to changes in illumination. The method is based on the reflectance-luminance model and incorporates local matching with a weighted voting scheme to suppress artifacts in the reflectance images. A total of 37 linear and nonlinear filters, including high-pass and low-pass filters, were tested for extracting the reflectance component of the image, which remains invariant to illumination changes. Among these, the maximum filter, a simple filter with low computational complexity, yielded the best results in extracting the illumination invariants. The illumination invariants obtained with this method outperformed the quotient image (QI), self-quotient image (SQI), and image-enhancement techniques in terms of recognition accuracy. Importantly, the proposed method requires no prior knowledge of facial shape or illumination conditions and can be applied to each image independently. Unlike many existing methods, it neither relies on multiple images during the training stage nor requires any parameter selection to generate the illumination invariants. To further improve robustness to illumination, a weighted voting scheme was introduced: regions of the image that may degrade recognition due to poor illumination, occlusion, noise, or a lack of distinctive information are identified using predefined factors such as grayscale mean, image entropy, and mutual information. The proposed method was also compared with other face recognition methods in the presence of occlusions and showed promising results, outperforming existing approaches. The Python implementation successfully detects occluded faces and estimates gender and age in videos, with face-matching accuracy between 80.9% and 96.9%, depending on proximity.
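The core idea of extracting an illumination invariant from the reflectance-luminance model can be sketched as follows. This is a hypothetical illustration, not the paper's exact procedure: luminance is assumed to vary slowly across the face, so a local maximum filter approximates it, and dividing it out leaves an (approximately) illumination-invariant reflectance. The window size and `eps` regularizer are illustrative choices.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def illumination_invariant(image, window=7, eps=1e-6):
    """Estimate luminance with a local maximum filter and divide it out.

    Sketch of the reflectance-luminance idea: I = R * L, with L slowly
    varying. A local maximum approximates L, so R ~ I / L is roughly
    invariant to illumination. `window` and `eps` are assumed values.
    """
    img = image.astype(np.float64)
    luminance = maximum_filter(img, size=window)  # smooth upper envelope
    return img / (luminance + eps)

# A uniformly scaled image (simulating a global illumination change)
# yields nearly the same invariant.
rng = np.random.default_rng(0)
face = rng.random((32, 32))
inv_bright = illumination_invariant(2.0 * face)
inv_dark = illumination_invariant(0.5 * face)
assert np.allclose(inv_bright, inv_dark, rtol=1e-4)
```

Because both the numerator and the locally estimated luminance scale together under a global illumination change, the ratio is unchanged up to the small regularizer.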
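The weighted voting stage can likewise be sketched. This is an illustrative scheme under stated assumptions, not the paper's exact formulation: the face is split into a grid of local regions, and each region's vote weight combines its grayscale mean (penalizing very dark or very bright patches) and its entropy (penalizing flat, low-information patches). The grid size, histogram binning, and combination rule are all assumptions; the paper's mutual-information factor is omitted here for brevity.

```python
import numpy as np

def region_weights(image, grid=(2, 2), entropy_bins=32):
    """Assign voting weights to local regions of a [0, 1] grayscale face.

    Hypothetical sketch: weight = mean_term * normalized_entropy, where
    mean_term peaks at mid-grey and drops toward 0 or 1, and entropy is
    computed from a per-region histogram. Weights are normalized to sum
    to 1 so they can act as votes in local matching.
    """
    h, w = image.shape
    gh, gw = grid
    weights = np.zeros(grid)
    for i in range(gh):
        for j in range(gw):
            patch = image[i * h // gh:(i + 1) * h // gh,
                          j * w // gw:(j + 1) * w // gw]
            hist, _ = np.histogram(patch, bins=entropy_bins, range=(0.0, 1.0))
            p = hist / hist.sum()
            entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
            mean_term = 1.0 - 2.0 * abs(patch.mean() - 0.5)
            weights[i, j] = mean_term * entropy / np.log2(entropy_bins)
    return weights / weights.sum()

# A flat dark quadrant (e.g. shadow or occlusion) receives less weight
# than a textured, well-lit quadrant.
rng = np.random.default_rng(1)
img = np.zeros((32, 32))
img[16:, 16:] = rng.random((16, 16))
w = region_weights(img, grid=(2, 2))
assert w[1, 1] > w[0, 0]
```

Down-weighting such regions is what lets the voting stage tolerate occlusion and local lighting failures: a corrupted patch simply contributes little to the final match score.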