Nowadays, medical imaging has become an indispensable tool for diagnosing pathologies and for preventive health care. In addition, medical images are transmitted over all types of computer networks, many of them insecure or susceptible to interception, leaving sensitive patient information vulnerable. Image watermarking is therefore a popular approach for embedding copyright protection, an Electronic Patient Record (EPR), institution information, or another digital image into medical images. In the medical field, however, the watermark must preserve the diagnostic quality of the image. Moreover, the inserted watermark must be robust to both intentional and unintentional attacks that attempt to delete or weaken it. This work presents a bio-inspired watermarking algorithm applied to retinal fundus images used in computer-aided retinopathy diagnosis. The proposed system uses the Steered Hermite Transform (SHT), an image model inspired by the Human Visual System (HVS), as a spread-spectrum watermarking technique, leveraging its bio-inspired nature to make the watermark imperceptible. In addition, the Singular Value Decomposition (SVD) is used to make the watermark robust against attacks. The watermark is embedded into the RGB fundus images through the blood-vessel patterns extracted by the SHT, using the luma band of the Y'CbCr color model. Furthermore, the watermark is encrypted with the Jigsaw Transform (JST) to add an extra level of security. The proposed approach was tested on the public MESSIDOR-2 dataset, which contains 1748 8-bit color images of different sizes presenting different Diabetic Retinopathy (DR) grades.
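The SVD embedding step can be illustrated with a generic additive scheme over singular values. This is only a minimal sketch of the SVD component: the paper's actual method combines it with the SHT blood-vessel patterns and the JST-encrypted watermark, and the strength parameter `alpha` below is a hypothetical value, not one reported in the work.

```python
import numpy as np

def svd_embed(luma, watermark, alpha=0.05):
    """Embed a watermark into a host band by perturbing its singular values.

    luma      : 2-D float array (host image band, e.g. the Y' channel)
    watermark : 2-D float array of the same shape
    alpha     : embedding strength (hypothetical value, for illustration only)
    """
    # Decompose host and watermark; keep the host's singular vectors.
    U, S, Vt = np.linalg.svd(luma, full_matrices=False)
    _, Sw, _ = np.linalg.svd(watermark, full_matrices=False)
    # Additively spread the watermark's singular values into the host's.
    return U @ np.diag(S + alpha * Sw) @ Vt
```

With `alpha = 0` the host band is reconstructed exactly (up to floating-point error), so `alpha` controls the trade-off between imperceptibility and robustness.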
In the experiments, we first evaluated the proposed bio-inspired watermarking method over the entire MESSIDOR-2 dataset, showing that the embedding process does not degrade the quality of the fundus images or of the extracted watermark: average Peak Signal-to-Noise Ratio (PSNR) values exceed 53 dB for the watermarked images and 32 dB for the extracted watermark across the entire dataset. We also tested the method against image-processing and geometric attacks, successfully extracting the watermark, and compared it with state-of-the-art methods, obtaining competitive results. Finally, we classified the DR grade of the fundus images using four trained deep learning models (VGG16, ResNet50, InceptionV3, and YOLOv8) to compare inference results on the original and watermarked images. The results show that DR grading remains consistent between the non-marked and marked images.
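The PSNR figures quoted above follow the standard definition for 8-bit images; a minimal implementation of that metric (not the authors' evaluation code) is:

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB (standard definition, 8-bit peak by default)."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")  # identical images: no distortion
    return 10.0 * np.log10(peak ** 2 / mse)
```

Higher values indicate that the watermarked image is closer to the original; values above roughly 40 dB are generally considered visually imperceptible distortion.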