In most medical treatments, intravenous catheterization is a crucial first step, in which health practitioners locate a superficial vein for blood sampling or medication delivery. In some patients these veins are hard to localize because of physiological characteristics such as dark skin tone, scars, or vein depth, which often leads to multiple needle-insertion attempts. This causes pain, delayed treatment, bleeding, and even infection. To reduce these risks, an automated vein detection method is needed that can efficiently segment the veins and produce reliable results for cannulation. Several imaging modalities are used for this purpose, including photoacoustic, trans-illumination, ultrasound, and near-infrared imaging. Among these, Near-Infrared (NIR) imaging is considered the most suitable because of its low cost and non-ionizing nature. Over the past few years, subcutaneous vein localization using NIR has attracted increasing attention in health care and biomedical engineering. The proposed work therefore uses NIR images for forearm subcutaneous vein segmentation. This paper presents a deep learning approach based on Generative Adversarial Networks (GANs) for the segmentation/localization of forearm veins. GANs have recently shown promising results in medical imaging; they are used for unsupervised feature learning and image-to-image translation, and they generate realistic outputs by learning a mapping from one data domain to another. Since GANs can produce state-of-the-art results, we propose a Pix2Pix GAN for forearm vein segmentation. The proposed algorithm is trained and tested on a forearm subcutaneous vein image dataset. The proposed model outperforms traditional approaches, achieving a mean accuracy of 0.971 and a sensitivity of 0.862. The Dice coefficient and Intersection over Union (IoU) scores are 0.962 and 0.936 respectively, which are better than those of state-of-the-art methods.
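As a minimal illustration of the reported evaluation metrics (not the authors' evaluation code), the sketch below computes the Dice coefficient and IoU for a pair of binary segmentation masks; the array names, sizes, and threshold are assumptions for demonstration only.

```python
import numpy as np

def dice_and_iou(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7):
    """Compute Dice coefficient and IoU for binary masks of the same shape.

    `pred` and `target` are expected to hold 0/1 values, e.g. a thresholded
    vein-segmentation output and its ground-truth annotation.
    """
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    dice = (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
    iou = (intersection + eps) / (union + eps)
    return dice, iou

# Hypothetical usage with a predicted probability map and a ground-truth mask.
prob_map = np.random.rand(256, 256)           # stand-in for a segmentation network output
gt_mask = np.random.rand(256, 256) > 0.5      # stand-in for an annotated vein mask
dice, iou = dice_and_iou(prob_map > 0.5, gt_mask)
print(f"Dice: {dice:.3f}, IoU: {iou:.3f}")
```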