Abstract

In the era of digitized images, the goal is to extract information from them and create new knowledge through Computer Vision, Machine Learning, and Deep Learning techniques. This enables the use of images for early diagnosis and subsequent treatment of a wide range of diseases. In the dermatological field, deep neural networks are used to distinguish between melanoma and non-melanoma images. In this paper, we highlight two essential points of melanoma detection research. The first is how even a simple modification of the dataset parameters can change the accuracy of classifiers; in this context, we investigated issues related to Transfer Learning. Following the results of this first analysis, we suggest that continuous training-test iterations are needed to provide robust prediction models. The second point is the need for a more flexible system architecture that can handle changes in the training datasets. To this end, we propose the development and implementation of a hybrid architecture based on Cloud, Fog, and Edge Computing to provide a melanoma detection service based on clinical and dermoscopic images. At the same time, this architecture must cope with the amount of data to be analyzed by reducing the running time of continuous retraining. We demonstrate this with experiments carried out on a single machine and on different distributed systems, showing how a distributed approach delivers the output in significantly less time.
