Abstract

Traditional microscopes have long been used in hematology for blood analysis, but over the last decade many laboratories have begun to replace them with digital microscope systems. The appearance of blood cells in the digital images is very important to the end user; ideally, they should look identical to how they would appear in a traditional microscope. The digital microscope systems on the market today use different optics and illumination, so images from different systems do not look the same. This causes problems in several ways; for example, cell classification networks must be trained separately for each system. In this paper we investigate the possibility of using deep learning to transform images between digital systems. The main focus is on a cyclic network setup in which images can be transformed between two systems. We present two different networks: a cyclic network with a perceptual loss based on the VGG-16 network, and a conditional version of a cyclic generative adversarial network (GAN). With these networks we obtain results that surpass previous methods for transforming blood cell images.
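
As an illustration of the perceptual-loss idea mentioned above, here is a minimal PyTorch sketch of a VGG-16 feature-space loss; the class name `VGGPerceptualLoss`, the chosen layer cutoff, and the usage names in the comment are assumptions for illustration, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torchvision.models as models


class VGGPerceptualLoss(nn.Module):
    """Compares two images in VGG-16 feature space rather than pixel space."""

    def __init__(self, layer_index: int = 16):
        super().__init__()
        vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
        # Keep convolutional layers up to (but not including) layer_index,
        # roughly corresponding to the relu3_3 activation.
        self.features = vgg.features[:layer_index].eval()
        for p in self.features.parameters():
            p.requires_grad_(False)
        # ImageNet normalisation expected by the pretrained VGG-16 network.
        self.register_buffer("mean", torch.tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1))
        self.register_buffer("std", torch.tensor([0.229, 0.224, 0.225]).view(1, 3, 1, 1))

    def forward(self, generated: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        g = (generated - self.mean) / self.std
        t = (target - self.mean) / self.std
        return nn.functional.l1_loss(self.features(g), self.features(t))


# Hypothetical usage inside a cyclic training loop, where G_AB and G_BA
# translate images between microscope systems A and B:
#   perceptual = VGGPerceptualLoss()
#   loss_cycle = perceptual(G_BA(G_AB(real_A)), real_A)
```

In a cyclic setup, such a loss penalises the reconstruction `G_BA(G_AB(real_A))` for differing from the original image in feature space, which tends to preserve cell structure better than a plain pixel-wise loss.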
