Blend & Predict: Domain-Adaptable Few-Shot Learning for Microscopy Imaging
Accurate classification of microscopy images is critical for the analysis of biological samples. The availability of large-scale labeled datasets has driven recent progress in training large, deep classification models in the medical imaging domain, but methods that cater to a variety of microscopy modalities across a range of biological samples and length scales remain scarce. A key reason is that curating labeled microscopy data is costly, requiring tedious and time-consuming effort from both AI and domain experts. We propose a novel few-shot learning technique, "Blend & Predict", that trains on small labeled datasets and performs inference on unlabeled data. We evaluated the performance and generalizability of our approach on three medical image datasets, each acquired with a different microscopy modality and addressing a different biomedical question on different samples. We achieved results comparable to state-of-the-art models such as GoogLeNet, VGG16, and ResNet50, which were trained on large datasets.