Abstract

Combining materials science and artificial intelligence (AI) offers great potential for the extensive quantitative analysis and processing of material characterization data associated with high-throughput experiments. However, owing to the complex and diverse morphology of microstructural constituents, developing AI models requires substantial annotation, and the resulting models are usually task-specific with limited generalizability across applications. Here, we present a universal self-supervised learning framework for microscopic images. Our framework learns generalizable representations from unlabelled images and provides pixel-wise segmentation for quantitative microstructure analysis in a variety of materials science applications. Specifically, the framework learns features from a single image by means of self-supervised learning and adapts them to a series of related tasks. We show that our method consistently outperforms comparable supervised and weakly supervised learning models across a range of applications. Our approach provides a generalizable solution that improves model performance and alleviates the annotation workload of experts, enabling practical AI applications in microscopic imaging.
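The sketch below illustrates the two-stage workflow the abstract describes, assuming a PyTorch implementation: self-supervised pretraining of a small encoder on augmented crops of a single unlabelled micrograph (a SimSiam-style objective is used here purely for illustration, as the abstract does not specify the pretext task), followed by reuse of the pretrained encoder for pixel-wise segmentation. The file name `micrograph.png`, the `SmallEncoder` and `SegHead` modules, and all hyperparameters are hypothetical placeholders, not the authors' released code.

```python
# Hypothetical sketch of self-supervised pretraining on one unlabelled micrograph,
# followed by adaptation of the learned encoder to pixel-wise segmentation.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.transforms as T
from PIL import Image

class SmallEncoder(nn.Module):
    """Tiny convolutional encoder standing in for the paper's backbone (assumed)."""
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, dim, 3, stride=2, padding=1), nn.ReLU(),
        )
    def forward(self, x):
        return self.net(x)  # (B, dim, H/4, W/4)

def simsiam_loss(p, z):
    """Negative cosine similarity with a stop-gradient on the target branch."""
    return -F.cosine_similarity(p, z.detach(), dim=-1).mean()

# --- Stage 1: self-supervised pretraining on a single unlabelled image -------
augment = T.Compose([
    T.RandomResizedCrop(96, scale=(0.2, 1.0)),
    T.RandomHorizontalFlip(),
    T.ToTensor(),
])
image = Image.open("micrograph.png").convert("L")   # hypothetical file name

encoder = SmallEncoder()
projector = nn.Sequential(nn.Flatten(), nn.Linear(64 * 24 * 24, 128))
predictor = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 128))
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(projector.parameters()) + list(predictor.parameters()),
    lr=1e-3,
)

for step in range(100):                 # illustrative number of steps
    v1 = augment(image).unsqueeze(0)    # two augmented views of the same image
    v2 = augment(image).unsqueeze(0)
    z1, z2 = projector(encoder(v1)), projector(encoder(v2))
    loss = 0.5 * (simsiam_loss(predictor(z1), z2) + simsiam_loss(predictor(z2), z1))
    opt.zero_grad(); loss.backward(); opt.step()

# --- Stage 2: adapt the pretrained encoder to pixel-wise segmentation --------
class SegHead(nn.Module):
    """Lightweight decoder that upsamples encoder features to per-pixel logits."""
    def __init__(self, encoder, n_classes=2):
        super().__init__()
        self.encoder = encoder
        self.head = nn.Conv2d(64, n_classes, 1)
    def forward(self, x):
        f = self.encoder(x)
        return F.interpolate(self.head(f), size=x.shape[-2:], mode="bilinear",
                             align_corners=False)

model = SegHead(encoder)
# A handful of annotated crops (random tensors used here as placeholders) is then
# enough to fine-tune the segmentation head with a standard cross-entropy loss.
x = torch.rand(4, 1, 96, 96)
y = torch.randint(0, 2, (4, 96, 96))
seg_loss = F.cross_entropy(model(x), y)
```

The design point mirrored here is that only the lightweight segmentation head requires annotated data, so a small number of labelled crops can adapt the self-supervised representations to a new characterization task.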
