Abstract
Here, a streamlined, scalable laboratory approach is presented that enables the analysis of medium-to-large datasets. The approach combines data management, artificial intelligence, containerization, cluster orchestration, and quality control in a unified analytic pipeline. Together, these building blocks form an analysis approach that researchers can readily apply to medium-to-large datasets to accelerate the pace of research. The proposed framework is applied to a project that counts the number of plasmonic nanoparticles bound to peripheral blood mononuclear cells in dark-field microscopy images. Using the techniques presented in this article, the images are processed automatically overnight, without user interaction, streamlining the path from experiment to conclusions.
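To illustrate the kind of per-image analysis this pipeline automates, the sketch below counts bright, point-like spots (candidate plasmonic nanoparticles, which appear as bright scatterers in dark field) using Laplacian-of-Gaussian blob detection from scikit-image. The file name, function name, and parameter values are illustrative assumptions, not the implementation described in the article.

```python
# Hedged sketch: count bright spots (candidate plasmonic nanoparticles)
# in a dark-field microscopy image. All thresholds and parameters are
# illustrative assumptions, not values from the paper.
from skimage import io, img_as_float
from skimage.feature import blob_log

def count_nanoparticles(image_path, min_sigma=1.0, max_sigma=4.0, threshold=0.05):
    """Return the number of LoG blobs detected in a dark-field image."""
    image = img_as_float(io.imread(image_path, as_gray=True))
    # Laplacian-of-Gaussian responds to bright, roughly circular spots,
    # which is how point-like plasmonic scatterers appear in dark field.
    blobs = blob_log(image, min_sigma=min_sigma, max_sigma=max_sigma,
                     threshold=threshold)
    return len(blobs)

if __name__ == "__main__":
    print(count_nanoparticles("cell_darkfield.tif"))  # hypothetical file name
```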
Highlights
Advances in computer technology, data acquisition hardware, laser technology, and automated imaging platforms have transformed the field of biomedical research
We propose a data processing framework based on deep learning, containerization, and orchestration for the analysis of medium-to-large datasets
We present a concrete example of these concepts: a novel image processing pipeline that extracts cells and cell contours from 3D microscopy image stacks using deep convolutional neural networks and analyzes individual cells using computer vision techniques, as sketched after this list
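To make the two-stage design concrete, the sketch below shows the per-cell analysis stage under the assumption that a convolutional neural network (not shown) has already produced a binary cell mask for one image plane; connected-component labeling and region properties from scikit-image then yield per-cell contours and measurements. The function name, the dictionary fields, and the demo data are illustrative assumptions, not the authors' code.

```python
# Hedged sketch of the per-cell analysis stage: assumes a CNN (not shown)
# has already produced a binary segmentation mask for one image plane.
import numpy as np
from skimage import measure

def analyze_cells(mask, intensity_image):
    """Label cells in a binary mask and report per-cell measurements."""
    labels = measure.label(mask)  # connected-component labeling
    results = []
    for region in measure.regionprops(labels, intensity_image=intensity_image):
        # Trace the contour of this cell on its own binary sub-mask.
        contours = measure.find_contours(
            (labels == region.label).astype(float), 0.5)
        results.append({
            "label": region.label,
            "area_px": region.area,
            "mean_intensity": region.mean_intensity,
            "n_contour_points": sum(len(c) for c in contours),
        })
    return results

if __name__ == "__main__":
    # Tiny synthetic demo: one square "cell" in a 64x64 frame.
    mask = np.zeros((64, 64), dtype=bool)
    mask[10:20, 10:20] = True
    print(analyze_cells(mask, np.random.rand(64, 64)))
```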
Summary
High-throughput acquisition devices leave researchers with a deluge of data, and automated processing is often a challenging task. We propose a data processing framework based on deep learning, containerization, and orchestration for the analysis of medium-to-large datasets. The framework is applied to microscopy images of cells.