Abstract

As the complexity of microfluidic experiments and the associated image data volumes scale, traditional feature-extraction approaches struggle to keep pace in both detection quality and analysis-pipeline throughput. Deep neural networks trained to detect specific objects are rapidly emerging as data-gathering tools that can match or outperform the conventional analysis methods used in microfluidic emulsion science. We demonstrate that two families of neural networks, You Only Look Once (YOLOv3, YOLOv5) and Faster R-CNN, can be trained on a dataset comprising droplets generated across several microfluidic experiments and systems. The breadth of droplets used for training and validation produces model weights that transfer readily to emulsion systems at large, while entirely circumventing manual feature extraction. In flow-cell experiments comprising more than 10,000 mono- or polydisperse droplets, the models match or exceed the statistical accuracy of classical implementations of the Hough transform and widely used ImageJ plugins. In more complex chip architectures that simulate porous media, the image data typically requires heavy pre-processing before valid measurements can be extracted; here the models handled raw input and produced size distributions accurate to ±2 μm at intermediate magnifications. This fidelity extends to foreign datasets not included in training, such as micrographs of various emulsified systems. Implementing these neural networks as the sole feature-extraction tools in microfluidic systems not only streamlines the data pipeline but also opens the door to live detection and autonomous microfluidic experimental platforms, thanks to inference rates exceeding 100 frames per second.
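To make the comparison concrete, the sketch below contrasts a classical circular-Hough baseline (OpenCV) with detections from YOLOv5 weights on a single droplet micrograph, then converts both outputs to droplet diameters. This is a minimal illustration under stated assumptions, not the authors' pipeline: the image file, the weight file `droplet_best.pt`, and the pixel calibration `UM_PER_PIXEL` are hypothetical placeholders.

```python
import cv2
import numpy as np
import torch

# Assumed calibration (micrometres per pixel) at an intermediate magnification.
UM_PER_PIXEL = 0.5

frame = cv2.imread("droplets.png")                 # hypothetical input micrograph
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
gray = cv2.medianBlur(gray, 5)                     # Hough needs denoising; YOLO takes the raw frame

# Classical baseline: circular Hough transform. Returns (1, N, 3) array of
# (x, y, radius) or None if nothing is found.
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=15,
                           param1=100, param2=30, minRadius=5, maxRadius=60)
hough_diams = (2.0 * circles[0, :, 2] * UM_PER_PIXEL
               if circles is not None else np.empty(0))

# Neural detector: YOLOv5 loaded via torch.hub with custom droplet weights
# ("droplet_best.pt" stands in for weights trained as described above).
model = torch.hub.load("ultralytics/yolov5", "custom", path="droplet_best.pt")
det = model(frame[..., ::-1]).xyxy[0].cpu().numpy()   # BGR->RGB; rows: x1, y1, x2, y2, conf, cls
widths = det[:, 2] - det[:, 0]
heights = det[:, 3] - det[:, 1]
yolo_diams = 0.5 * (widths + heights) * UM_PER_PIXEL  # mean box side ~ droplet diameter

for name, d in (("Hough", hough_diams), ("YOLOv5", yolo_diams)):
    if d.size:
        print(f"{name}: n={d.size}, mean={d.mean():.1f} um, std={d.std():.1f} um")
    else:
        print(f"{name}: no droplets detected")
```

For throughput on the order of the reported 100+ frames per second, frames would be batched and run on a GPU; the single-image call above is kept for clarity.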
