Abstract

We describe a novel method to achieve a universal, massive, and fully automated analysis of cell motility behaviours, starting from time-lapse microscopy images. The approach was inspired by the recent successes in applying machine learning to style recognition in paintings and to artistic style transfer. The originality of the method relies i) on the generation of atlases from the collection of single-cell trajectories, in order to visually encode the multiple descriptors of cell motility, and ii) on the application of a pre-trained Deep Learning Convolutional Neural Network architecture to extract, from these visual atlases, the relevant features to be used for classification tasks. Validation tests were conducted on two different cell motility scenarios: 1) immune cells in 3D biomimetic gels, co-cultured with breast cancer cells in organ-on-chip devices, upon treatment with an immunotherapy drug; 2) clustered prostate cancer cells in Petri dishes, upon treatment with a chemotherapy drug. In each scenario, single-cell trajectories are classified with high accuracy according to the presence or absence of the drug. This original approach demonstrates the existence of universal features in cell motility (a so-called "motility style"), which are identified by the DL approach with the aim of deciphering the unknown message carried by cell trajectories.

Highlights

  • We describe a novel method to achieve a universal, massive, and fully automated analysis of cell motility behaviours, starting from time-lapse microscopy images

  • Cell motility is fundamental for life along the entire evolutionary tree, being involved in the collective motion of bacteria[1], in the morphogenesis of multicellular organisms[2], in adult physiological processes[3] and in some pathologies[4,5,6,7]

  • We addressed the question of whether Deep Learning (DL) could extract motility styles, i.e. the paintings drawn by cells as they move


Introduction

The first step of our Deep Tracking method relies on assembling the individual cell tracks collected for each video (in the range of hundreds) into single images (atlases), which visually encode a variety of cell motility properties (Fig. 1C). Looking at these motility atlases, as when viewing a painting, humans can perceive and measure some features, such as the directionality or speed of cell movements; we reasoned that machine learning[13] would be able to enormously expand the number of features within the atlases that can be used for the recognition of motility styles. The second step of the Deep Tracking method exploits a DL architecture, the widely used pre-trained Convolutional Neural Network AlexNet[14] (see Steps 4 and 5 in the Methods section), to extract these "unperceivable-to-humans" features from the visual atlases of each experiment and to use them to classify the cell motility behaviours (Fig. 1D), by implementing so-called transfer learning[15]. As a validation scenario, prostate cancer cells (PC-3 cell line) were treated or not with a standard chemotherapeutic agent (etoposide); these cells naturally form clusters in 2D Petri dishes, and the collective motility behaviours within the clusters are inhibited by the drug[16].
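
To make the two steps concrete, the following is a minimal sketch of how such a pipeline could be assembled from off-the-shelf components: trajectories are rendered into atlas images with matplotlib, a pre-trained AlexNet from torchvision (with its final classification layer removed) serves as a frozen feature extractor, and a simple linear classifier is trained on the resulting 4096-dimensional features. The trajectory format, the rendering choices, the helper names (render_atlas, atlas_features) and the logistic-regression head are illustrative assumptions, not the authors' exact implementation.

    import numpy as np
    import matplotlib.pyplot as plt
    import torch
    from PIL import Image
    from sklearn.linear_model import LogisticRegression
    from torchvision import models, transforms

    def render_atlas(trajectories, out_png, size_px=512):
        # Step 1 (assumed form): draw all (x, y) tracks of one video into a
        # single image, so that the atlas visually encodes the direction,
        # displacement and tortuosity of the cell movements.
        fig, ax = plt.subplots(figsize=(size_px / 100, size_px / 100), dpi=100)
        for track in trajectories:          # track: (n_frames, 2) array
            ax.plot(track[:, 0], track[:, 1], linewidth=0.8)
        ax.set_axis_off()
        fig.savefig(out_png, bbox_inches="tight", pad_inches=0)
        plt.close(fig)

    # Step 2: transfer learning with a pre-trained AlexNet used as a frozen
    # feature extractor; dropping the last fully connected layer leaves a
    # 4096-dimensional descriptor per image.
    alexnet = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
    alexnet.classifier = alexnet.classifier[:-1]
    alexnet.eval()

    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    def atlas_features(png_path):
        # Map one atlas image to its AlexNet feature vector.
        img = Image.open(png_path).convert("RGB")
        with torch.no_grad():
            return alexnet(preprocess(img).unsqueeze(0)).squeeze(0).numpy()

    # Classify treated vs. untreated atlases with a linear model on top of
    # the frozen features (atlas_paths and labels are placeholders):
    # X = np.stack([atlas_features(p) for p in atlas_paths])
    # clf = LogisticRegression(max_iter=1000).fit(X, labels)
    # print("accuracy:", clf.score(X, labels))

Freezing the pre-trained network and training only a shallow classifier on its features is the standard formulation of transfer learning when, as here, the number of atlases per experiment is small compared to the millions of images the network was originally trained on.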

