Abstract

Organoid cultures are proving to be powerful in vitro models that closely mimic the cellular constituents of their native tissue. Organoids are typically expanded and cultured in a 3D environment using either naturally derived or synthetic extracellular matrices. Assessing the morphology and growth characteristics of these cultures has been difficult due to the many imaging artifacts that accompany the corresponding images. Unlike single cell cultures, there are no reliable automated segmentation techniques that allow for the localization and quantification of organoids in their 3D culture environment. Here we describe OrgaQuant, a deep convolutional neural network implementation that can locate and quantify the size distribution of human intestinal organoids in brightfield images. OrgaQuant is an end-to-end trained neural network that requires no parameter tweaking; thus, it can be fully automated to analyze thousands of images with no user intervention. To develop OrgaQuant, we created a unique dataset of manually annotated human intestinal organoid images with bounding boxes and trained an object detection pipeline using TensorFlow. We have made the dataset, trained model and inference scripts publicly available along with detailed usage instructions.

Highlights

  • Brightfield images of organoid cultures suffer from numerous imaging artifacts that make conventional image processing extremely difficult

  • Object detection and localization are complex problems in computer vision

  • Faster Region-based Convolutional Neural Network (Faster R-CNN) can use a detection model based on several different architectures, including ResNet 101[23] and Inception v2[24]
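A core post-processing step in Faster R-CNN-style detectors is non-maximum suppression, which discards overlapping candidate boxes in favor of the highest-scoring one. The sketch below is an illustrative NumPy implementation, not code from the paper; the `[y1, x1, y2, x2]` box layout and the 0.5 IoU threshold are assumptions chosen for the example.

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression.

    boxes  : (N, 4) array of [y1, x1, y2, x2] corners
    scores : (N,) detection confidences
    Returns the indices of the boxes to keep, highest score first.
    """
    order = np.argsort(scores)[::-1]  # process highest-scoring boxes first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        # Intersection rectangle between box i and every remaining box.
        yy1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        xx1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        yy2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        xx2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.maximum(0.0, yy2 - yy1) * np.maximum(0.0, xx2 - xx1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        # Drop boxes that overlap box i too strongly; keep the rest in play.
        order = rest[iou <= iou_thresh]
    return keep
```

The greedy loop keeps at most one box per cluster of heavily overlapping detections, which is why a well-tuned IoU threshold matters for densely packed organoids.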

Introduction

Brightfield images of organoid cultures suffer from numerous imaging artifacts that make conventional image processing extremely difficult. Measuring and counting these organoids is highly inefficient, as a typical experiment produces hundreds of images, each containing tens to hundreds of organoids. Borten et al. released an elegant open-source software package, OrganoSeg[17], that addresses some of these challenges, but it still relies on conventional image processing and requires tweaking multiple parameters for each set of images acquired under similar optical conditions. Building on the idea of transfer learning[20], we take a pre-trained neural network and further train it on organoid images to draw a bounding box around each organoid with high precision. A ready-to-run cloud implementation is available at www.scipix.io.
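Once bounding boxes are predicted, turning them into a size distribution is straightforward. The minimal sketch below (not the authors' code) approximates each organoid's diameter as the mean of its bounding-box height and width, converted to physical units; the `[y1, x1, y2, x2]` box layout and the 1.5 µm/pixel calibration are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Assumed calibration for illustration only; use your microscope's actual value.
UM_PER_PIXEL = 1.5

def organoid_diameters(boxes_px, um_per_pixel=UM_PER_PIXEL):
    """Estimate organoid diameters in microns from pixel-space bounding boxes.

    boxes_px : iterable of [y1, x1, y2, x2] boxes in pixels
    Returns a (N,) array of approximate diameters, taking each organoid's
    diameter as the mean of its box height and width.
    """
    boxes = np.asarray(boxes_px, dtype=float)
    heights = boxes[:, 2] - boxes[:, 0]
    widths = boxes[:, 3] - boxes[:, 1]
    return (heights + widths) / 2.0 * um_per_pixel

# Example: two hypothetical detections.
boxes = [[10, 10, 110, 130], [50, 200, 150, 290]]
print(organoid_diameters(boxes))  # diameters in microns
```

With diameters in hand, a histogram of the returned array gives the per-image size distribution with no user intervention.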
