Abstract

Objective: Chondrocyte viability (CV) can be measured with a label-free method based on second harmonic generation (SHG) and two-photon excitation autofluorescence (TPAF) imaging. To automate the image processing for label-free CV measurement, we previously demonstrated a two-step deep-learning method: Step 1 used a U-Net to segment the lacuna area in SHG images; Step 2 used dual CNNs to count live cells and the total number of cells in cell clusters extracted from TPAF images. This study aims to develop a one-step deep-learning method to improve the efficiency of CV measurement.

Method: TPAF and SHG images were acquired simultaneously from rat and porcine cartilage samples using two-photon microscopes and merged to form RGB color images, with the red, green, and blue channels assigned to the emission band of oxidized flavoproteins, the emission band of reduced nicotinamide adenine dinucleotide, and the SHG signal, respectively. Based on Mask R-CNN, we designed a deep-learning network for CV measurement, along with a denoising variant that uses Wiener deconvolution.

Results: Using training and test datasets from rat and porcine cartilage, we demonstrated that Mask R-CNN-based networks can segment and classify individual cells in a single processing step. The absolute error of the CV measurement (the difference between the measured and ground-truth CV) reaches 0.01 for the Mask R-CNN with Wiener deconvolution denoising and 0.08 without it; the error of the previous two-step CV networks is 0.18, significantly larger than that of the Mask R-CNN methods.

Conclusions: Mask R-CNN-based deep-learning networks improve the efficiency and accuracy of label-free CV measurement.
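The following is a minimal sketch, not the authors' released code, of the single-step pipeline outlined above: the three emission bands are merged into an RGB image, each channel is optionally denoised by Wiener deconvolution, a Mask R-CNN instance segmenter classifies individual cells, and CV is computed as the fraction of live cells among all detected cells. The class ids (1 = live, 2 = dead), the flat point-spread function, and the use of torchvision's stock maskrcnn_resnet50_fpn in place of the paper's customized network are illustrative assumptions.

```python
import numpy as np
import torch
from skimage.restoration import wiener
from torchvision.models.detection import maskrcnn_resnet50_fpn

LIVE, DEAD = 1, 2  # assumed label ids for the two cell classes


def merge_channels(fad, nadh, shg):
    """Stack the three bands into an RGB image:
    red = oxidized flavoproteins, green = NAD(P)H, blue = SHG."""
    rgb = np.stack([fad, nadh, shg], axis=-1).astype(np.float32)
    return rgb / max(rgb.max(), 1e-8)  # normalize to [0, 1]


def wiener_denoise(rgb, psf, balance=0.1):
    """Channel-wise Wiener deconvolution (the 'denoising' variant applies this
    before segmentation); psf and balance are assumed parameters."""
    return np.stack([wiener(rgb[..., c], psf, balance) for c in range(3)], axis=-1)


def measure_cv(rgb, model, score_thresh=0.5):
    """Run Mask R-CNN on the merged image and return chondrocyte viability,
    i.e. live cells / all detected cells."""
    x = torch.from_numpy(rgb).permute(2, 0, 1)  # HWC -> CHW
    with torch.no_grad():
        pred = model([x])[0]
    labels = pred["labels"][pred["scores"] > score_thresh]
    n_live = int((labels == LIVE).sum())
    n_total = int(labels.numel())
    return n_live / n_total if n_total else float("nan")


if __name__ == "__main__":
    # Dummy 256x256 channels stand in for the real TPAF/SHG acquisitions.
    fad, nadh, shg = (np.random.rand(256, 256) for _ in range(3))
    rgb = merge_channels(fad, nadh, shg)

    psf = np.ones((5, 5)) / 25.0  # assumed flat PSF, for illustration only
    rgb = wiener_denoise(rgb, psf)

    # Stand-in backbone; the paper's network is a customized Mask R-CNN.
    model = maskrcnn_resnet50_fpn(num_classes=3).eval()
    cv = measure_cv(rgb, model)
    print(f"measured chondrocyte viability: {cv:.2f}")
```

In this sketch the absolute error reported in the Results would be obtained as abs(cv - ground_truth_cv), with the ground-truth CV taken from labeled reference counts.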
