Abstract
Premise: X-ray microcomputed tomography (microCT) can be used to measure 3D leaf internal anatomy, providing a holistic view of tissue organization. Previously, the substantial time needed for segmenting multiple tissues limited this technique to small data sets, restricting its utility for phenotyping experiments and limiting confidence in the inferences of these studies due to low replication numbers.

Methods and Results: We present a Python codebase for random forest machine learning segmentation and 3D leaf anatomical trait quantification that dramatically reduces the time required to process single-leaf microCT scans into detailed segmentations. By training the model on each scan using six hand-segmented image slices out of >1500 in the full leaf scan, it achieves >90% accuracy in background and tissue segmentation.

Conclusions: Overall, this 3D segmentation and quantification pipeline can reduce one of the major barriers to using microCT imaging in high-throughput plant phenotyping.
Highlights
X-ray microcomputed tomography (microCT) can be used to measure 3D leaf internal anatomy, providing a holistic view of tissue organization.
We present a Python codebase for random forest machine learning segmentation and 3D leaf anatomical trait quantification that dramatically reduces the time required to process single-leaf microCT scans into detailed segmentations.
The pipeline was built for our projects using X-ray synchrotron-based microCT imaging and uses the freely available, open-source software ImageJ (Schneider et al., 2012).
Summary
The following pipeline was built for our projects using X-ray synchrotron-based microCT imaging and uses the freely available, open-source software ImageJ (Schneider et al., 2012). The same filters are applied to a map of the distance from the top and bottom edge of the image to its center, and to gridrec and phase-contrast slices that have been Sobel filtered to emphasize edges. These feature layer arrays are used, along with the local thickness map, to train the random forest classification model by predicting pixel values on the desired number of hand-labeled training slices, which are randomly selected within the full hand-labeled stack. In the mesophyll cell class, recall was generally >90% even when training on fewer than three manually segmented slices; this means that >90% of all mesophyll cell pixels were correctly identified as cells, suggesting the trained random forest model is highly sensitive to cells. Using the testing procedure presented here on scans from previous scanning endeavors could help guide future microCT setups to acquire and extract high-quality biological data.
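The per-scan workflow above (build per-pixel feature layers, fit a random forest on a few hand-labeled slices, then evaluate per-class recall) can be sketched in Python. This is a minimal illustration using scikit-image and scikit-learn, not the authors' actual code: the feature set (raw intensity, Sobel edges, Gaussian smoothing, an edge-distance map) and the function names `feature_layers` and `train_on_slices` are simplified stand-ins for the full feature stack (e.g., the local thickness map) used in the published pipeline.

```python
# Hypothetical sketch of the training loop described above; feature choices
# and helper names are illustrative, not the paper's actual implementation.
import numpy as np
from skimage.filters import sobel, gaussian
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import recall_score

def feature_layers(slice_2d):
    """Stack simple per-pixel feature maps for one grayscale slice."""
    edges = sobel(slice_2d)               # emphasize tissue boundaries
    smooth = gaussian(slice_2d, sigma=2)  # local intensity context
    # Distance from the nearest top/bottom image edge, broadcast across
    # columns -- a crude stand-in for the paper's edge-distance map.
    rows = np.arange(slice_2d.shape[0], dtype=float)
    dist = np.minimum(rows, rows[::-1])[:, None] * np.ones_like(slice_2d)
    return np.stack([slice_2d, edges, smooth, dist], axis=-1)

def train_on_slices(slices, labels):
    """Fit a random forest on a small set of hand-labeled slices."""
    X = np.concatenate([feature_layers(s).reshape(-1, 4) for s in slices])
    y = np.concatenate([lab.ravel() for lab in labels])
    model = RandomForestClassifier(n_estimators=50, n_jobs=-1, random_state=0)
    model.fit(X, y)
    return model

# Toy example: two synthetic 32x32 "slices" with a two-class labeling.
rng = np.random.default_rng(0)
slices = [rng.random((32, 32)) for _ in range(2)]
labels = [(s > 0.5).astype(int) for s in slices]
model = train_on_slices(slices, labels)

# Predict a held-out slice and compute per-class recall, analogous to the
# sensitivity assessment reported for the mesophyll cell class.
test = rng.random((32, 32))
pred = model.predict(feature_layers(test).reshape(-1, 4)).reshape(32, 32)
truth = (test > 0.5).astype(int)
cell_recall = recall_score(truth.ravel(), pred.ravel())
```

In the real pipeline the same feature filters are applied to every slice of the full stack, so a model trained on a handful of slices can then classify the remaining >1500 slices of the scan.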