Abstract

We present a trainable segmentation method implemented within the Python package ParticleSpy. The method takes user-labelled pixels, which are used to train a classifier and segment transmission electron microscope (TEM) images of inorganic nanoparticles. This implementation is based on the trainable Waikato Environment for Knowledge Analysis (WEKA) segmentation, but is written in Python, affording a large degree of flexibility and allowing it to be easily extended using other Python packages. We find that trainable segmentation offers better accuracy than global or local thresholding methods and requires as few as 100 user-labelled pixels to produce an accurate segmentation. When applied to TEM images of nanoparticles, trainable segmentation strikes a balance of accuracy and training time between global/local thresholding and neural networks. We also quantitatively investigate the effectiveness of the components of trainable segmentation, its filter kernels and classifiers, in order to demonstrate the use cases for the different filter kernels in ParticleSpy and the most accurate classifiers for different data types. A set of filter kernels is identified that is effective in distinguishing particles from background while retaining dissimilar features. In terms of classifiers, we find that different classifiers perform optimally for images of different contrast; specifically, a random forest classifier performs best for high-contrast ADF images, whereas QDA and Gaussian Naïve Bayes classifiers perform better for low-contrast TEM images.

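As a rough illustration of the workflow described above, the sketch below builds a small filter-kernel feature stack with scikit-image, trains a random forest on roughly 100 hand-labelled pixels, and then classifies every pixel in the image. This is not ParticleSpy's actual API; the example image, label positions, and filter choices are placeholders chosen only to show the structure of trainable segmentation.

```python
# Minimal sketch of a trainable-segmentation workflow (illustrative only,
# not ParticleSpy's API): filter responses become per-pixel features, a
# classifier is trained on sparsely labelled pixels, then applied everywhere.
import numpy as np
from skimage import data, filters
from sklearn.ensemble import RandomForestClassifier

# Example image standing in for an ADF/TEM micrograph.
image = data.coins()

# Filter-kernel bank: each filter response is one feature per pixel.
features = np.stack([
    image.astype(float),                      # raw intensity
    filters.gaussian(image, sigma=2),         # smoothed intensity
    filters.sobel(image),                     # edge strength
    filters.median(image).astype(float),      # noise-suppressed intensity
], axis=-1)

# Sparse user labels: 0 = unlabelled, 1 = background, 2 = particle.
# Roughly 100 labelled pixels in total, matching the figure quoted above.
labels = np.zeros(image.shape, dtype=int)
labels[:5, :10] = 1            # patch the user marked as background
labels[145:150, 150:160] = 2   # patch the user marked as particle

# Train the classifier only on the labelled pixels...
mask = labels > 0
clf = RandomForestClassifier(n_estimators=50)
clf.fit(features[mask], labels[mask])

# ...then classify every pixel to produce the segmentation.
segmentation = clf.predict(features.reshape(-1, features.shape[-1]))
segmentation = segmentation.reshape(image.shape)
```

In this sketch the random forest could be swapped for QDA or Gaussian Naïve Bayes (both available in scikit-learn) to mirror the classifier comparison discussed in the abstract.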