Abstract

Automated Identification of Axonal Action Potentials in High-Density CMOS Microelectrode Array Recordings

David Hoffmann1,2*, Miriam Reh1,2 and Günther Zeck1
1 Natural and Medical Sciences Institute, Neurophysics, Germany
2 Graduate Training Centre of Neuroscience, International Max Planck Research School, Germany

When recording or modulating activity in neural tissue such as the retina, it is of great advantage to obtain online information about cell locations, their axons and putative synaptically connected partners [1]. In this work we present and evaluate an algorithm for automated classification of retinal ganglion cells (RGCs) and identification of their axons, based on recordings of ex vivo mouse retina with high-density CMOS-based MEAs (CMOS MEA 5000, MultiChannelSystems MCS GmbH). The algorithm, which is based on a convolutional neural network (CNN), computes spike-triggered average (STA) electrical images and classifies these electrical images during the experiment. This may enable the user to selectively stimulate a cell at a specified position.

Material and Methods
Spike sorting and computation of STAs were performed with the software CMOS-MEA-TOOLs, which is based on a cICA algorithm [2]. This software visualizes the STAs as short videos, which were used here to produce a ground-truth data set. A total of 1309 neurons were hand-labelled and used to train the CNN classifier. Inputs to the CNN are feature maps of the STAs, namely a Boolean indicating a threshold crossing for each electrode of the MEA, the mean maximum cross-correlation per electrode and the minimal voltage value per electrode. For a single electrode i, the mean maximum cross-correlation is calculated by computing the maximum cross-correlation of this electrode's STA with each of the 8 adjacent electrodes and averaging these 8 values (a code sketch is given below). The algorithm was implemented in Python, and the resulting package, called axonFinder, comes with methods for semi-automatic labelling to increase the amount of available training data. With more training data the classification accuracy can be expected to increase further.

Results
The mean accuracy of the CNN over all data sets (n = 15) is 0.76 ± 0.1 (mean ± standard deviation), which is somewhat lower than the human accuracy of 0.84 ± 0.09. Human accuracy was determined by one human expert classifying the cells based on the input feature maps of the CNN, each visualized as an image. For 6 out of 15 data sets the CNN and human accuracies are indistinguishable. We found that the true positive rate (correctly classifying a cell as an RGC comprising an axon) is very similar between the human (0.74 ± 0.15) and the CNN (0.74 ± 0.16). Differences are observed for the true negative rate. The lower accuracy of the CNN can partially be explained by occasional failures of the optimization, which result from the small training set and a, by chance, ill-chosen early-stopping data set. Some of these problems can be addressed with N-fold cross-validation. Indeed, we found that using N-fold cross-validation for training increases the accuracy of the CNN to 0.8 ± 0.07.

Conclusion
Here we demonstrated how axonal action potentials can be automatically identified in CMOS MEA recordings. The method can be used offline for unequivocal separation of ganglion cells from possible displaced amacrine cells, and online to select electrical stimulus positions.
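To illustrate the feature-map computation described in Material and Methods, the following Python sketch shows one possible way to derive the three CNN inputs from an STA. It is not the axonFinder implementation; the array layout (frames x rows x columns), the threshold value, the handling of border electrodes with fewer than 8 neighbours and the un-normalized cross-correlation are assumptions made for illustration only.

import numpy as np

def sta_feature_maps(sta, threshold_uv=-20.0):
    # sta: NumPy array of shape (n_frames, n_rows, n_cols) holding the
    # spike-triggered average voltage per electrode (assumed in microvolts).
    n_frames, n_rows, n_cols = sta.shape

    # Minimal voltage reached on each electrode over the STA duration.
    min_voltage_map = sta.min(axis=0)

    # Boolean map: True where the STA crosses the (negative) threshold.
    threshold_map = min_voltage_map <= threshold_uv

    # Mean maximum cross-correlation with the (up to) 8 adjacent electrodes.
    mean_max_xcorr_map = np.zeros((n_rows, n_cols))
    for r in range(n_rows):
        for c in range(n_cols):
            trace = sta[:, r, c] - sta[:, r, c].mean()
            max_corrs = []
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    if dr == 0 and dc == 0:
                        continue
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < n_rows and 0 <= cc < n_cols:
                        neigh = sta[:, rr, cc] - sta[:, rr, cc].mean()
                        # cross-correlate over all lags and keep the maximum
                        max_corrs.append(np.correlate(trace, neigh, mode="full").max())
            mean_max_xcorr_map[r, c] = np.mean(max_corrs) if max_corrs else 0.0

    return threshold_map, mean_max_xcorr_map, min_voltage_map

Each returned map has the same spatial dimensions as the electrode array; the three maps can be stacked to form the CNN input. In practice, the threshold and any normalization of the cross-correlation would be chosen to match the recording system and the training data.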
Conflict of Interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Acknowledgements
This work is partially funded by the German Ministry for Education and Research (BMBF, FKZ: 031L0059A). DH acknowledges support by the Graduate Training Centre Neuroscience, Tübingen. We thank Larissa Höfling and Florian Jetter for sharing their recorded data.
