Recording a large population of retinal cells with a 252-electrode array and automated spike sorting

Olivier Marre (1)*, Dario Amodei (1), Frederick Soo (1), Timothy E. Holy (2) and Michael Berry (1)

1 Princeton University, United States
2 Washington University in St. Louis, United States

Recent theoretical work has suggested that simultaneously recording the activity of more than 100 neurons in the retina might uncover non-trivial collective behavior [1]. Furthermore, understanding the neural code of the retina requires access to the information sent to the brain about a large region of visual space. For that purpose, we used a dense array of 252 electrodes to record activity in the ganglion cell layer of the salamander retina. The electrode density, which is close to the cell density, has been shown for smaller arrays to be high enough to record from nearly all the ganglion cells in a patch of retina [2].

The large number of electrodes precludes spike sorting by hand. We therefore designed a highly automated algorithm to extract spikes from the raw data. The algorithm consisted of two main steps: 1) a "template-finding" phase to extract the cells' templates, i.e. the pattern of activity evoked over many electrodes when one ganglion cell fires an action potential; and 2) a "fitting" phase in which the templates were matched to the raw data to find the locations of the spikes.

For the template-finding phase, we started by detecting all the times in the raw data that could contain a spike. Using the minimum and maximum values in the neighborhood of each spike on each electrode, spikes were clustered into groups. We then extracted the template corresponding to each group by least-squares fitting.

In the fitting phase, we matched the templates to the raw data with a method that allowed amplitude variation for each template. For that purpose, we selected the best-fitting template and decided whether to include it in the match according to a criterion that compared the fitting improvement with a cost function. The latter aimed to force the spike amplitudes to be close to 1 and to impose a sparseness constraint, reflecting the fact that the overlap of many spikes is highly unlikely. This process was then iterated to match additional templates to the raw data.

Since a first-pass clustering did not capture all the cells' templates, we repeated these two steps. After the fitting phase, we performed another clustering, taking the minima and maxima of each putative spike after subtracting the surrounding contributions of the other templates. This improved clustering allowed the extraction of new templates, in turn leading to better fits to the raw data. This alternation of clustering and matching was run iteratively until no additional templates were found.

We tested the algorithm on surrogate data, adding an artificial template to the recordings and attempting to recover these artificial events. The fraction of recovered events reached 99% when the template was successfully extracted.
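
To make the template-finding step concrete, here is a minimal Python sketch of the clustering and template-extraction stage. The per-electrode extrema features follow the abstract; the choice of k-means, the cluster count, and all parameter names are assumptions (the abstract does not specify the clustering method), and for non-overlapping spikes the least-squares template of a group reduces to the mean of its snippets.

    import numpy as np
    from scipy.cluster.vq import kmeans2

    def extract_templates(snippets, n_clusters=20, seed=0):
        """Cluster spike snippets and derive one template per cluster.

        snippets : array of shape (n_spikes, n_electrodes, n_samples),
                   windows cut around each detected spike time.
        Returns (templates, labels).

        NOTE: k-means, n_clusters, and the parameter names are illustrative
        assumptions; the abstract only specifies that per-electrode extrema
        were used as clustering features.
        """
        # Feature vector: minimum and maximum around the spike on each electrode.
        feats = np.concatenate([snippets.min(axis=2), snippets.max(axis=2)], axis=1)
        _, labels = kmeans2(feats.astype(float), n_clusters, minit='++', seed=seed)
        # For non-overlapping spikes, the least-squares template of a group
        # is simply the mean of its member snippets.
        templates = np.stack([snippets[labels == k].mean(axis=0)
                              for k in np.unique(labels)])
        return templates, labels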
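
The fitting phase can likewise be illustrated as a greedy matching loop. This is a sketch under stated assumptions, not the authors' implementation: the exact form of the cost function is not given in the abstract, so the quadratic amplitude penalty and the fixed per-spike sparseness cost below are illustrative choices that merely satisfy the two stated constraints (amplitudes near 1, few overlapping spikes).

    import numpy as np

    def greedy_fit(snippet, templates, amp_penalty=1.0, spike_cost=0.5, max_spikes=5):
        """Greedily match templates (with variable amplitude) to one window.

        snippet   : (n_electrodes, n_samples) raw-data window
        templates : (n_templates, n_electrodes, n_samples) cell templates
        Returns a list of (template_index, amplitude) pairs.

        NOTE: amp_penalty and spike_cost are illustrative stand-ins for the
        cost function described in the abstract.
        """
        residual = snippet.astype(float).copy()
        accepted = []
        for _ in range(max_spikes):
            best = None
            for k, tmpl in enumerate(templates):
                denom = np.sum(tmpl * tmpl)
                if denom == 0.0:
                    continue
                # Least-squares amplitude for this template against the residual.
                a = np.sum(residual * tmpl) / denom
                improvement = a * a * denom  # drop in squared error if accepted
                # Cost pushes amplitudes toward 1 and discourages extra spikes.
                cost = amp_penalty * (a - 1.0) ** 2 + spike_cost
                score = improvement - cost
                if best is None or score > best[0]:
                    best = (score, k, a)
            if best is None or best[0] <= 0:
                break  # no template improves the fit enough to pay its cost
            _, k, a = best
            residual -= a * templates[k]
            accepted.append((k, a))
        return accepted

Accepting a template only when the squared-error improvement exceeds its cost makes the per-spike term act as a threshold, so the loop naturally stops once no further template pays for itself, which is one way to realize the sparseness constraint the abstract describes.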