Abstract

Automated cell segmentation and tracking can significantly increase the productivity of biological research. To tune a tracking system for a particular video, researchers usually have to manually annotate part of the video and tune the algorithm against this ground truth. However, the large variability in cell video characteristics means that different trackers and parameter settings are optimal for different videos, so for every new video the manual annotation and tuning have to be repeated. Alternatively, suboptimal parameters can be used, which may require a significant amount of manual post-correction. The challenge we address in this paper is the automated selection and tuning of cell tracking systems without manual annotation. Given only an estimate of the cell size, our method ranks trackers according to their performance on the given video without requiring ground truth. Our evaluation on real videos and real tracking systems indicates that the method selects the best or nearly best tracker and its parameters in practical scenarios.
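
As a rough illustration of the selection workflow described above, the sketch below runs a set of candidate tracker configurations on an unannotated video and ranks them by a reference-free quality score that uses only an estimate of the cell size. The tracker interface and the score_without_ground_truth function are hypothetical placeholders for exposition, not the paper's actual evaluation criterion.

# Illustrative sketch only: ranks candidate tracker configurations on an
# unannotated video. The quality score is a hypothetical stand-in for the
# paper's ground-truth-free criterion, which relies only on an estimate
# of the cell size.

from dataclasses import dataclass
from typing import Callable, List, Tuple


@dataclass
class TrackerCandidate:
    name: str                      # e.g. "tracker-A, threshold=0.4"
    run: Callable[[str], object]   # maps a video path to a tracking result


def score_without_ground_truth(result: object, cell_size_px: float) -> float:
    """Hypothetical reference-free quality measure; higher is better."""
    raise NotImplementedError


def rank_trackers(video_path: str,
                  candidates: List[TrackerCandidate],
                  cell_size_px: float) -> List[Tuple[float, str]]:
    """Run every candidate on the video and sort them by the
    ground-truth-free score, best first."""
    scored = []
    for cand in candidates:
        result = cand.run(video_path)
        scored.append((score_without_ground_truth(result, cell_size_px), cand.name))
    return sorted(scored, reverse=True)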
