Abstract
There are numerous efforts to automate the delineation of rat brain regions from rat brain histology. A leading approach uses convolutional neural networks, which model anatomical variability and can delineate cytoarchitectonic boundaries. Currently, it is not clear what scale of the input tissue images offers the most information for these models to exploit. In this work, we test a fully convolutional architecture, U-Net, with Nissl-stained rat brain tissue images of different scales. We show that the networks achieve lower precision and higher recall when trained on large-scale images. Conversely, networks trained on small-scale images produce fewer false-positive and more false-negative predictions. Our work provides valuable insight into the optimal scale needed for convolutional neural networks to segment brain regions from Nissl-based images of the brain.
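The precision/recall trade-off described above can be made concrete with a small sketch. The function and toy masks below are illustrative only, assuming simple pixel-wise binary evaluation; they are not taken from the paper's evaluation code.

    import numpy as np

    def precision_recall(pred_mask, true_mask):
        """Pixel-wise precision and recall for a binary segmentation mask.

        Minimal illustration; the paper's exact evaluation protocol may differ.
        """
        pred = pred_mask.astype(bool)
        true = true_mask.astype(bool)
        tp = np.logical_and(pred, true).sum()    # true positives
        fp = np.logical_and(pred, ~true).sum()   # false positives
        fn = np.logical_and(~pred, true).sum()   # false negatives
        precision = tp / (tp + fp) if (tp + fp) else 0.0
        recall = tp / (tp + fn) if (tp + fn) else 0.0
        return precision, recall

    # Hypothetical example: an over-inclusive prediction of a brain region
    # yields high recall but low precision, mirroring the large-scale case.
    true = np.zeros((8, 8), dtype=bool)
    true[2:6, 2:6] = True
    pred = np.zeros((8, 8), dtype=bool)
    pred[1:7, 1:7] = True
    print(precision_recall(pred, true))  # approx. (0.44, 1.0)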