Abstract

In drug development, image-based bioassays are commonplace and are typically run at high throughput on automated microscopes. The resulting cell imaging data comes from multiple instruments and is acquired at different time points, introducing technical and biological variation that can hamper quantitative analysis across an assay campaign. In this work, we analyze the robustness of the recently introduced Vision Transformer architecture with respect to such technical and biological variation. We compare its performance to recent analysis approaches by benchmarking on the Cells Out of Sample (COOS) dataset, which stems from a high-content imaging screen. The experiments suggest that Vision Transformers are capable of learning more robust representations, even outperforming specially designed deep learning architectures by a large margin.
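For illustration, the following is a minimal sketch of a Vision Transformer classifier in PyTorch, showing the general architecture class the paper evaluates. It is not the authors' model: the hyperparameters, the seven-class output, and the two-channel microscopy-style input are placeholder assumptions.

# A minimal Vision Transformer classifier sketch in PyTorch. This illustrates
# the general ViT design (patch embedding, transformer encoder, [CLS] head);
# it is NOT the paper's implementation, and all hyperparameters below are
# placeholder assumptions.
import torch
import torch.nn as nn

class MiniViT(nn.Module):
    def __init__(self, image_size=224, patch_size=16, in_channels=3,
                 num_classes=7, dim=256, depth=6, heads=8):
        super().__init__()
        num_patches = (image_size // patch_size) ** 2
        # Patch embedding: split the image into non-overlapping patches and
        # project each patch to a `dim`-dimensional token via a strided conv.
        self.patch_embed = nn.Conv2d(in_channels, dim,
                                     kernel_size=patch_size, stride=patch_size)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + 1, dim))
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, dim_feedforward=4 * dim,
            batch_first=True, norm_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=depth)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x):
        tokens = self.patch_embed(x).flatten(2).transpose(1, 2)  # (B, N, dim)
        cls = self.cls_token.expand(x.shape[0], -1, -1)
        tokens = torch.cat([cls, tokens], dim=1) + self.pos_embed
        tokens = self.encoder(tokens)
        return self.head(tokens[:, 0])  # classify from the [CLS] token

# Example: a batch of two-channel microscopy-like images (the channel count
# is an assumption, not a property taken from the COOS dataset).
model = MiniViT(in_channels=2, num_classes=7)
logits = model(torch.randn(4, 2, 224, 224))
print(logits.shape)  # torch.Size([4, 7])

The sketch follows the original ViT recipe: the image is cut into fixed-size patches, each patch becomes a token, and a learned [CLS] token aggregates global context for the final class prediction.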
