Abstract

The hullabaloo surrounding the recent successes of deep learning in image processing often outstrips the results obtained when it is applied to real datasets. Functional accuracy and reliable predictions are possible, but not without a mindful approach to pipeline development. Building machine learning pipelines that combine the various technologies available to today’s data scientist in a robust and repeatable manner is the core requirement when deploying automated image processing software. But with so many options, how can we ensure our data pipeline is accurate and our deep tech reliable?

Chris will give an overview of the development of a protein crystal image classification pipeline that has been autonomously deployed at CSIRO’s Collaborative Crystallisation Centre. He will speak about the practicalities of training data and touch briefly on the problem of continuous learning for improving model accuracy over time.
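The abstract describes the pipeline only at a high level, so the following is a minimal sketch of the kind of image classification pipeline it refers to, not the implementation deployed at the Collaborative Crystallisation Centre. It assumes PyTorch transfer learning on a pretrained ResNet-18; the directory layout ("data/train"), class names, and hyperparameters are illustrative assumptions.

# Minimal sketch (assumed, not the C3 pipeline): fine-tune a pretrained CNN
# to label crystallisation-droplet images.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Standard ImageNet-style preprocessing so the pretrained backbone sees
# inputs in the distribution it was trained on.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical directory layout: data/train/<class_name>/*.png,
# with classes such as "crystal", "clear", "precipitate".
train_set = datasets.ImageFolder("data/train", transform=preprocess)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True, num_workers=4)

# Transfer learning: reuse a pretrained ResNet-18 and replace its final
# layer with a head sized for the number of outcome classes.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Deliberately short training loop; a robust, repeatable pipeline would add
# validation, checkpointing, and logging around this core.
model.train()
for epoch in range(3):
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: last batch loss {loss.item():.4f}")

Retraining on newly labelled images over time, as touched on in the talk under continuous learning, would wrap a loop of this kind in a data-ingestion and evaluation step before redeployment.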
