Abstract

Applications in engineering biology increasingly share the need to run operations on very large numbers of biological samples. This is a direct consequence of the adoption of good engineering practices, the limited predictive power of current computational models, and the desire to investigate very large design spaces in order to address the hard, important problems the discipline promises to solve. Automation has been proposed as a key enabler for running large numbers of operations on biological samples, because it is strongly associated with higher throughput and with higher replicability (thanks to the reduction of human input). The authors focus on replicability and argue that, far from being an additional burden for automation efforts, replicability should be considered central to the design of the automated pipelines that process biological samples at scale, as trialled in biofoundries: there can be no successful automation without effective error control. Design principles for an IT infrastructure that supports replicability are presented. Finally, the authors conclude with perspectives on the evolution of automation in engineering biology. In particular, they speculate that the integration of hardware and software will progress rapidly, offering users a degree of control and abstraction of the robotic infrastructure significantly greater than experienced today.
