Abstract

Synthetic microswimmers show great promise in biomedical applications such as drug delivery and microsurgery. Their locomotion, however, is subject to stringent constraints due to the dominance of viscous over inertial forces at low Reynolds number (Re) in the microscopic world. Furthermore, locomotory gaits designed for one medium may become ineffective in a different medium. Successful biomedical applications of synthetic microswimmers rely on their ability to traverse biological environments with vastly different properties. Here we leverage the prowess of machine learning to present an alternative approach to designing low Re swimmers. Instead of specifying any locomotory gaits \textit{a priori}, the swimmer develops its own propulsion strategy via reinforcement learning, based on its interactions with the surrounding medium. This self-learning capability enables the swimmer to modify its propulsion strategy in response to different environments. We illustrate this new approach using a minimal example that integrates a standard reinforcement learning algorithm ($Q$-learning) into the locomotion of a swimmer consisting of an assembly of spheres connected by extensible rods. We demonstrate theoretically that this first self-learning swimmer can recover a previously known propulsion strategy without prior knowledge of low Re locomotion, identify more effective locomotory gaits when the number of spheres increases, and adapt its locomotory gaits in different media. These results represent initial steps towards the design of a new class of self-learning, adaptive (or "smart") swimmers with robust locomotive capabilities to traverse complex biological environments.
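To make the abstract's central idea concrete, the following is a minimal sketch, not the authors' implementation, of tabular $Q$-learning applied to a swimmer whose extensible rods ("arms") take discrete contracted/extended states. All names, the two-arm discretization, and the `toy_displacement` reward are illustrative assumptions; in the actual study the reward would come from a low-Re hydrodynamic model of the sphere assembly.

```python
# Minimal sketch (not the authors' code) of tabular Q-learning for a swimmer
# with two extensible arms. Assumptions: each arm is contracted (0) or
# extended (1), an action toggles one arm, and the reward is the swimmer's
# net displacement for that stroke. Displacement values are placeholders,
# not hydrodynamic results.
import random

ARM_STATES = [(0, 0), (0, 1), (1, 0), (1, 1)]   # (left arm, right arm)
ACTIONS = [0, 1]                                 # toggle left arm / toggle right arm
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1            # learning rate, discount, exploration

Q = {s: [0.0, 0.0] for s in ARM_STATES}          # Q-table: Q[state][action]


def toy_displacement(old, new):
    # Placeholder reward: a real model would solve the low-Re hydrodynamics
    # (e.g., sphere-sphere interactions) to get the stroke displacement.
    return 0.1 if sum(new) > sum(old) else -0.05


def step(state, action):
    """Toggle one arm and return the new configuration and stroke reward."""
    new_state = list(state)
    new_state[action] = 1 - new_state[action]
    new_state = tuple(new_state)
    return new_state, toy_displacement(state, new_state)


state = random.choice(ARM_STATES)
for _ in range(10_000):
    # Epsilon-greedy action selection
    if random.random() < EPSILON:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: Q[state][a])

    next_state, reward = step(state, action)

    # Standard Q-learning update
    Q[state][action] += ALPHA * (reward + GAMMA * max(Q[next_state]) - Q[state][action])
    state = next_state

# Following the greedy policy after training yields a periodic stroke
# sequence, i.e., the learned locomotory gait.
```

Because the reward is tied only to the displacement produced by each stroke, swapping in a different hydrodynamic model for `toy_displacement` (a different medium) would, in this sketch, lead the same learning loop to a different gait, which is the adaptivity the abstract describes.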
