Abstract

Experimental research such as cell culture and carbon nanotube (CNT) growth is largely governed by predefined execution protocols with fine-tuned control of parameters. There are promising opportunities to apply reinforcement learning (RL), an established learning technique in artificial intelligence, to automate the CNT growth process and accelerate related scientific breakthroughs in materials discovery. Although RL-based exploration and exploitation methodologies offer clear benefits, developing relevant learning policies in experimental settings such as CNT growth remains challenging. In this paper, we present a novel data-driven RL approach for assisting experimental CNT growth. Our approach develops an RL model that learns from simulation-based images and characteristics of temporal CNT growth under various growth parameters. The model learns from CNT growth variation in a simulation-based environment governed by critical control parameters such as density, growth rate, tube radius, tube stiffness, and van der Waals forces. By automating CNT growth, our RL model enables exploration of a wider range of growth conditions and improves reproducibility. Its ultimate goal is to achieve desired CNT growth by dynamically controlling growth parameters throughout a sequence of experiments. We evaluate the effectiveness of our RL approach by measuring the improvement in the maximum compressive strength of a carbon nanotube with and without the RL model. Our results show the effectiveness of the course corrections recommended by our RL approach when controlling the angular deviation and growth rate of carbon nanotubes, compared against non-regulated CNT growth.
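To make the control loop described above concrete, the following is a minimal, purely illustrative sketch: tabular epsilon-greedy Q-learning over a toy, discretized growth simulator. The state space (growth-rate and angular-deviation levels), the action set, and the compressive-strength reward proxy are all assumptions invented for demonstration; they do not reproduce the paper's simulation environment or its learned policy.

```python
import random

class ToyCNTGrowthEnv:
    """Toy environment (assumed, not the paper's simulator).

    State = (growth_rate_level, angular_dev_level), both discretized."""
    RATE_LEVELS = 5
    ANGLE_LEVELS = 5
    N_ACTIONS = 4  # lower/raise growth rate, lower/raise angular deviation

    def __init__(self, rate=0, angle=0):
        self.rate = rate
        self.angle = angle

    def step(self, action):
        if action == 0:
            self.rate = max(0, self.rate - 1)
        elif action == 1:
            self.rate = min(self.RATE_LEVELS - 1, self.rate + 1)
        elif action == 2:
            self.angle = max(0, self.angle - 1)
        else:
            self.angle = min(self.ANGLE_LEVELS - 1, self.angle + 1)
        # Toy reward: a compressive-strength proxy that peaks at a moderate
        # growth rate and zero angular deviation (an assumed target, not data).
        reward = -abs(self.rate - 2) - self.angle
        return (self.rate, self.angle), reward

def train(episodes=2000, steps=15, alpha=0.5, gamma=0.9, eps=0.3, seed=0):
    """Tabular epsilon-greedy Q-learning on the toy environment."""
    rng = random.Random(seed)
    q = {}  # maps (state, action) -> estimated return
    for ep in range(episodes):
        # Cycle start states so every discretized setting is revisited.
        env = ToyCNTGrowthEnv(
            rate=ep % ToyCNTGrowthEnv.RATE_LEVELS,
            angle=(ep // ToyCNTGrowthEnv.RATE_LEVELS) % ToyCNTGrowthEnv.ANGLE_LEVELS,
        )
        s = (env.rate, env.angle)
        for _ in range(steps):
            if rng.random() < eps:  # explore
                a = rng.randrange(ToyCNTGrowthEnv.N_ACTIONS)
            else:                   # exploit current estimates
                a = max(range(ToyCNTGrowthEnv.N_ACTIONS),
                        key=lambda x: q.get((s, x), 0.0))
            s2, r = env.step(a)
            best_next = max(q.get((s2, x), 0.0)
                            for x in range(ToyCNTGrowthEnv.N_ACTIONS))
            old = q.get((s, a), 0.0)
            q[(s, a)] = old + alpha * (r + gamma * best_next - old)
            s = s2
    return q

def greedy_rollout(q, start=(0, 4), steps=40):
    """Follow the learned greedy policy from `start`; return the final state."""
    env = ToyCNTGrowthEnv(*start)
    s = start
    for _ in range(steps):
        a = max(range(ToyCNTGrowthEnv.N_ACTIONS),
                key=lambda x: q.get((s, x), 0.0))
        s, _ = env.step(a)
    return s
```

Under these assumptions, the greedy rollout steers an initially poor parameter setting toward the reward peak, mirroring (in miniature) the course corrections the paper's RL model recommends for angular deviation and growth rate.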
