Neural networks enable the processing of large, complex data sets, with applications in disease diagnosis, cell profiling, and drug discovery. Beyond electronic computers, neural networks have been implemented using programmable biomolecules such as DNA; this confers unique advantages, such as greater portability, electricity-free operation, and direct analysis of patterns of biomolecules in solution. Analogous to bottlenecks in electronic computers, the computing power of DNA-based neural networks is limited by the ability to add more computing units, i.e., neurons. This limitation exists because current architectures require many nucleic acids to model a single neuron. Each additional neuron compounds existing problems such as long assembly times, high background signal, and cross-talk between components. Here, we test three strategies to overcome this limitation and improve the scalability of DNA-based neural networks: (i) enzymatic synthesis for high-purity neurons, (ii) spatial patterning of neuron clusters based on their network position, and (iii) encoding neuron connectivity on a universal single-stranded DNA backbone. We show that neurons implemented via these strategies activate quickly, with a high signal-to-background ratio, and process weighted inputs. We rewired our modular neurons to demonstrate basic neural network motifs such as cascading, fan-in, and fan-out circuits. Finally, we designed a prototype two-layer microfluidic device to automate the operation of our circuits. We envision that our proposed design will help scale DNA-based neural networks due to its modularity, simplicity of synthesis, and compatibility with various neural network architectures. This will provide computing power for applications in portable diagnostics, compact data storage, and autonomous decision-making in lab-on-a-chip devices.
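As a rough illustration of the circuit motifs named above (neurons that process weighted inputs, wired into cascading, fan-in, and fan-out circuits), the sketch below models each neuron as a thresholded weighted sum in Python. This is only a logical abstraction; the weights, thresholds, and wiring are illustrative assumptions and do not represent the paper's DNA chemistry or experimental parameters.

```python
# Abstract model of the motifs described in the abstract: each "neuron"
# computes a thresholded weighted sum of its inputs. The specific weights,
# thresholds, and wiring below are hypothetical, chosen for illustration.
from dataclasses import dataclass

@dataclass
class Neuron:
    weights: dict      # input name -> weight
    threshold: float

    def fire(self, inputs: dict) -> bool:
        # Weighted sum over the inputs this neuron listens to
        total = sum(w * inputs.get(name, 0.0) for name, w in self.weights.items())
        return total >= self.threshold

# Fan-in: one neuron integrates two upstream signals
fan_in = Neuron(weights={"a": 0.6, "b": 0.6}, threshold=1.0)

# Fan-out: one upstream signal drives two downstream neurons
fan_out_1 = Neuron(weights={"x": 1.0}, threshold=0.5)
fan_out_2 = Neuron(weights={"x": 1.0}, threshold=0.5)

# Cascade: the fan-in neuron's output feeds a second-layer neuron
cascade = Neuron(weights={"fan_in": 1.0}, threshold=0.5)

inputs = {"a": 1.0, "b": 1.0, "x": 1.0}
layer1 = {"fan_in": 1.0 if fan_in.fire(inputs) else 0.0}
print("fan-in fires:", fan_in.fire(inputs))                      # True: 0.6 + 0.6 >= 1.0
print("fan-out fires:", fan_out_1.fire(inputs), fan_out_2.fire(inputs))
print("cascade fires:", cascade.fire(layer1))                    # True: driven by fan-in output
```

Because each neuron is defined only by its weights and wiring, rewiring a motif amounts to changing which named signals a neuron listens to, which mirrors the modularity the abstract claims for the universal single-stranded backbone design.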