Synchronization of action potentials is one of the key phenomena by which neural networks achieve biological function, so finding the parameter space in which a neural network synchronizes is of great significance for studying synchronization. This study explores the synchronized parameter space of higher-order networks built from triplet motifs using the dynamic learning of synchronization (DLS) technique, which dynamically modulates the connection weights between motifs to alter their firing patterns. We examine regular, Erdős–Rényi random, small-world, and scale-free networks, emphasizing the higher-order motif interactions that characterize them. Our key findings indicate that DLS successfully promotes synchronization in higher-order motif networks across these connection patterns, although synchronization is slightly weaker when motifs are interconnected by chemical synapses than by electrical synapses. We also characterize how the weights change as DLS regulates the network's firing state, finding that the evolution of the weight distribution correlates with the network's topology. This work may provide new insight into complex-network synchronization and lays a foundation for further exploration of using DLS to synchronize higher-order networks through external factors. Published by the American Physical Society 2024
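The core idea of adaptively modulating connection weights to drive a network toward synchrony can be illustrated with a minimal sketch. The code below is not the paper's DLS algorithm; it assumes a Kuramoto-style phase model on a random graph and uses an illustrative Hebbian-like rule that strengthens a link while its endpoints are out of phase, halting adaptation once they lock.

```python
import numpy as np

def adaptive_step(theta, W, omega, eta=0.05, dt=0.01):
    # One Euler step of a Kuramoto network with an illustrative
    # weight-adaptation rule (a stand-in for DLS, not the paper's rule):
    # weights on existing links grow in proportion to 1 - cos(phase gap),
    # so out-of-phase links are strengthened and in-phase links stabilize.
    diff = theta[None, :] - theta[:, None]            # diff[i, j] = theta_j - theta_i
    dtheta = omega + (W * np.sin(diff)).sum(axis=1)   # coupled phase dynamics
    theta = theta + dt * dtheta
    W = W + dt * eta * (1.0 - np.cos(diff)) * (W > 0)  # adapt existing links only
    return theta, W

def order_parameter(theta):
    # Kuramoto order parameter r in [0, 1]; r -> 1 indicates synchrony.
    return abs(np.exp(1j * theta).mean())

rng = np.random.default_rng(0)
n = 20
theta = rng.uniform(0.0, 2.0 * np.pi, n)             # random initial phases
omega = rng.normal(0.0, 0.1, n)                      # heterogeneous natural frequencies
W = (rng.random((n, n)) < 0.3).astype(float) * 0.1   # sparse random coupling graph
np.fill_diagonal(W, 0.0)

r_start = order_parameter(theta)
for _ in range(20000):
    theta, W = adaptive_step(theta, W, omega)
print(f"order parameter: {r_start:.2f} -> {order_parameter(theta):.2f}")
```

As the phases lock, the factor `1 - cos(theta_j - theta_i)` vanishes, so the weights stop growing on their own; this self-limiting behavior is one simple way an adaptive rule can settle into a weight distribution shaped by the underlying graph, echoing the abstract's observation that weight evolution correlates with topology.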