The brain’s memory system is extraordinarily complex, as evidenced by the vast number of neurons involved, the intricate electrochemical activity within each neuron, and the complex interactions among them. Memory research spans multiple levels, from cellular and molecular studies to cognitive and behavioral ones, each with its own focus, which makes a complete description of the memory mechanism difficult. Many details of how biological neuronal networks encode, store, and retrieve information remain unknown. In this study, we model biological neuronal networks as active directed graphs in which each node is self-adaptive and relies only on local information for decision-making. To explore how such networks could implement memory mechanisms, we propose a parallel, distributed information-access algorithm that operates at the node scale of the active directed graph, where subgraphs are treated as the physical realization of the information stored in the graph. Unlike traditional algorithms with a global perspective, ours achieves network-wide collaboration in resource utilization through purely local perspectives. Although it may not reach the global optimum attainable by a global-view algorithm, it offers superior robustness, concurrency, decentralization, and biological plausibility. We also evaluated network capacity, fault tolerance, and robustness, and found that the algorithm performs better on sparser network structures.
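To make the core idea concrete, the following is a minimal sketch (not the paper's actual algorithm; all names and the activation rule are illustrative assumptions) of how a subgraph stored in a directed graph might be retrieved using only local information: each node decides to join the active subgraph solely by inspecting its own in-neighbors, with no global view of the network.

```python
from collections import defaultdict

def retrieve_subgraph(edges, cue, threshold=1):
    """Illustrative local rule (hypothetical, not the paper's method):
    starting from a cue set, a node activates once at least `threshold`
    of its in-neighbors are active. Iterating this purely local rule to
    a fixed point recovers an active subgraph, which stands in for a
    stored piece of information."""
    preds = defaultdict(set)                      # in-neighbors of each node
    for u, v in edges:
        preds[v].add(u)
    nodes = set(preds) | {u for u, _ in edges} | set(cue)
    active = set(cue)
    changed = True
    while changed:                                # fixed-point iteration
        changed = False
        for n in nodes - active:
            if len(preds[n] & active) >= threshold:
                active.add(n)                     # local decision only
                changed = True
    return active

# Toy directed graph: the chain 0 -> 1 -> 2 is one stored pattern;
# edge 4 -> 3 belongs to an unrelated part of the graph.
edges = [(0, 1), (1, 2), (4, 3)]
print(sorted(retrieve_subgraph(edges, cue={0})))  # -> [0, 1, 2]
```

Note that no node ever consults the whole edge list at decision time; it only checks its own predecessors, which is what gives such schemes their concurrency and decentralization, at the cost of global optimality.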