In recent years, graph neural networks (GNNs) have achieved impressive performance in various application fields by extracting information from graph-structured data. Their feature aggregation phase, which can be abstracted as a specialized Sparse-Dense Matrix Multiplication (SpMM) operation, involves extensive computation and has become the performance bottleneck. Previous works have leveraged the inner product or outer product to accelerate feature aggregation. However, their inefficient execution leads to severely unbalanced workloads and extensive intermediate data, hampering the performance of previous processors. In this paper, we demonstrate an algorithm/hardware co-optimization opportunity to enhance SpMM acceleration for GNNs. First, on the algorithm side, we develop a dataflow-efficient SpMM algorithm that integrates three optimization methods to mitigate computation and memory-access inefficiencies. Specifically, 1) the proposed equal-value partition method achieves fine-grained data partitioning and enables load balancing during data movement; 2) after observing the vertex aggregation phenomenon, a vertex-clustering optimization method is presented to significantly improve data locality; and 3) an adaptive dataflow based on Gustavson's algorithm is further implemented to distribute sparse elements efficiently and improve computing-resource utilization. Then, on the hardware side, we customize SDMA, a flexible and efficient accelerator that implements the proposed SpMM algorithm and follows the adaptive dataflow to eliminate sparsity and exploit the regular parallelism dimension. Finally, we prototype SDMA on the Xilinx Alveo U280 FPGA accelerator card. The results demonstrate that SDMA achieves 5.68×–14.68× higher energy efficiency than previous GPU implementations on an NVIDIA GTX 1080Ti and 1.32× higher throughput than the state-of-the-art FPGA prototype.
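
For readers unfamiliar with the row-wise dataflow referenced above, the following is a minimal, purely illustrative sketch of Gustavson-style SpMM in Python; the function name `spmm_gustavson` and the CSR layout are our assumptions for exposition and do not represent the SDMA implementation. Each nonzero A[i, k] of the sparse adjacency matrix scales and accumulates the dense feature row B[k, :] into the output row C[i, :].

```python
# Illustrative sketch only (not the paper's SDMA design): Gustavson-style
# row-wise SpMM, C = A * B, with the sparse adjacency matrix A in CSR form
# and the dense feature matrix B holding one feature row per vertex.

def spmm_gustavson(indptr, indices, values, B, num_rows):
    """Row-wise (Gustavson) SpMM: for each nonzero A[i, k],
    accumulate A[i, k] * B[k, :] into C[i, :]."""
    feat_dim = len(B[0])
    C = [[0.0] * feat_dim for _ in range(num_rows)]
    for i in range(num_rows):                       # one sparse row at a time
        for nz in range(indptr[i], indptr[i + 1]):  # nonzeros of row i
            k, a_ik = indices[nz], values[nz]
            row_k = B[k]
            for f in range(feat_dim):               # dense row accumulation
                C[i][f] += a_ik * row_k[f]
    return C

# Tiny usage example: a 3-vertex graph with 2-dimensional features.
indptr  = [0, 2, 3, 4]          # CSR row pointers of the adjacency matrix
indices = [1, 2, 0, 1]          # column indices of the nonzeros
values  = [1.0, 1.0, 1.0, 1.0]  # edge weights
B = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
print(spmm_gustavson(indptr, indices, values, B, 3))
# [[8.0, 10.0], [1.0, 2.0], [3.0, 4.0]]
```

Unlike the inner-product and outer-product formulations mentioned above, this row-wise formulation reads each nonzero exactly once and completes each output row in a single pass, which avoids the large partial-product intermediates that the outer-product approach must merge.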