Accurate prediction of molecular properties is crucial for advances in drug discovery and materials science. However, this task is complex and requires effective representations of molecular structures. Recently, Graph Neural Networks (GNNs) have emerged as powerful tools for this purpose, demonstrating significant potential in modeling molecular data. Despite advances in GNN predictive performance, it remains unclear how architectural choices, particularly activation functions, affect training dynamics and inference behavior. To address this gap, this paper introduces a novel activation function, the Sine Linear Unit (SLU), aimed at enhancing the predictive capabilities of GNNs for molecular property prediction. To demonstrate the effectiveness of SLU within GNN architectures, we conduct experiments on diverse molecular datasets encompassing both regression and classification tasks. Our findings indicate that SLU consistently outperforms traditional activation functions on hydration free energy (FreeSolv), inhibitory binding of human β-secretase (BACE), and blood–brain barrier penetration (BBBP), achieving superior performance in each task, with one exception for the GCN model on the QM9 dataset. These results underscore SLU's potential to significantly improve prediction accuracy, making it a valuable addition to the field of molecular modeling.