Graph Neural Networks (GNNs) are a promising technique for representation learning on graph data. Graph Convolutional Networks (GCNs) and Graph Attention Networks (GATs) are two representative GNN architectures, which learn node embeddings by aggregating the embeddings of neighboring nodes. However, the complex computation of existing GCNs and GATs limits their effectiveness on tasks with complex graph structures. Moreover, traditional GCNs and GATs ignore the time factor in user representation learning, so the resulting user preference model is static and cannot reflect the dynamics of user preferences. In this work, we propose a time-aware Lightweight Graph Convolutional Attention Network (LightGCAN), which efficiently captures both static and dynamic user preferences by applying different GNN strategies. Specifically, static user preferences are modeled by a GCN that performs only neighborhood aggregation, and dynamic user preferences are extracted by a time-aware GAT over recently interacted items. The static and dynamic user preferences are then combined and fed into a dual-channel Deep Neural Network (DNN) for feature interaction learning and matching-score prediction. Extensive experiments on multiple datasets show that LightGCAN outperforms state-of-the-art recommendation methods.
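
The following is a minimal PyTorch sketch of the architecture outlined above: a linear neighborhood-aggregation GCN for the static channel, a time-aware attention over recently interacted items for the dynamic channel, and a dual-channel DNN for score prediction. The layer shapes, hyperparameters, and the exact attention formulation are assumptions for illustration only, since the abstract does not specify them.

```python
import torch
import torch.nn as nn


class LightGCANSketch(nn.Module):
    def __init__(self, num_users, num_items, dim=64, num_layers=2):
        super().__init__()
        self.user_emb = nn.Embedding(num_users, dim)
        self.item_emb = nn.Embedding(num_items, dim)
        self.num_layers = num_layers
        # Time-aware attention: scores each recent item against the target user,
        # conditioned on the elapsed time since the interaction (assumed form).
        self.att = nn.Linear(2 * dim + 1, 1)
        # Dual-channel DNN: one channel per preference type, fused for prediction.
        self.static_mlp = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU())
        self.dynamic_mlp = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU())
        self.predict = nn.Linear(2 * dim, 1)

    def propagate(self, norm_adj):
        # Neighborhood aggregation only: no feature transform or nonlinearity,
        # with layer outputs averaged (LightGCN-style simplification).
        x = torch.cat([self.user_emb.weight, self.item_emb.weight], dim=0)
        outs = [x]
        for _ in range(self.num_layers):
            x = torch.sparse.mm(norm_adj, x)
            outs.append(x)
        x = torch.stack(outs, dim=0).mean(dim=0)
        return torch.split(x, [self.user_emb.num_embeddings,
                               self.item_emb.num_embeddings])

    def forward(self, norm_adj, users, items, recent_items, recent_dt):
        # users, items: (B,); recent_items: (B, K); recent_dt: (B, K) elapsed times.
        all_u, all_i = self.propagate(norm_adj)
        u_static, i_emb = all_u[users], all_i[items]

        # Dynamic channel: time-aware attention over recently interacted items.
        r = all_i[recent_items]                            # (B, K, d)
        u_rep = u_static.unsqueeze(1).expand_as(r)         # (B, K, d)
        a = self.att(torch.cat([u_rep, r, recent_dt.unsqueeze(-1)], dim=-1))
        w = torch.softmax(a, dim=1)                        # (B, K, 1)
        u_dynamic = (w * r).sum(dim=1)                     # (B, d)

        # Dual-channel feature interaction and matching-score prediction.
        h_static = self.static_mlp(torch.cat([u_static, i_emb], dim=-1))
        h_dynamic = self.dynamic_mlp(torch.cat([u_dynamic, i_emb], dim=-1))
        return self.predict(torch.cat([h_static, h_dynamic], dim=-1)).squeeze(-1)
```

Here `norm_adj` is assumed to be the symmetrically normalized sparse user-item adjacency matrix, and `recent_items`/`recent_dt` hold each user's most recently interacted items and the time elapsed since those interactions; the paper's actual normalization, time encoding, and loss are not shown.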