In this study, we propose a novel deep-learning architecture with sparse learning for human activity recognition. The proposed model combines 1D CNNs and LSTM layers with a self-attention mechanism that emphasizes the most informative time steps in the time-series data. Motivated by the recent success of squeeze-and-excitation (SE) networks, the model also incorporates an SE module to capture channel-wise interdependencies, which further boosts performance. In addition, we apply sparse learning to retrain only the weak nodes in the fully connected layer preceding the classification layer while freezing the stronger nodes, using an entropy-inspired formula to identify the sparsely located weaker nodes. We validated the model on several benchmark datasets, including Opportunity, UCI-HAR, and WISDM, and present an extensive analysis alongside a survey of state-of-the-art studies. For a fair comparison, we evaluated the architecture using multiple performance metrics; the proposed model outperformed state-of-the-art algorithms for human activity recognition.
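To make the sparse-learning step concrete, the following is a minimal sketch of how an entropy-inspired score could separate weak nodes (to be retrained) from strong nodes (to be frozen) in a fully connected layer. The exact formula used in the paper is not reproduced here; the function `weak_node_mask`, the magnitude-over-entropy importance score, and the `retrain_frac` parameter are all illustrative assumptions.

```python
import numpy as np

def weak_node_mask(W, retrain_frac=0.3):
    """Mark the weakest fraction of nodes in a fully connected layer.

    Hypothetical illustration of an entropy-inspired node score:
    each node's incoming |weights| are normalized to a distribution,
    its Shannon entropy is computed, and nodes with low overall
    magnitude and diffuse (high-entropy) weights are treated as weak.

    W: weight matrix of shape (n_nodes, n_inputs).
    Returns a boolean mask, True = weak node selected for retraining.
    """
    abs_w = np.abs(W)
    # Normalize each node's absolute weights into a probability distribution.
    p = abs_w / (abs_w.sum(axis=1, keepdims=True) + 1e-12)
    # Shannon entropy per node: high entropy = weights spread thinly.
    entropy = -(p * np.log(p + 1e-12)).sum(axis=1)
    # Assumed importance score: total magnitude discounted by entropy,
    # so concentrated, high-magnitude nodes rank as strong.
    importance = abs_w.sum(axis=1) / (entropy + 1e-12)
    k = max(1, int(retrain_frac * W.shape[0]))
    weak_idx = np.argsort(importance)[:k]
    mask = np.zeros(W.shape[0], dtype=bool)
    mask[weak_idx] = True
    return mask
```

In a training loop, such a mask would gate gradient updates: parameters of strong nodes are frozen (gradients zeroed or `requires_grad` disabled), while the selected weak nodes continue to be retrained.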