Hierarchical temporal memory (HTM) is a promising unsupervised machine-learning algorithm that models key principles of neocortical computation. One of the main components of HTM is the spatial pooler (SP), which encodes binary input streams into sparse distributed representations (SDRs). In this paper, we propose an information-theoretic framework for comparing the performance of HTM SP algorithms, specifically for quantifying the similarities and differences between the SDRs they produce. We evaluate the SP's standalone performance as well as HTM's overall performance. Our comparison of various SP algorithms using Rényi mutual information, Rényi divergence, and Henze–Penrose divergence measures reveals that the SP algorithm with learning and a logarithmic boosting function yields the most effective and useful data representation. Moreover, the most effective SP algorithm also leads to superior HTM results. In addition, we use the proposed framework to compare HTM with other state-of-the-art sequential learning algorithms. We show that HTM adapts to pattern changes over time better than the long short-term memory (LSTM), gated recurrent unit (GRU), and online sequential extreme learning machine (OS-ELM) algorithms. This superiority is evident from the lower Rényi divergence of HTM (0.23) compared with LSTM6000 (0.33), LSTM3000 (0.38), GRU (0.41), and OS-ELM (0.49). HTM also achieves the highest Rényi mutual information value, 0.79, outperforming LSTM6000 (0.73), LSTM3000 (0.71), GRU (0.68), and OS-ELM (0.62). These findings not only confirm the advantages of HTM over the other sequential learning algorithms, but also demonstrate the effectiveness of the proposed information-theoretic approach as a powerful framework for comparing and evaluating learning algorithms.
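For reference, the abstract itself does not define the order-α measures it names; the forms below are the standard Rényi divergence between discrete distributions P and Q and one common convention for the induced Rényi mutual information (several variants exist in the literature, so the paper may use a different one):

\[
D_\alpha(P \,\|\, Q) \;=\; \frac{1}{\alpha - 1} \,\log \sum_{x} p(x)^{\alpha}\, q(x)^{1-\alpha},
\qquad
I_\alpha(X;Y) \;=\; D_\alpha\!\bigl(P_{XY} \,\|\, P_X P_Y\bigr),
\]

with both quantities recovering the Kullback–Leibler divergence and Shannon mutual information in the limit α → 1. The Henze–Penrose divergence, the third measure used, is notable for admitting a nonparametric estimate directly from samples via the Friedman–Rafsky multivariate runs statistic, without density estimation.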