Unsupervised, data-driven methods are commonly used in neuroscience to automatically decompose data into interpretable patterns. The patterns obtained differ depending on the assumptions of the underlying models. How these assumptions affect specific data decompositions in practice, however, is often unclear, which hinders model applicability and interpretability. For instance, the hidden Markov model (HMM) automatically detects characteristic, recurring activity patterns (so-called states) from time series data. States are defined by a probability distribution whose state-specific parameters are estimated from the data. But which specific features, among all those the data contain, do the states capture? That depends on the choice of probability distribution and on other model hyperparameters. Using both synthetic and real data, we aim to better characterize the behavior of two HMM types that can be applied to electrophysiological data. Specifically, we study which differences in data features (such as frequency, amplitude, or signal-to-noise ratio) are more salient to the models and therefore more likely to drive the state decomposition. Overall, we aim to provide guidance for the appropriate use of this type of analysis on one- or two-channel neural electrophysiological data, and for an informed interpretation of its results, given the characteristics of the data and the purpose of the analysis.

NEW & NOTEWORTHY Compared with classical supervised methods, unsupervised analysis methods have the advantage of being freer from subjective biases. However, it is not always clear which aspects of the data these methods are most sensitive to, which complicates interpretation. Focusing on the hidden Markov model, commonly used to describe electrophysiological data, we explore in detail the nature of its estimates through simulations and real-data examples, providing important insights into what to expect from these models.
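To make the kind of state decomposition described above concrete, the sketch below fits a Gaussian-observation HMM to a synthetic two-channel signal that alternates between a low-amplitude and a high-amplitude regime. It assumes the Python hmmlearn library and a toy generated signal; it illustrates the general technique only, not the specific models or implementation used in this study.

```python
# Minimal HMM state-decomposition sketch (illustrative only; assumes
# hmmlearn and a toy two-channel, amplitude-switching signal).
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(0)

# Synthetic two-channel time series: low- and high-amplitude regimes
# alternating every 500 samples.
low = rng.normal(0.0, 1.0, size=(500, 2))
high = rng.normal(0.0, 3.0, size=(500, 2))
X = np.vstack([low, high, low, high])  # shape: (2000 samples, 2 channels)

# Gaussian-observation HMM with two states; each state is defined by
# its own mean and covariance, estimated from the data.
model = hmm.GaussianHMM(n_components=2, covariance_type="full",
                        n_iter=100, random_state=0)
model.fit(X)

# Viterbi-decoded state sequence: which state best explains each sample.
states = model.predict(X)
print(states[:10])        # samples from the low-amplitude regime
print(states[500:510])    # samples from the high-amplitude regime
```

In this toy example, amplitude is the only feature that differs between regimes, so the Gaussian HMM's states track amplitude; with real electrophysiological data, which features drive the decomposition depends on the observation distribution and hyperparameters, which is the question the study examines.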