The application of Long Short-Term Memory (LSTM) models to streamflow prediction has been an area of rapid development, supported by advances in computing technology and the growing availability of the spatiotemporal and historical data needed to train data-driven models. Several studies have focused on improving the performance of LSTM models; however, few have assessed their applicability across different hydroclimate regions. This study investigated single-basin-trained local models (one model per basin), multi-basin-trained regional models (one model per region), and a grand model (one model spanning several regions) for predicting daily streamflow in the water-limited Great Basin (18 basins) and the energy-limited New England (27 basins) regions of the United States, using the CAMELS (Catchment Attributes and Meteorology for Large-sample Studies) data set. The results show that, for most basins in the New England region, the regional model generally produced more accurate daily streamflow predictions than the local or grand models. In the Great Basin region, local models yielded smaller errors for most basins, and substantially smaller errors for basins where the regional and grand models performed relatively poorly. An evaluation of one-layer and three-layer LSTM network architectures trained with 1-day-lag information indicates that adding complexity by increasing the number of layers does not necessarily improve streamflow prediction skill. Findings from our study highlight the strengths and limitations of LSTM models across contrasting hydroclimate regions in the United States, which could inform local- and regional-scale decisions using standalone LSTM models or their potential integration with physics-based hydrological models.
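The local, regional, and grand models compared above differ in training scope and depth, but all share the same core recurrent update: the standard LSTM cell, whose gates let the network retain hydrologically relevant memory (e.g., antecedent wetness) across time steps. The following is a minimal pure-Python sketch of one LSTM cell step; the input/hidden sizes, toy weights, and forcing variable names are illustrative assumptions, not the study's actual configuration or framework.

```python
import math
import random

def lstm_cell_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step. W, U, b hold input, recurrent, and bias
    parameters for the four gates: input (i), forget (f), cell (g),
    output (o). Sizes and weights here are arbitrary (illustrative)."""
    def sigmoid(z):
        return 1.0 / (1.0 + math.exp(-z))

    n = len(h_prev)
    # Pre-activation for each gate: b + W x + U h_prev
    pre = []
    for k in range(4):  # order: i, f, g, o
        row = []
        for j in range(n):
            z = b[k][j]
            z += sum(W[k][j][m] * x[m] for m in range(len(x)))
            z += sum(U[k][j][m] * h_prev[m] for m in range(n))
            row.append(z)
        pre.append(row)

    i = [sigmoid(z) for z in pre[0]]          # input gate
    f = [sigmoid(z) for z in pre[1]]          # forget gate (memory of past states)
    g = [math.tanh(z) for z in pre[2]]        # candidate cell update
    o = [sigmoid(z) for z in pre[3]]          # output gate
    c = [f[j] * c_prev[j] + i[j] * g[j] for j in range(n)]  # new cell state
    h = [o[j] * math.tanh(c[j]) for j in range(n)]          # new hidden state
    return h, c

# Toy usage: 3 hypothetical meteorological forcings (e.g., precipitation,
# temperature, radiation), hidden size 4, starting from zero states.
random.seed(0)
nx, nh = 3, 4
W = [[[random.uniform(-0.5, 0.5) for _ in range(nx)] for _ in range(nh)] for _ in range(4)]
U = [[[random.uniform(-0.5, 0.5) for _ in range(nh)] for _ in range(nh)] for _ in range(4)]
b = [[0.0] * nh for _ in range(4)]
h, c = lstm_cell_step([1.2, -0.3, 0.5], [0.0] * nh, [0.0] * nh, W, U, b)
```

A one-layer model applies this cell once per day of forcing; a three-layer model stacks three such cells, feeding each layer's hidden state to the next, which is the added complexity the abstract reports did not necessarily improve skill.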