The Landsat multispectral time series is a valuable source of moderate spatial resolution data for supporting forest mapping and monitoring tasks. Using United States Department of Agriculture (USDA) Forest Service Forest Inventory and Analysis (FIA) plots within the states of Michigan, Oregon, and West Virginia, two methods for summarizing time series observations, harmonic regression coefficients and Global Land Analysis and Discovery (GLAD) phenology metrics, are compared for predicting forest community type, total aboveground live biomass (AGLBM), and species-specific AGLBM. Harmonic regression coefficients generally offered better predictive performance than GLAD phenology metrics for differentiating forest community types, providing mean overall accuracies (OAs), calculated across multiple model replicates, between 62.8% and 73.1% and map image classification efficacies (MICEs) between 0.455 and 0.566 for the three studied states. However, differences were not always statistically significant. Digital terrain model (DTM)-derived terrain variables improved random forest machine learning classification performance in some landscapes; for example, the highest mean OA of 78.5% (MICE = 0.557) was obtained for West Virginia when combining harmonic regression coefficients and terrain variables. For the regression-based estimation of total and species-specific AGLBM, the choice of spectral predictors and the incorporation of terrain variables had less impact on model performance. Further, the regression models for Oregon yielded higher mean R-squared values (0.452 to 0.490) than those for Michigan and West Virginia, where all R-squared values were below 0.200, suggesting that the difficulty of AGLBM prediction varies among landscapes depending on forest characteristics, terrain, management practices, and disturbance histories.
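To illustrate what the harmonic regression predictors represent, the sketch below fits a first-order harmonic model (intercept plus annual cosine and sine terms) to a single pixel's reflectance time series by ordinary least squares; the fitted coefficients are the kind of per-band summaries that can be stacked as inputs to a classifier or regression model. The function name, the first-order model form, and the synthetic observations are illustrative assumptions rather than the study's exact specification.

```python
import numpy as np

def harmonic_coefficients(doy, reflectance, period=365.25):
    """Fit a first-order harmonic (intercept + annual cosine + annual sine)
    model to one pixel's reflectance time series by ordinary least squares
    and return the three coefficients. `doy` is the day of year of each
    valid observation; `reflectance` is the corresponding surface reflectance."""
    t = 2.0 * np.pi * np.asarray(doy, dtype=float) / period
    # Design matrix: constant term, annual cosine, annual sine.
    X = np.column_stack([np.ones_like(t), np.cos(t), np.sin(t)])
    coeffs, *_ = np.linalg.lstsq(X, np.asarray(reflectance, dtype=float), rcond=None)
    return coeffs  # [mean level, cosine coefficient, sine coefficient]

# Illustrative use with synthetic observations for a single pixel and band.
rng = np.random.default_rng(0)
doy = np.sort(rng.integers(1, 366, size=40))
nir = 0.3 + 0.1 * np.sin(2 * np.pi * doy / 365.25) + rng.normal(0, 0.02, size=doy.size)
print(harmonic_coefficients(doy, nir))
```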