Rigorous validation of musculoskeletal models is time consuming and often technically difficult. And so, the question arises, “How good is good enough?” The goal of this paper is to offer a response to that question. The discussion will be complemented by relevant validation examples from the open literature pertaining to one commercial musculoskeletal simulation software, the AnyBody Modeling System (AMS).

Given that a model has been built with good (verified) software, validation is the critical step of corroborating that model predictions are representative of the physical system. There is a relatively well-established validation vocabulary [1] that mirrors concepts derived from other fields (e.g., FEA, CFD). Direct measurement of model-predicted quantities is preferred. When outcome metrics are difficult or impossible to measure directly, indirect validation through surrogate measures is common. And trend validation ensures that the complex interactions among many musculoskeletal parameters are consistent with physical behavior. There are two ways to build additional confidence in individual models: seek more accessible validation data from a level farther down the validation hierarchy [2], and look to the open literature.

Considering the literature related to AMS, there are more than 60 peer-reviewed studies in the scientific literature that speak specifically to validation [3]. Many meet the high standard of direct validation. Approximately half of the studies report indirect comparisons of surface EMG traces to predicted muscle activation, and about 10% of them report interface forces between the body and the external environment. Table 1 summarizes the range of body regions and activities included among published AMS validation studies. Such a body of historical evidence does not promise model validity in future studies, but it adds to the credibility of the tested body models and classes of tasks.
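As a minimal illustration of what a direct quantitative comparison can look like in practice, the sketch below computes two common agreement metrics between a measured and a model-predicted time series. The signals and the helper name `validation_metrics` are hypothetical, not drawn from the studies discussed here; the example only assumes that measured and predicted quantities are sampled at matching time points.

```python
import numpy as np

def validation_metrics(measured, predicted):
    """Summarize agreement between a predicted and a measured time series.

    Returns RMSE (in the units of the signals) and the Pearson correlation
    coefficient, two common summary metrics for direct quantitative validation.
    """
    measured = np.asarray(measured, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    rmse = np.sqrt(np.mean((predicted - measured) ** 2))
    r = np.corrcoef(measured, predicted)[0, 1]
    return rmse, r

# Hypothetical data: a force profile over a normalized cycle, with the
# "prediction" differing from the "measurement" by small synthetic noise.
t = np.linspace(0.0, 1.0, 101)
measured = 200.0 * np.sin(np.pi * t)   # stand-in for instrumented-implant data
predicted = measured + np.random.default_rng(0).normal(0.0, 10.0, t.size)
rmse, r = validation_metrics(measured, predicted)
```

Whether a given RMSE or correlation is "good enough" cannot be read off the metric alone; as the discussion below argues, the intended application sets the bar.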
This type of development is typical for CAE technologies and eventually leads to the establishment of best practices within which the technologies are trusted.

Thielen and colleagues [4] performed a direct quantitative validation of hip joint force using an instrumented hip implant during stair ascent (Fig. 1). It would be fair to say this model is good enough to predict hip joint loads for a specific patient during stair climbing. Level gait should be validated separately, but if that were not possible, these data do add confidence that gait predictions would also be reasonable.

Fig. 2 shows a validation of reaction forces on the pedals of a recumbent bicycle [5]. If this model were to be used for product design, it could be argued that it is certainly good enough; if used to prescribe rehabilitation following ACL reconstruction, perhaps more accuracy would be needed.

The short answer to the question, “How good is good enough?”, in musculoskeletal model validation is: “It depends.” Sound validation practices are a start, but the specific application will dictate the level of rigor required to ensure quantitative accuracy.