This paper proposes a framework of models for making information system assessments and provides empirical evidence relevant to the framework. Perceptions of the decision language and degree of structure appropriate to each model are tested, as is the impact of training and experience on the perceived usefulness of various assessment models. Results indicate that assessment models based primarily on quantitative language were perceived as more useful when executed as structured procedures, whereas models based primarily on qualitative language were perceived as more useful when executed as unstructured procedures. In addition, perceptions of a decision model's usefulness were affected by participants' training and experience. The findings suggest that no single model is perceived as rich enough to encompass the full range of decision languages and procedures, and that the perceived usefulness of any given model depends on an individual's training and experience. Triangulation and dialectic inquiry are suggested as possible multimodel strategies for enriching information system assessment practice.