Important public policy decisions are often very complex, involving many factors, large uncertainties, and multiple conflicting objectives. Decision analysis, a formal framework for dealing with such issues, has been applied to a variety of public-sector problems over the last two decades. A limitation of decision analysis is the often onerous assessment burden it places on the decision maker (DM). We have developed a novel interactive procedure and Decision Support System (DSS), called robust interactive decision analysis (RID), for performing decision analysis on the personal computer; it avoids the difficult problem, associated with traditional decision tree analysis, of precisely assessing a DM's utility and state probability functions. The RID method permits a DM to voluntarily and interactively express strong (viz., sure) binary preferences for actions, partial decision functions, and full decision functions, together with only imprecise probability and utility function assessments. These inputs are used with various dominance operators, via a mathematical programming framework, to prune the state probability, utility, and decision spaces until they converge on a relatively small subset of efficient strategies or an optimal choice strategy. Conceptually, the operation of the RID method can be regarded as a state-pruning system and its computer implementation as a DSS. In this paper, we extend the RID concept and DSS to deal with multiple-criteria decisions, obviating the need to precisely and completely assess attribute weights (tradeoffs), consequences, and/or value/utility functions. The approach, which we call multiple-criteria robust interactive decision analysis (MCRID), is developed, illustrated, and shown to be effective in eliminating many inefficient alternatives. However, because the aggregate utility of each alternative is interval-valued, it becomes difficult to intuitively choose an optimal alternative from among the remaining efficient ones, since the intervals are not defined by any specific probability distributions. We have tentatively shown that aggregate utility intervals ultimately converge toward a normal density function and can be so approximated with relatively few attributes. Hence, stochastic dominance is employed to further filter out alternatives. An interactive goal programming (GP) technique is then developed to assist the DM in choosing an optimal alternative.
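As a minimal illustration of the interval-dominance pruning idea summarized above (not the paper's exact MCRID formulation, which relies on a mathematical programming framework), the following Python sketch assumes hypothetical interval-valued single-attribute utilities and interval attribute weights, computes a loose aggregate utility interval for each alternative by naive interval arithmetic, and discards any alternative whose best case falls below another alternative's worst case.

```python
def aggregate_interval(utils, weights):
    """Loose aggregate utility interval for one alternative.

    utils   : list of (lo, hi) single-attribute utility intervals in [0, 1]
    weights : list of (lo, hi) nonnegative attribute-weight intervals
    """
    lo = sum(w_lo * u_lo for (u_lo, _), (w_lo, _) in zip(utils, weights))
    hi = sum(w_hi * u_hi for (_, u_hi), (_, w_hi) in zip(utils, weights))
    return lo, hi


def prune_dominated(alternatives, weights):
    """Drop every alternative whose best-case aggregate utility lies
    below some other alternative's worst-case aggregate utility."""
    agg = {name: aggregate_interval(utils, weights)
           for name, utils in alternatives.items()}
    efficient = {}
    for name, (lo, hi) in agg.items():
        dominated = any(other_lo > hi
                        for other, (other_lo, _) in agg.items()
                        if other != name)
        if not dominated:
            efficient[name] = (lo, hi)
    return efficient


if __name__ == "__main__":
    # Hypothetical example: three alternatives scored on two attributes.
    alternatives = {
        "A": [(0.6, 0.8), (0.7, 0.9)],
        "B": [(0.2, 0.3), (0.1, 0.4)],
        "C": [(0.5, 0.9), (0.4, 0.7)],
    }
    weights = [(0.4, 0.6), (0.4, 0.6)]  # imprecise tradeoff weights
    # B is eliminated (its best case 0.42 is below A's worst case 0.52);
    # A and C remain efficient, with overlapping utility intervals.
    print(prune_dominated(alternatives, weights))
```

In this sketch, alternative B is interval-dominated and removed, while A and C survive with overlapping aggregate utility intervals, mirroring the situation described above in which further discrimination (e.g., by stochastic dominance or goal programming) is needed among the remaining efficient alternatives.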