Lawrence Phillips (London School of Economics and Political Science)

As we sat down to lunch in the staff dining room at University College London about 30 years ago, Dennis Lindley introduced me to a distinguished-looking gentleman who, to my delight, turned out to be Professor Pearson. My youthful enthusiasm for Bayesian statistics, kindled at the University of Michigan in the 1960s by Ward Edwards and Jimmy Savage, led us to discuss the difficulty of obtaining a prior distribution. Professor Pearson gently explained that he could see no way of determining meaningful prior probabilities from the scientists with whom he had worked. The three papers presented today all say, 'Yes, Professor Pearson, it can be done'.

However, the authors are providing perspectives that are very different from those of 30 years ago. First, the papers go beyond exploiting the properties of vague prior opinion to justify using non-informative priors, as in the principle of stable estimation (Edwards et al., 1963). For these examples, prior opinion is not vague and it matters.

Second, the key question is not how to assess prior opinion, as O'Hagan recognizes; the more general issue is how to obtain meaningful and useful representations of expert uncertainty expressed in probabilistic form. That could apply to priors, likelihoods, posteriors, predictions or indeed any uncertainties.

Third, as human beings, experts naturally experience uncertainty as a feeling, not as a numerical probability, so a good elicitation process helps the experts to learn what numbers are appropriate representations of their feelings and minimizes the biases summarized by Kadane and Wolfson.

Fourth, we have access to computing power that can reduce the complexities of elicitation and make consistency checking an easy task, as the University of Durham authors, among others, demonstrate to excellent effect.

Finally, all the authors make it clear that the process of assessing expert uncertainty deserves careful attention: in my view, at least as much care as is devoted to the design of experiments. Most of us can distinguish a well-designed experiment from a poor one, but would we agree on criteria for judging the acceptability of an elicitation process? These papers suggest several criteria, which I shall summarize, supplemented by my own experience.

The most important consideration is that proper elicitation is a sociotechnical process, as demonstrated in the early handbook by Stael von Holstein and Matheson (1979). The interaction between assessor and expert(s) requires careful handling of both social and technical issues at each of the following three main stages in the assessment process.