How well do incentivized belief elicitation procedures work in online settings? We evaluate the quality of beliefs elicited from online respondents, comparing several characteristics of two widely used complex elicitation mechanisms (the Binarized Scoring Rule, BSR, and a stochastic variation of the Becker–DeGroot–Marschak mechanism, BDM) against a flat-fee baseline for a variety of beliefs (induced probabilities, first-order factual knowledge, and second-order knowledge of others). We find that the flat-fee method requires the least time, that the BDM is the most difficult to understand, and that average accuracy of induced beliefs does not differ across conditions. However, the methods differ significantly in the frequency with which first-order and second-order beliefs are reported at exactly 50%: the flat-fee method produces the most mass at this value, followed by the BDM and then the BSR. Regarding induced beliefs, we also find that less-educated participants are more accurate in the complex-incentive treatments, and that attention, numeracy, and education are positively associated with the quality of these beliefs across methods. Our results suggest that the quality of beliefs elicited in online environments may depend less on the formal incentive-compatibility properties of the elicitation procedure (whether the procedure prevents "dishonest" reporting) than on the difficulty of comprehending the task and how well the incentives induce cognitive effort (thereby prompting subjects to quantify or construct their beliefs).