Human forecasts and other probabilistic judgments can be improved by elicitation and aggregation methods. Recent work on elicitation shows that deriving probability estimates from relative judgments (the ratio method) is advantageous, whereas other recent work on aggregation shows that it is beneficial to transform probabilities into coherent sets (coherentization) and to weight judges' assessments by their degree of coherence. We report an experiment linking these areas by examining how coherentization and several forms of coherence weighting, under both direct and ratio elicitation, affect the accuracy of probability judgments (both forecasts and events with known distributions). We found that coherentization consistently improved accuracy. Moreover, judges' levels of probabilistic coherence were related to their judgment accuracy. Coherence weighting can therefore improve judgment accuracy, although the strength of the effect varies across elicitation and weighting methods. In addition, the benefit of coherence weighting was stronger for the “calibration” items used to establish the weights than for unrelated “test” items. Finally, echoing earlier research, we found overconfidence in judgment, and the degree of overconfidence was comparable between the two elicitation methods.
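To make the two key operations concrete, the sketch below illustrates one common way they can be implemented; it is not the paper's exact procedure. Coherentization is shown as a least-squares projection of judged probabilities for a mutually exclusive and exhaustive event set onto the simplex constraint (probabilities sum to 1), and the weighting function `1 / (1 + incoherence)` is a hypothetical choice used only for illustration.

```python
import numpy as np

def coherentize(probs):
    """Project judged probabilities for a mutually exclusive, exhaustive
    event set onto the coherent set (sum to 1) by least squares.
    This additive adjustment assumes the result stays nonnegative."""
    probs = np.asarray(probs, dtype=float)
    return probs + (1.0 - probs.sum()) / probs.size

def incoherence(probs):
    """Euclidean distance from the raw judgments to their coherentized
    version; zero means the judgments were already coherent."""
    probs = np.asarray(probs, dtype=float)
    return float(np.linalg.norm(probs - coherentize(probs)))

def coherence_weighted_aggregate(judgments):
    """Aggregate several judges' estimates, giving more weight to more
    coherent judges (illustrative weighting scheme, not the paper's)."""
    coherent = np.array([coherentize(j) for j in judgments])
    weights = np.array([1.0 / (1.0 + incoherence(j)) for j in judgments])
    weights /= weights.sum()
    return weights @ coherent

# Example: a judge assigns 0.7 to an event and 0.5 to its complement
# (incoherent, since 0.7 + 0.5 > 1); coherentization rescales to [0.6, 0.4].
print(coherentize([0.7, 0.5]))
```

The aggregate of coherentized judgments is itself coherent, since it is a convex combination of points that each sum to 1.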