Ricklefs and Renner (2000) and Ricklefs and Starck (1997) argue that comparative studies that have used phylogenetically independent contrasts (PIC) usually add little or nothing to the results that can be obtained using comparative methods that ignore phylogeny. Although it is true that the results of Dodd et al. (1999) are remarkably similar to those of Ricklefs and Renner (1994), this demonstrates the robustness of the latter's results rather than the redundancy of the phylogenetic methods used by Dodd et al. (1999) in their reanalysis. There are many instances where PIC and cross-species comparative analyses (TIP) produce similar results (Ricklefs and Starck 1997, Ackerly 2000), but there are also exceptions which caution against the assumption that phylogeny can be routinely ignored. For example, Tofts and Silvertown (2000) found quite different results from PIC and TIP analyses of twelve plant traits, including a major effect of one trait on community assembly that was only revealed to be significant using PIC.

TIP analyses are prone to pseudoreplication, and simulation studies have demonstrated the unacceptably high Type I error rate of the method (Martins and Garland 1991). Why, then, do so many empirical studies show that PIC and TIP results concur?

One possible reason is that biologists choose to test patterns of trait correlation which they have good reason to believe already exist in their data. This was certainly true of Ricklefs and Renner (1994), whose work confirmed the earlier results of Eriksson and Bremer (1992) and others. Science proceeds by pursuing positive results in preference to negative ones (Silvertown and McConway 1997), but to forget this risks complacency. To test the idea that concurrence of PIC and TIP results is influenced by how hypotheses are selected, we have performed a Bayesian analysis of the two methods.

Consider a study where a correlation under investigation is significant (P < 0.05) using a TIP procedure.
As is well known, the P-value derived from a statistical test is not the probability that the null hypothesis is true; it is the probability of getting results as extreme as those actually observed, under the assumption that the null hypothesis is true. Using a Bayesian approach to calculate the probability that the observed correlation really exists in nature and is not simply the result of random sampling variability, one needs to know the "prior" probability, unconditional on the observed data, that the hypothesis of a real correlation in nature is actually true. If the correlation being investigated had simply been chosen at random from all the possible trait correlations, the prior probability that it is real would be likely to be very low; for illustration let us take it as 0.01. If, as is much more realistic in our view, the correlation had been selected for investigation because it looked potentially interesting on theoretical grounds and because of indicative positive results from previous studies, the prior probability of a real correlation would be much higher: 0.5 is a plausible value. Indeed, if the research depends on funding, an even higher prior probability such as 90% might well be appropriate, because a study for which the prior evidence of a real correlation was only 50% would be unlikely to be funded.

The other necessary inputs to a crude Bayesian analysis are the probabilities of positive and negative results, which are conditional on the actual correlation present in nature. Information on plausible sizes for these probabilities can be obtained from simulation studies, where it is known what the true "state of nature" is. For example, the simulation study of Purvis et al. (1994, tables 1, 2) found that TIP correctly identified significant correlations (P < 0.05) in 58.6% of cases and PIC did so in 68.6% of cases.
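As an illustration, this posterior probability can be sketched with a direct application of Bayes' theorem: P(real | significant) = prior × power / (prior × power + (1 − prior) × Type I error rate). The snippet below is our own sketch, not part of the original analysis; it combines the sensitivity figures above with the corresponding Type I error rates reported by Purvis et al. (1994) for a true correlation of zero (31.4% for TIP, 12.4% for PIC), and the function name is ours.

```python
def posterior_real(prior, power, type1):
    """P(real correlation | significant test result), by Bayes' theorem.

    prior : prior probability that a real correlation exists in nature
    power : P(significant result | real correlation), from simulations
    type1 : P(significant result | no real correlation), from simulations
    """
    return prior * power / (prior * power + (1 - prior) * type1)

# Power and Type I error rates from the Purvis et al. (1994) simulations
methods = {"TIP": (0.586, 0.314), "PIC": (0.686, 0.124)}

for prior in (0.01, 0.50, 0.90):
    for name, (power, type1) in methods.items():
        p = posterior_real(prior, power, type1)
        print(f"prior={prior:.2f}  {name}: {p:.3f}")
# e.g. prior=0.50  TIP: 0.651
```

With a prior of 0.50, the TIP posterior is 0.651, matching the value quoted from Table 1 below.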
Where the true correlation was actually zero, TIP erroneously found a significant correlation in 31.4% of cases and PIC in 12.4%. Using these figures and Bayes' theorem, we compared the performance of PIC and TIP with the prior probability of a real correlation set at 0.01, 0.50 or 0.90, and obtained the results shown in Table 1. The results show the huge impact that the prior probability of a true correlation can have on the interpretation of a significant test result. With a prior probability of 0.50, TIP performs only 15% better than guesswork (0.651 − 0.500; Table 1).

Similar calculations can be used to investigate the extent to which TIP and PIC results are likely to concur. To do so fully would require results from simulation studies on how often the results of the two methods agree in terms of giving statistically significant correlations. Such information is not available in reports of simulation studies, because it is not directly relevant to what they are investigating. However, we can make some progress by noting that the observed perfor-