I asked a colleague who had just returned from his medical society's annual scientific meeting whether the meeting was a success. He countered by asking, "How do you define success?" As one dyed-in-the-wool decision scientist listening to another, I nodded in appreciation of his question, for success, at least as it pertains to annual meetings, is a multiattribute construct. For the attendee, success might be defined by new information, skills, and techniques attained, be they clinical, methodologic, or educational; by meeting people in one's field for the first time; by networking (or simply schmoozing) with old friends and colleagues; or perhaps by recruiting someone or by being recruited. For the organizing committee, success might be measured by attendance and by how smoothly the meeting ran. For the society's administration, certainly one aspect of a meeting's success is the revenue it generates toward offsetting the society's operating expenses. Yet for a scientific meeting such as the Society for Medical Decision Making's (SMDM's) annual meeting, the main measure of success, arguably, is the quality of the research presentations.

How can one judge the quality of a research project, particularly when it is compressed into a 10-minute oral presentation or shoe-horned onto a 4×8-foot poster board in a crowded ballroom? The ultimate litmus test of an abstract presentation is whether it begets a peer-reviewed publication, an outcome that cannot be ascertained until months or years after everyone goes home.

In this issue of Medical Decision Making (MDM), Greenberg and colleagues report a study of the 4-year outcomes of research presented at the 2003 SMDM annual meeting. Of 239 abstracts presented as oral or poster presentations, 64 (27%) were ultimately published as full-length manuscripts in peer-reviewed journals.

Studies of the fate of meeting abstracts date back to the pre-SMDM era. The Cochrane Collaboration recently reviewed 79 studies of the outcomes of abstracts presented at medical meetings, finding that 44.5% of abstracts completed the metamorphosis into a full-length paper, with oral abstracts, abstracts reporting statistically significant findings, and abstracts reporting on randomized controlled trials faring the best. The article by Greenberg and coauthors is thus certainly not the first to examine the fate of scientific meeting abstracts, but it is perhaps the first to examine abstracts presented at a medical decision making, health policy, health services research, or health economics meeting.

As with previous studies, Greenberg and colleagues found that publication rates were greater for oral presentations than for poster presentations, in their case by a 2:1 margin. In putting together the scientific program, the program committee generally assigns the highest-rated abstracts to oral abstract sessions, so it comes as no surprise that oral abstracts tend to have higher publication rates. That finding notwithstanding, we do not know what the respective manuscript submission rates were. In other words, oral abstracts may have been more likely to be submitted for publication as manuscripts; alternatively, similar proportions of orals and posters may have been submitted, with posters facing a higher rejection rate.
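To make that distinction concrete, consider a minimal sketch in Python of the identity publication rate = submission rate × acceptance rate. Every rate below is a hypothetical assumption chosen for illustration (the actual submission and acceptance rates are unknown, and the function name is mine, not Greenberg and colleagues'):

    # Hypothetical decomposition: P(published) = P(submitted) * P(accepted | submitted).
    # All rates below are illustrative assumptions, not data from the study.

    def publication_rate(p_submit, p_accept):
        """Overall probability that a presented abstract ends up as a published paper."""
        return p_submit * p_accept

    # Scenario A: oral presenters submit manuscripts more often; journals accept equally.
    oral_a = publication_rate(p_submit=0.80, p_accept=0.50)    # 0.40
    poster_a = publication_rate(p_submit=0.40, p_accept=0.50)  # 0.20

    # Scenario B: submission rates are identical; posters are rejected more often.
    oral_b = publication_rate(p_submit=0.60, p_accept=2 / 3)   # 0.40
    poster_b = publication_rate(p_submit=0.60, p_accept=1 / 3) # 0.20

    # Both scenarios yield the same 2:1 oral-to-poster margin in published papers,
    # so publication counts alone cannot distinguish the two mechanisms.
    print(f"Scenario A -- oral: {oral_a:.2f}, poster: {poster_a:.2f}")
    print(f"Scenario B -- oral: {oral_b:.2f}, poster: {poster_b:.2f}")

Under either set of assumptions the published counts look identical, which is precisely why submission-rate data would be needed to tell the two explanations apart.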
Other forms of publication bias could have been at work as well.

From the Section of Outcomes Research, Division of General Internal Medicine, Department of Internal Medicine, University of Cincinnati College of Medicine, Cincinnati, Ohio; Center for Clinical and Translational Science and Training, University of Cincinnati Academic Health Center, Cincinnati, Ohio; Center for Clinical Effectiveness, University of Cincinnati Academic Health Center, Cincinnati, Ohio; and Health Services Research and Development Service, Veterans Affairs Medical Center, Cincinnati, Ohio. Supported in part by grant number K24 AT001676 from the National Center for Complementary and Alternative Medicine.