Scandinavian Journal of Trauma, Resuscitation and Emergency Medicine (SJTREM) receives about 220 submissions every year. In line with the principles of open access publishing, all scientifically sound manuscripts that are original and comply with ethical and author guidelines will be considered for publication. Such editorial policies are in effect in most journals, and together with rigorous peer review they are the cornerstones of scientific dissemination. The rejection rate for submitted papers and the citations to published papers are considered when the overall quality or impact of a journal is evaluated; however, both metrics have serious shortcomings and cannot be used to compare journals without caution. SJTREM currently rejects 60% of all submitted papers; the current impact factor is 1.85.

In clinical medicine we want to provide our patients with the best possible treatment, and we need to know whether a new intervention is more effective than established practice. To find out, we need clinical trials. The crucial point in any comparison is fairness, which in scientific terms means ruling out as many sources of bias and confounding as possible. Randomization is a powerful tool to ensure that unknown factors are evenly distributed among the experimental groups, and the randomized controlled trial (RCT) is the gold standard. But even the most elegant study may end up unpublished if the results are unexpected (or even unwanted!), or the results may be tweaked to satisfy a study sponsor’s financial interests. Fear of such publication bias led to the establishment of clinical trial registries. Registration is now required for publication in most medical journals [1] and is also regulated by law in some countries. The main purpose of such registration is public disclosure of the ethical considerations, intentions, and rationale before subjects are recruited into the trial; secondarily, it helps avoid planning and funding studies that are already under way; finally, it may serve as a data repository after the study ends, giving other researchers the opportunity to verify or collate results. For journals that must decide on publication, these registries provide an opportunity to verify that the submitted paper adheres to the original plan for conduct and analysis. Even so, a recent analysis of 40 RCTs published after 2005 revealed improper presentation of outcomes or analyses in up to 75% of the studies when compared with the original protocol, and late registration in ClinicalTrials.gov in a similar proportion (url: http://www.ncbi.nlm.nih.gov/books/NBK100613/, accessed January 18, 2013).

The issues mentioned above are sound reasons to publish, or otherwise make publicly available, the full protocols of clinical trials. For the editors, the question remains whether the protocol represents a redundant publication when it is already publicly available in a registry. For the competitive researcher, the question may be how to be properly rewarded for the huge amount of work put into protocol development, in a system where counting publications may determine future funding or promotion. An additional grievance for the publisher may be that, even after agreeing to publish the protocol, the chances of receiving the final results for publication are small, given the authors’ understandable quest for high-impact journals and prestigious papers.
Sten Rubertsson and colleagues here present the protocol of the LINC study, which has just completed patient inclusion. The LINC study is a randomized controlled trial of mechanical chest compressions with LUCAS™ combined with a specific algorithm of chest compressions and defibrillations. After careful editorial consideration, SJTREM has decided to publish the protocol even though the research and analysis plan has been publicly available on ClinicalTrials.gov since 2008 (NCT00609778). The published protocol provides additional rationale and background for the trial, as well as a detailed description of its organization; we hope it may aid others in organizing clinical trials. However, as the trial is now concluded, there is little left for peer review to change beyond issues of language and clarity. We will therefore print the original protocol more or less as submitted and instead provide a critical commentary on the protocol in this editorial. The authors have the opportunity to respond separately. As an aid in this process, we will follow the 10-dimensional trial quality assessment list suggested by Berger and Alperson [2]. Their paper aims to identify unique aspects that may influence trial quality, and differs from previous systems for trial assessment by proposing that the final “quality score” be obtained by multiplying the individual scores. We have avoided the scoring, though, and apply the list only in a “check-list” or qualitative way. We will also briefly comment on what was made public on ClinicalTrials.gov, and finally on what we perceive as the strengths and weaknesses of the study.