Abstract

This reply is written in response to Wynne and Botti's (in press) commentary on our study (Oh & Seo 2003), which was conducted to provide an integrative picture of the effects of interventions applied to prevent endotracheal suction-induced hypoxia. Although earlier investigators (Barnes & Kirchhoff 1986) conducted systematic reviews on this matter, it was still difficult to draw precise conclusions about the effectiveness of such interventions. We therefore attempted to clarify their effects by synthesizing the available study results using meta-analysis. Wynne and Botti's comments can be divided into three points: (1) whether the search procedure for studies was comprehensive, (2) whether selection or publication bias was avoided, and (3) whether the validity of the included studies was assessed and their results were combined appropriately. Our replies to these three points follow.

To obtain a sample for this meta-analysis, a computerized search was performed through MEDLINE, supplemented by tracking down references cited in the bibliographies of past reports. Because we followed up all references (including unpublished studies) cited in several review articles on related topics, we believe the search captured a substantial proportion of the relevant studies. Figure 1 shows the search procedure for the studies included in this meta-analysis.

As noted by Wynne and Botti, selection/publication bias arising from the possible existence of unknown or unpublished results must be minimized to ensure the reliability of a meta-analysis. Undoubtedly, every possible effort should be made to search comprehensively. In practice, however, unpublished studies are not easy to locate or obtain, and whether the relevant studies (including unpublished results) have been searched sufficiently can be neither guaranteed nor objectively evaluated. One way to address this problem is to estimate the magnitude of the publication bias by calculating the fail-safe number, that is, the minimum number of unpublished studies with non-significant findings that would be needed to overturn the conclusion of the meta-analysis. The larger the fail-safe number, the safer the results are from publication bias. The fail-safe number calculated in our study was 25-33, meaning that the results of this meta-analysis could be overturned only if 25-33 additional unpublished studies with non-significant findings were included. This number indicates that the meta-analysis in this study was relatively safe from publication bias. Another way to check for publication bias is to conduct a meta-regression (weighted regression): if the meta-regression shows that the number of subjects in each study significantly affects the combined effect size, the presence of selection/publication bias would be suspected. Computational sketches of both checks follow.
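To make the fail-safe number concrete, the following is a minimal sketch of Rosenthal's fail-safe N, computed from per-study z values at a one-tailed alpha of 0.05; the z values shown are hypothetical and are not the data from our study.

    import math

    def fail_safe_n(z_values, z_alpha=1.645):
        # Rosenthal's fail-safe N: the minimum number of unpublished null
        # studies (mean z = 0) needed to pull the combined z below z_alpha.
        # Combined test: sum(z) / sqrt(k + X) < z_alpha
        #            =>  X > (sum(z) / z_alpha)**2 - k
        k = len(z_values)
        z_sum = sum(z_values)
        return max(0, math.ceil((z_sum / z_alpha) ** 2 - k))

    # Hypothetical per-study z values
    z = [2.1, 1.8, 2.5, 1.2, 2.9, 1.6, 2.2]
    print(fail_safe_n(z))  # -> 69 null studies needed to overturn significance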
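Similarly, the meta-regression check mentioned above can be sketched as a weighted least-squares regression of effect size on sample size, with inverse-variance weights; the study data here are hypothetical, and the closed-form solution assumes known sampling variances.

    import numpy as np

    # Hypothetical studies: effect size d, sampling variance v, sample size n
    d = np.array([0.55, 0.40, 0.70, 0.35, 0.62])
    v = np.array([0.040, 0.020, 0.090, 0.015, 0.060])
    n = np.array([30.0, 60.0, 14.0, 80.0, 22.0])

    w = 1.0 / v                                  # inverse-variance weights
    X = np.column_stack([np.ones_like(n), n])    # intercept + sample size
    XtWX = X.T @ (w[:, None] * X)
    beta = np.linalg.solve(XtWX, X.T @ (w * d))  # WLS coefficients
    cov = np.linalg.inv(XtWX)                    # covariance under known variances
    z_slope = beta[1] / np.sqrt(cov[1, 1])

    # |z_slope| > 1.96 would suggest that study size predicts effect size,
    # i.e. possible selection/publication bias.
    print(beta[1], z_slope)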
We evaluated the quality of the studies included in this meta-analysis on the basis of the following points. For randomized clinical trials, a five-point scale was applied: (1) whether an appropriate ETC protocol was used (yes/no/not mentioned), (2) whether the intervention protocol was clearly described and applied consistently throughout the study (yes/no/not mentioned), (3) whether the sample size was adequate (yes/no), (4) whether homogeneity in the degree of respiratory failure among the study subjects was achieved (yes/no), and (5) whether subjects were randomized into groups (yes/no). For within-subject designs, a six-point scale was applied: (1) whether an appropriate ETC protocol was used (yes/no/not mentioned), (2) whether the intervention protocol was clearly described and applied consistently throughout the study (yes/no), (3) whether the sample size was adequate (yes/no), (4) whether homogeneity in the degree of respiratory failure among the study subjects was achieved (yes/no), (5) whether the order effect was adequately controlled (yes/no), and (6) whether the latent effect was adequately controlled (yes/no). In this study, the quality scores of the randomized clinical trials were 4-5 (possible range 0-5; 53% of the studies scored 4 and 47% scored 5). For the within-subject designs, the quality scores were likewise 4-5 (possible range 0-6; 57% scored 4 and 43% scored 5).

A sensitivity test is commonly conducted by examining whether the combined effect size differs significantly across the quality scores of the studies. We did not perform such a test because, as noted above, most studies showed high validity, with quality scores of 4-5. Instead, we evaluated the robustness of the pooled effect size by consecutively removing the trials with the largest effect sizes until the pooled effect size became non-significant. In general, an effect size is regarded as robust when the pooled estimate remains significant even as more and more trials are removed. In this meta-analysis, the pooled effect size became non-significant only after more than 10 studies had been removed. Another way to conduct a sensitivity test is to compare the overall effect sizes derived from a random-effects model and a fixed-effect model; the sketches below illustrate both procedures.
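The robustness check can be sketched as follows, assuming a fixed-effect (inverse-variance) pooled estimate; the effect sizes and variances are hypothetical, not those of the included trials.

    import numpy as np

    def pooled_z(d, v):
        # Fixed-effect (inverse-variance) pooled estimate and its z statistic.
        w = 1.0 / v
        pooled = np.sum(w * d) / np.sum(w)
        se = np.sqrt(1.0 / np.sum(w))
        return pooled, pooled / se

    # Hypothetical effect sizes and sampling variances
    d = np.array([0.90, 0.70, 0.65, 0.60, 0.50, 0.45, 0.40, 0.30])
    v = np.full_like(d, 0.05)

    order = np.argsort(-d)            # index of the largest effect first
    removed = len(d)                  # default: significance never lost
    for i in range(len(d)):
        _, z = pooled_z(d[order[i:]], v[order[i:]])
        if abs(z) < 1.96:             # pooled effect no longer significant
            removed = i
            break
    print(removed)  # trials removed before the pooled effect loses significance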
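Finally, the fixed-effect versus random-effects comparison can be operationalized, for example, with the DerSimonian-Laird estimator of between-study variance; this is one common choice rather than the only one, and the data are again hypothetical.

    import numpy as np

    def fixed_and_random(d, v):
        # Fixed-effect vs DerSimonian-Laird random-effects pooled estimates.
        w = 1.0 / v
        fixed = np.sum(w * d) / np.sum(w)
        q = np.sum(w * (d - fixed) ** 2)            # Cochran's Q
        c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
        tau2 = max(0.0, (q - (len(d) - 1)) / c)     # between-study variance
        w_star = 1.0 / (v + tau2)                   # random-effects weights
        return fixed, np.sum(w_star * d) / np.sum(w_star)

    d = np.array([0.80, 0.20, 0.60, 0.40, 0.70])    # hypothetical effect sizes
    v = np.array([0.05, 0.03, 0.08, 0.02, 0.06])
    fe, re_ = fixed_and_random(d, v)
    print(fe, re_)  # closely agreeing estimates suggest the conclusion is stable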
