Abstract

Background: The acceptance of microarray technology in regulatory decision-making is being challenged by the existence of various platforms and data analysis methods. A recent report (E. Marshall, Science, 306, 630–631, 2004), by extensively citing the study of Tan et al. (Nucleic Acids Res., 31, 5676–5684, 2003), portrays a disturbingly negative picture of the cross-platform comparability, and, hence, the reliability of microarray technology.

Results: We reanalyzed Tan's dataset and found that the intra-platform consistency was low, indicating a problem in the experimental procedures from which the dataset was generated. Furthermore, by applying three gene selection methods (p-value ranking, fold-change ranking, and Significance Analysis of Microarrays (SAM)) to the same dataset, we found that p-value ranking (the method emphasized by Tan et al.) results in much lower cross-platform concordance than fold-change ranking or SAM. Therefore, the low cross-platform concordance reported in Tan's study appears to be mainly due to a combination of low intra-platform consistency and a poor choice of data analysis procedures, rather than to inherent technical differences among platforms, as suggested by Tan et al. and Marshall.

Conclusion: Our results illustrate the importance of establishing calibrated RNA samples and reference datasets to objectively assess the performance of different microarray platforms and the proficiency of individual laboratories, as well as the merits of various data analysis procedures. Thus, we are coordinating the MAQC project, a community-wide effort for microarray quality control.
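
To make the gene selection comparison concrete, below is a minimal Python sketch on simulated data. It is not the authors' analysis code and does not use Tan's dataset; the expression matrices, replicate counts, and list size (top_n) are assumptions chosen only to illustrate how top-ranked gene lists from p-value ranking and fold-change ranking can be compared for cross-platform concordance.

```python
# Sketch: cross-platform concordance of two gene selection methods
# (p-value ranking vs. fold-change ranking) on purely synthetic data.
# This is an illustration, not the analysis code used in the study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_genes, n_reps, top_n = 5000, 3, 100   # assumed sizes, for illustration only


def simulate_platform(effect):
    """Simulate log2 expression for control and treated groups on one platform."""
    control = rng.normal(0.0, 0.25, size=(n_genes, n_reps))
    treated = rng.normal(0.0, 0.25, size=(n_genes, n_reps)) + effect[:, None]
    return control, treated


def top_genes(control, treated, method):
    """Return the indices of the top_n genes ranked by the chosen method."""
    if method == "pvalue":
        _, p = stats.ttest_ind(treated, control, axis=1)
        return set(np.argsort(p)[:top_n])          # smallest p-values first
    if method == "fold_change":
        fc = np.abs(treated.mean(axis=1) - control.mean(axis=1))
        return set(np.argsort(fc)[::-1][:top_n])   # largest |log2 fold change| first
    raise ValueError(f"unknown method: {method}")


# A shared "true" effect that both simulated platforms measure with noise.
effect = np.where(rng.random(n_genes) < 0.05, rng.normal(1.5, 0.3, n_genes), 0.0)
platform_a = simulate_platform(effect)
platform_b = simulate_platform(effect)

for method in ("pvalue", "fold_change"):
    overlap = top_genes(*platform_a, method) & top_genes(*platform_b, method)
    print(f"{method}: {100 * len(overlap) / top_n:.1f}% overlap of top {top_n} genes")
```

With only three replicates per group, the t-test p-values are dominated by unstable variance estimates, so the p-value-ranked lists tend to overlap less between the two simulated platforms than the fold-change-ranked lists; this mirrors the pattern reported in the reanalysis above.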

Highlights

  • The acceptance of microarray technology in regulatory decision-making is being challenged by the existence of various platforms and data analysis methods

  • We describe an alternative analysis of Tan's dataset intended to address several common issues in cross-platform comparability studies, such as intra-platform consistency and the impact of different gene selection and data filtering procedures

  • RNA preparations are run in triplicate on each platform, resulting in three pairs of technical replicates (see the sketch after this list)
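
As a minimal sketch of how the intra-platform consistency mentioned above might be summarized, the snippet below computes the Pearson correlation of per-gene log ratios between each of the three replicate pairs. The data are synthetic and the metric is an assumption for illustration; it is not necessarily the exact consistency measure used in the reanalysis.

```python
# Sketch: intra-platform consistency from three technical replicates,
# summarized as pairwise Pearson correlations of per-gene log2 ratios.
# Synthetic data only; an illustrative metric, not the study's exact procedure.
import itertools
import numpy as np

rng = np.random.default_rng(1)
n_genes = 5000

# Shared biological signal plus independent technical noise per replicate.
signal = rng.normal(0.0, 0.5, n_genes)
replicates = [signal + rng.normal(0.0, 0.4, n_genes) for _ in range(3)]

for i, j in itertools.combinations(range(3), 2):
    r = np.corrcoef(replicates[i], replicates[j])[0, 1]
    print(f"replicate pair ({i + 1}, {j + 1}): Pearson r = {r:.2f}")
```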

Summary

Introduction

The acceptance of microarray technology in regulatory decision-making is being challenged by the existence of various platforms and data analysis methods. A recent report (E. Marshall, Science, 306, 630–631, 2004), by extensively citing the study of Tan et al. (Nucleic Acids Res., 31, 5676–5684, 2003), portrays a disturbingly negative picture of the cross-platform comparability, and, hence, the reliability of microarray technology. Standardization is much needed before microarrays – a core technology in pharmacogenomics and toxicogenomics – can be reliably applied in clinical practice and regulatory decision-making [1,2,3,4]. Because the U.S. FDA is actively assessing the applicability of microarrays as a tool in pharmacogenomic and toxicogenomic studies, we are interested in information regarding the reliability of microarray results and the cross-platform comparability of microarray technology. Several studies that address cross-platform comparability have reported mixed results [6,7,8,9,10,11,12,13,14,15].

