Abstract
Real-time reverse transcription polymerase chain reaction (RT-PCR) has gained wide popularity as a sensitive and reliable technique for mRNA quantification. The development of new mathematical models for such quantifications has generally paid little attention to the aspect of error propagation. In this study we evaluate, both theoretically and experimentally, several recent models for relative real-time RT-PCR quantification of mRNA with respect to random error accumulation. We present error propagation expressions for the most common quantification models and discuss the influence of the various components on the total random error. Normalization against a calibrator sample to improve comparability between different runs is shown to increase the overall random error in our system. On the other hand, normalization against multiple reference genes, introduced to improve accuracy, does not increase error propagation compared to normalization against a single reference gene. Finally, we present evidence that sample-specific amplification efficiencies determined from individual amplification curves primarily increase the random error of real-time RT-PCR quantifications and should be avoided. Our data emphasize that the gain of accuracy associated with new quantification models should be validated against the corresponding loss of precision.
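To make the error-propagation argument concrete, the following sketch applies standard first-order (Gaussian) error propagation to the widely used delta-delta-Cq relative quantification model, which the abstract alludes to as one of the common models. This is an illustrative example only, not the paper's own expressions: the function name, the assumption of a fixed amplification efficiency, and the numerical Cq values and standard deviations are hypothetical.

```python
import math

def ddcq_ratio_with_error(cq_target_sample, sd_target_sample,
                          cq_ref_sample, sd_ref_sample,
                          cq_target_cal, sd_target_cal,
                          cq_ref_cal, sd_ref_cal,
                          efficiency=2.0):
    """Relative quantity via the delta-delta-Cq model, with first-order
    error propagation assuming independent Cq measurements.
    (Hypothetical helper for illustration; not from the paper.)"""
    # delta-Cq of the sample and of the calibrator
    dcq_sample = cq_target_sample - cq_ref_sample
    dcq_cal = cq_target_cal - cq_ref_cal
    ddcq = dcq_sample - dcq_cal

    # Variances of independent terms add in quadrature; note how the
    # calibrator contributes two extra variance terms, consistent with the
    # observation that calibrator normalization increases random error.
    sd_ddcq = math.sqrt(sd_target_sample**2 + sd_ref_sample**2 +
                        sd_target_cal**2 + sd_ref_cal**2)

    ratio = efficiency ** (-ddcq)
    # Propagate through the exponential: d(ratio)/d(ddcq) = -ln(E) * ratio
    sd_ratio = abs(math.log(efficiency)) * ratio * sd_ddcq
    return ratio, sd_ratio

# Example with hypothetical Cq means and SDs from replicate wells
ratio, sd = ddcq_ratio_with_error(24.1, 0.15, 20.3, 0.10,
                                  26.0, 0.12, 20.5, 0.08)
print(f"relative quantity = {ratio:.3f} +/- {sd:.3f}")
```

Under these assumptions, any additional measured quantity entering the model (e.g., a per-sample amplification efficiency estimated from an individual curve) adds its own variance term, which is the mechanism by which added model complexity can trade precision for accuracy.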