Abstract

Early marker-based metagenomic studies were performed without properly accounting for the effects of noise (sequencing errors, PCR single-base errors, and PCR chimeras). Denoising algorithms have been developed, but they were validated using data derived from mock communities, in which the true sequences were known. Since the algorithms were designed to be used in real community studies, it is important to evaluate the results in such cases. With this goal in mind, we processed a real 16S rRNA metagenomic dataset through five denoising pipelines. By reconstituting the sequence reads at each stage of the pipelines, we determined how the reads were being altered. In one denoising pipeline, AmpliconNoise, we found that the algorithm that was designed to remove pyrosequencing errors changed the reads in a manner inconsistent with the known spectrum of these errors, until one of the parameters was increased substantially from its default value. Additionally, because the longest read was picked as the representative for each cluster, sequences were added to the 3′ ends of shorter reads that were often dissimilar from what had been removed by the truncations of the previous filtering step. In QIIME, the denoising algorithm caused a much larger number of changes to the reads unless the parameters were changed from their defaults. The denoising pipeline in mothur avoided some of these negative side-effects because of its strict default filtering criteria, but these criteria also greatly limited the sequence information produced at the end of the pipeline. We recommend that those using these denoising pipelines be cognizant of these issues and examine how their reads are being transformed by the denoising process as a component of their analysis.

Highlights

  • The emerging field of metagenomics is concerned with determining the numbers and types of organisms in a particular environment

  • It becomes necessary to cluster the reads to a certain percent identity to determine how many types of bacteria, or operational taxonomic units (OTUs), are present in a given sample

  • We have denoised a real 16S rRNA metagenomic dataset and analyzed the changes produced by each step of five denoising pipelines


Introduction

The emerging field of metagenomics is concerned with determining the numbers and types of organisms in a particular environment. A single PCR amplification of DNA extracted from a given sample can yield more than a million sequence reads in a pyrosequencing run. While many of these reads may be identified as belonging to well-studied species [2,3], some may not match any sequences in the database, because the bacteria have not been cultured or otherwise characterized. Those reads may represent new species, or they may be rare variants of a known species that differ from the database sequence by only a few nucleotides. It therefore becomes necessary to cluster the reads at a chosen identity threshold (generally 97% identity, i.e., 3% dissimilarity, for species-level clustering) to determine how many types of bacteria, or operational taxonomic units (OTUs), are present in a given sample.
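The clustering step described above can be illustrated with a toy greedy, centroid-based scheme. This is only a minimal sketch, not the algorithm used by any of the pipelines evaluated here: real tools use proper pairwise alignment and careful centroid selection, whereas this sketch uses `difflib.SequenceMatcher` as a crude stand-in for percent identity, and the function names are hypothetical.

```python
from difflib import SequenceMatcher

def identity(a, b):
    # Crude approximation of pairwise percent identity;
    # real pipelines compute this from a pairwise alignment.
    return SequenceMatcher(None, a, b).ratio()

def greedy_otu_cluster(reads, threshold=0.97):
    """Assign each read to the first existing centroid it matches
    at >= threshold identity; otherwise the read seeds a new OTU.
    A deliberately simplified, hypothetical clustering scheme."""
    centroids = []  # one representative sequence per OTU
    otus = []       # member reads of each OTU
    for read in reads:
        for i, centroid in enumerate(centroids):
            if identity(read, centroid) >= threshold:
                otus[i].append(read)
                break
        else:
            centroids.append(read)
            otus.append([read])
    return otus

reads = [
    "ACGTACGTACGTACGTACGT",
    "ACGTACGTACGTACGTACGA",  # one mismatch in 20 bp (~95% identity)
    "TTTTGGGGCCCCAAAATTTT",  # unrelated sequence
]
clusters = greedy_otu_cluster(reads, threshold=0.90)
print(len(clusters))  # the two similar reads cluster; the third is its own OTU
```

Note that the result depends on read order (the first read of a cluster becomes its centroid), which is one reason real tools sort reads, e.g. by abundance or length, before clustering.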

