Abstract

Recent improvements in the speed and sensitivity of liquid chromatography-mass spectrometry systems have driven significant progress toward system-wide characterization of the proteomes of many species. These efforts create large proteomic datasets that provide insight into biological processes and identify diagnostic proteins whose abundance changes significantly under different experimental conditions. Yet these system-wide experiments are typically only the starting point for hypothesis-driven, follow-up experiments, in which many samples must be analyzed to elucidate the extent of the phenomenon or the utility of the diagnostic marker. Transitioning from a few discovery experiments to quantitative analyses of hundreds of samples requires significant resources, both to develop sensitive and specific methods and to analyze the samples in a high-throughput manner. To aid these efforts, we developed a workflow that uses data acquired from discovery proteomic experiments, retention time prediction, and standard-flow chromatography to rapidly develop targeted proteomic assays. We demonstrated this workflow by developing MRM assays to quantify proteins of multiple metabolic pathways from multiple microbes under different experimental conditions. With this workflow, one can also target peptides in scheduled/dynamic acquisition methods built from a shotgun proteomic dataset downloaded from an online repository, validate with appropriate control samples or standard peptides, and begin analyzing hundreds of samples in only a few minutes.
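
As a rough illustration of the scheduled-method step described above, the Python sketch below assembles a scheduled MRM transition list from peptides identified in a discovery run, using predicted retention times to set acquisition windows. The file name, retention-time window, and peptide entries are hypothetical placeholders, not parameters or results from this study.

    # Minimal sketch: build a scheduled MRM transition list from discovery peptides.
    # All input values and file names are illustrative placeholders.
    import csv

    RT_WINDOW_MIN = 2.0  # assumed +/- scheduling window (minutes) around the predicted RT

    # Hypothetical discovery results: peptide, precursor m/z, fragment m/z, predicted RT (min)
    discovery_peptides = [
        ("LVNELTEFAK", 582.32, 951.48, 14.8),
        ("GYSIFSYATK", 568.79, 829.44, 21.3),
    ]

    with open("scheduled_mrm_transitions.csv", "w", newline="") as handle:
        writer = csv.writer(handle)
        writer.writerow(["peptide", "precursor_mz", "fragment_mz", "rt_start_min", "rt_end_min"])
        for peptide, precursor_mz, fragment_mz, predicted_rt in discovery_peptides:
            # Monitor each transition only around its predicted elution time, so
            # many peptides can be scheduled into a single run.
            writer.writerow([
                peptide,
                precursor_mz,
                fragment_mz,
                round(predicted_rt - RT_WINDOW_MIN, 2),
                round(predicted_rt + RT_WINDOW_MIN, 2),
            ])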

Highlights

  • To reduce the time needed to develop targeted proteomic methods, we established a workflow that utilizes peptide information acquired from shotgun proteomic experiments

  • Spectral libraries are an integral part of data-independent acquisition (DIA) and data-dependent acquisition (DDA) proteomic workflows

Introduction

Reports on the poor reproducibility of scientific results [1] highlight the need for better experimental design, greater effort devoted to validation of novel discoveries, improved hypothesis testing, and stricter publishing requirements. These efforts are especially important for the systems-wide studies, conducted in laboratories across the world, that are commonplace in proteomic research [2,3,4]. Quantifying proteins from microbes contributes to successful comparative analysis of environmental isolates and of engineered microbes for production of biofuels and bioproducts. These concerns are driving improvements in analytical protocols, data quality metrics, and reporting [5]. To implement appropriately powered studies that can detect statistically significant effects in biological systems with low signal-to-noise ratios, many samples must be analyzed, which is a significant challenge even for the most well-resourced proteomic groups [6].
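
To make the sample-size pressure concrete, the sketch below uses a standard power calculation (statsmodels) for a two-sample t test; the effect sizes and error rates are assumed values for illustration, not estimates from this work, but they show how quickly the required number of samples grows as the effect size shrinks relative to the noise.

    # Illustration with assumed parameters: per-group sample size for a
    # two-sample t test at 80% power and alpha = 0.05.
    from statsmodels.stats.power import TTestIndPower

    power_calc = TTestIndPower()
    for effect_size in (1.0, 0.5, 0.25):  # Cohen's d; smaller d ~ lower signal-to-noise
        n_per_group = power_calc.solve_power(effect_size=effect_size, alpha=0.05, power=0.80)
        print(f"d = {effect_size:>4}: ~{n_per_group:.0f} samples per group")

With these assumptions, detecting a large effect (d = 1.0) requires roughly 17 samples per group, while a subtle effect (d = 0.25) requires roughly 250, which quickly outpaces the scale of typical discovery experiments.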
