Abstract

Given the importance of protein aggregation in amyloid diseases and in the manufacture of protein pharmaceuticals, there has been increased interest in measuring and modeling the kinetics of protein aggregation. Several groups have analyzed aggregation data quantitatively, typically measuring aggregation kinetics by following the loss of protein monomer over time and invoking a nucleated growth mechanism. Such analysis has led to mechanistic conclusions about the size and nature of the nucleus, the aggregation pathway, and/or the physicochemical properties of aggregation-prone proteins. We have examined some of the difficulties that arise when extracting mechanistic meaning from monomer-loss kinetic data. Using literature data on the aggregation of polyglutamine, a mutant β-clam protein, and protein L, we determined parameter values for 18 different kinetic models. We developed a statistical model discrimination method to analyze protein aggregation data in light of competing mechanisms; a key feature of the method is that it penalizes overparameterization. We show that, for typical monomer-loss kinetic data, multiple models provide equivalent fits, making mechanistic determination impossible. We also define the type and quality of experimental data needed to make more definitive conclusions about the mechanism of aggregation. Specifically, we demonstrate how direct measurement of fibril size provides robust discrimination.
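The abstract's key methodological point — that a model-discrimination criterion must penalize overparameterization, and that near-equal scores mean the data cannot distinguish mechanisms — can be sketched with a generic information-criterion comparison. This is an illustrative example in the spirit of AIC, not the authors' actual statistical method; the synthetic data, the two candidate models, and the grid search are all assumptions made for the sketch.

```python
import numpy as np

def aic(rss, n, k):
    # AIC for least-squares fits: n * ln(RSS/n) + 2k
    # The 2k term penalizes models with more free parameters.
    return n * np.log(rss / n) + 2 * k

# Synthetic "monomer-loss" trace: exponential decay plus noise
# (a stand-in for real aggregation kinetics, not published data).
rng = np.random.default_rng(0)
t = np.linspace(0.1, 10, 50)
m = np.exp(-0.3 * t) + rng.normal(0, 0.01, t.size)

rates = np.linspace(0.01, 1.0, 200)

# Model A: single-exponential decay, 1 free parameter (crude grid
# search over the rate keeps the sketch dependency-free).
rss_a = min(np.sum((m - np.exp(-k * t)) ** 2) for k in rates)

# Model B: stretched exponential exp(-(k t)^beta), 2 free parameters;
# it nests Model A at beta = 1, so it always fits at least as well.
rss_b = min(
    np.sum((m - np.exp(-(k * t) ** b)) ** 2)
    for k in rates
    for b in (0.8, 0.9, 1.0, 1.1, 1.2)
)

aic_a = aic(rss_a, t.size, 1)
aic_b = aic(rss_b, t.size, 2)
# The lower AIC is preferred; when the two values are nearly equal,
# the data cannot discriminate between the competing mechanisms --
# the situation the abstract reports for typical monomer-loss data.
```

Because Model B nests Model A, its raw residual sum of squares is never worse; only the parameter penalty lets the simpler model win, which is the point of penalizing overparameterization.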
