Abstract

Parameter identifiability methods assess whether the parameters of a model are uniquely determined by the observations. While the success of a model fit can provide some information on this, it can be valuable to determine identifiability before any fit has been attempted, or to separate identifiability from other issues. Two concepts that lend themselves well to identifiability analysis and have been underutilized are the sensitivity matrix (SM) and the Fisher information matrix (FIM). This paper presents two newly developed methods, one based on the SM and one based on the FIM. Both methods can assess local identifiability for a wide range of models, can be used with limited effort, and are freely available. The methods require the proposed model in the form of a set of differential equations, the parameter values, and the study design as input. They can be used a priori, as they do not need observed values or a successful model fit. Traditional methods provide a single categorical (yes/no) answer to the question of identifiability. In many cases this is not very informative, since identifiability depends on the study design (e.g., dose levels or observation times) and the parameter values. Indicators on a continuous scale characterizing the degree of identifiability would provide more detailed and relevant information, for example, to guide model development. Our two methods provide both categorical and continuous indicators. Both methods indicate which parameter combinations are difficult to identify by calculating the directions in parameter space that are least identifiable. The methods were validated with an example problem.
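
To illustrate the kind of analysis the abstract describes, the following is a minimal sketch (not the authors' implementation) of a local, FIM-based identifiability check for an ODE model. It assumes a one-compartment oral-absorption model, illustrative parameter values, a hypothetical sampling design, and an additive residual-error variance; sensitivities are approximated by finite differences and the least-identifiable direction is taken from the smallest eigenvalue of the FIM.

```python
# Sketch of FIM-based local identifiability analysis; model, parameter
# values, sampling times, and sigma2 are illustrative assumptions only.
import numpy as np
from scipy.integrate import solve_ivp

def simulate(params, t_obs, dose=100.0):
    """Predicted concentrations for a one-compartment model with first-order absorption."""
    ka, ke, V = params
    def rhs(t, y):
        a_gut, a_central = y
        return [-ka * a_gut, ka * a_gut - ke * a_central]
    sol = solve_ivp(rhs, (0.0, t_obs[-1]), [dose, 0.0],
                    t_eval=t_obs, rtol=1e-8, atol=1e-10)
    return sol.y[1] / V  # central-compartment concentration

def sensitivity_matrix(params, t_obs, rel_step=1e-5):
    """Forward-difference sensitivities d(prediction)/d(parameter), one column per parameter."""
    base = simulate(params, t_obs)
    S = np.empty((len(t_obs), len(params)))
    for j, p in enumerate(params):
        perturbed = np.array(params, dtype=float)
        h = rel_step * max(abs(p), 1e-12)
        perturbed[j] += h
        S[:, j] = (simulate(perturbed, t_obs) - base) / h
    return S

# Hypothetical design and parameter values (ka, ke, V) -- placeholders only.
t_obs = np.array([0.25, 0.5, 1.0, 2.0, 4.0, 8.0, 12.0, 24.0])
theta = np.array([1.5, 0.2, 10.0])
sigma2 = 0.05 ** 2  # assumed additive residual-error variance

S = sensitivity_matrix(theta, t_obs)
fim = S.T @ S / sigma2                 # FIM under additive Gaussian error
eigvals, eigvecs = np.linalg.eigh(fim) # eigenvalues in ascending order

print("FIM condition number:", eigvals[-1] / eigvals[0])
print("Least identifiable direction in parameter space:", eigvecs[:, 0])
```

In this kind of analysis, a very large condition number (or a near-zero smallest eigenvalue) signals a poorly identifiable parameter combination under the given design, and the corresponding eigenvector indicates which combination of parameters is affected.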
