Personal blog of Matthias M. Fischer


Sensitivity Analysis of Mathematical Models using Partial Derivatives

Posted: 23rd January 2022

Preface

In this post, I aim to provide a short, simple and easily understandable introduction to the topic of sensitivity analysis of mathematical models. First, I will explain the motivation for doing sensitivity analysis from a mathematical modelling point of view. Next, I will explain how to perform such an analysis. Multiple different approaches exist; in this post I will focus only on those which are based on calculating partial derivatives. However, at the end of this post, I will also mention the basic ideas behind some other approaches as an outlook on (potentially) another post or series of posts in the future.

Motivation

Assume we have some mathematical model (for instance a set of differential equations, or an agent-based simulation). The model has a set of constant parameters \(\mathbf{p} = (p_1, p_2, \dots, p_n)\), and produces some output \(y\), which we care about. The output might be e.g. the steady-state size of a population we want to model, or an eigenvalue of the system which describes population growth, or something entirely different.

We now want to systematically assess how changes in the parameters affect our output variable. There are multiple reasons we might care about these relationships: for example, we may want to know which parameters have to be measured or estimated particularly carefully, and which parameters would be the most effective levers for deliberately changing the system's behaviour.

Absolute Sensitivities

The first idea is very simple: We consider the partial derivative of the output \(y\) with respect to an individual parameter \(p_i\), while keeping all other parameters constant. Thus, we calculate

\(d^{abs}_i = \dfrac{\partial y(\mathbf{p})}{\partial p_i}\)

For sufficiently complex models, where an analytical treatment becomes infeasible, we may reasonably approximate this by using a finite difference with a sufficiently small step size \(\varepsilon > 0\). For the case of a single parameter \(p_1\), for instance, we could write:

\(d^{abs}_1 \approx \dfrac{y(p_1+\varepsilon) - y(p_1)}{\varepsilon}\)
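As a minimal sketch of how this could look in code (the function and variable names here are my own, not from any particular library; `model` stands for any function mapping a parameter vector to a scalar output):

    import numpy as np

    def absolute_sensitivities(model, p, eps=1e-6):
        """Approximate d_abs_i = dy/dp_i by a forward difference for each parameter."""
        p = np.asarray(p, dtype=float)
        y0 = model(p)
        sens = np.empty_like(p)
        for i in range(p.size):
            p_shifted = p.copy()
            p_shifted[i] += eps                      # perturb only the i-th parameter
            sens[i] = (model(p_shifted) - y0) / eps  # forward difference
        return sens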

Thus, \(d^{abs}_1\) will tell us the absolute increase in our output variable if we change \(p_1\) a tiny bit. For the case of multiple parameters, we can then compare these absolute sensitivities in order to judge which parameter will, when slightly altered, change the output variable the most.

Let's illustrate this with an example: We take a simple, well-known model of protein biosynthesis, which describes the production of a protein at a constant rate \(a\) (with a unit of e.g. \(\mathrm{mol}/h\)) and its degradation at a constant rate \(b\) (with a unit of e.g. \(h^{-1}\)). Thus, we get the following differential equation for the amount of protein \(y(t)\) over time:

\(\dfrac{\mathrm{d} y(t)}{\mathrm{d}t} = a - b\,y(t)\)

Setting the right-hand side to zero, we may easily see that the only equilibrium of this equation is given by

\(\bar y = a/b \quad [\mathrm{mol}]\)

Let us now compute the absolute sensitivities \(d^{abs}_a, d^{abs}_b\) of this equilibrium with respect to our two parameters. We get

\(d^{abs}_a = \dfrac{\partial \bar y(a,b)}{\partial a} = 1/b\)

\(d^{abs}_b = \dfrac{\partial \bar y(a,b)}{\partial b} = -a/b^2\)

If we now plug in some parameter values, we can directly calculate these absolute sensitivities. For instance, let \(a=1, b=0.1\). Then, our sensitivities become

\(d^{abs}_a = 10\)

\(d^{abs}_b = -100\)
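We can quickly verify these values with the forward-difference approximation from above. A small, self-contained sketch (step size and baseline values chosen as in the example):

    # Steady state of the protein model as a function of its parameters.
    def steady_state(a, b):
        return a / b

    a, b, eps = 1.0, 0.1, 1e-6
    d_abs_a = (steady_state(a + eps, b) - steady_state(a, b)) / eps
    d_abs_b = (steady_state(a, b + eps) - steady_state(a, b)) / eps
    print(d_abs_a, d_abs_b)  # approximately 10 and -100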

This is a nice starting point; however, it suffers from a drawback:

Generally, different parameters can be expected to have different baseline values and, potentially, different units of measurement. Thus, a tiny change by \(\varepsilon\) in one parameter might not readily be comparable to a tiny change by the same amount of \(\varepsilon\) in another parameter. For instance, in our example a change of \(0.1\) would correspond to a relative change of \(10\%\) in \(a\), but to a relative change of \(100\%\) in \(b\). Hence, we want to assess parameter sensitivities while taking into account such differences in baseline.

Relative Sensitivities

In order to account for the differences mentioned above, let us reformulate the problem. Instead of comparing absolute changes in the output variable following an absolute change in one of the system parameters, we want to compare the relative (or fractional) change in the output for a relative change in a parameter value. Hence, we calculate

\(d^{rel}_i = \dfrac{\partial y(\mathbf{p})}{y(\mathbf{p})} \bigg/ \dfrac{\partial p_i}{p_i},\)

which may be rewritten as

\(d^{rel}_i = d^{abs}_i \dfrac{p_i}{y(\mathbf{p})}.\)

Let's apply this to our previous example. We get:

\(d^{rel}_a = (1/b) \cdot a / (a/b) = 1\)

\(d^{rel}_b = (-a/b^2) \cdot b / (a/b) = -1\)

What does this mean? A change by a small fraction \(x\) in parameter \(a\) (or \(b\)) will, to first order, lead to a change by the same fraction \(x\) (or \(-x\)) in the response variable as well, completely independently of the exact parametrisation we choose for \(a\) and \(b\). This describes the behaviour of the system far more intuitively than our previous approach!
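Numerically, the relative sensitivities are obtained by simply rescaling the absolute ones by \(p_i / y(\mathbf{p})\). A small sketch based on the example model:

    def steady_state(a, b):
        return a / b

    a, b, eps = 1.0, 0.1, 1e-6
    y0 = steady_state(a, b)
    d_abs_a = (steady_state(a + eps, b) - y0) / eps
    d_abs_b = (steady_state(a, b + eps) - y0) / eps
    print(d_abs_a * a / y0)  # approximately  1
    print(d_abs_b * b / y0)  # approximately -1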

Conclusions and Outlook

Absolute and relative sensitivities of a model prediction with respect to its input parameters can easily be assessed by calculating partial derivatives either analytically or numerically. They fulfil important diagnostic and predictive purposes.

As already mentioned in the preface, other methods for computing parameter sensitivities exist which do not rely on the (analytical or numerical) calculation of partial derivatives. Most notably, there is the inspection of scatter plots, where the response variable is plotted against a single input parameter. The other inputs can be fixed at their baseline values, or they may also be randomly or systematically varied within meaningful intervals. Such a visual inspection is always good practice, as it easily provides a "feeling" for one's system and its behaviour.
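For the example model, such scatter plots could be produced as follows (a sketch; the \(\pm 50\%\) sampling intervals around the baseline are an arbitrary choice of mine):

    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(0)
    n = 500

    # Vary both parameters randomly within +/- 50 % of their baselines.
    a = rng.uniform(0.5, 1.5, n)
    b = rng.uniform(0.05, 0.15, n)
    y = a / b  # steady state of the protein model

    fig, axes = plt.subplots(1, 2, figsize=(8, 3), sharey=True)
    axes[0].scatter(a, y, s=5)
    axes[0].set(xlabel="a", ylabel="steady state")
    axes[1].scatter(b, y, s=5)
    axes[1].set(xlabel="b")
    plt.tight_layout()
    plt.show()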

A completely different, "statistics-inspired" approach lies in simply fitting a multiple linear regression of the output variable of interest with all the inputs (and potentially their interactions) as predictors. Especially in the case of complex computational models that do not have a closed mathematical form (such as agent-based models), this is a very useful approach.
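A sketch of this idea for the example model (sampling ranges and sample size are arbitrary choices of mine; for a real simulation, the line computing y would be replaced by a call to the model):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 2000

    # Random parameter samples within +/- 20 % of the baselines a=1, b=0.1.
    a = rng.uniform(0.8, 1.2, n)
    b = rng.uniform(0.08, 0.12, n)
    y = a / b  # model output

    # Standardise inputs and output so the coefficients become comparable.
    X = np.column_stack([a, b])
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)
    ys = (y - y.mean()) / y.std()

    # Ordinary least squares fit with an intercept term.
    coef, *_ = np.linalg.lstsq(np.column_stack([np.ones(n), Xs]), ys, rcond=None)
    print(coef[1:])  # standardised coefficients for a and b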

Another general idea is to consider the variance in the response variable, instead of only looking at absolute or relative changes. One may then try to decompose the output variance into those parts which can be attributed to the individual input variables, as well as their combinations. The sensitivity of the output to an input variable is then quantified by the amount of output variance caused by that input. Especially in the case of nonlinear models, this approach can be expected to yield more representative results than only quantifying absolute or relative changes in a small neighbourhood around the baseline parametrisation.
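A very crude sketch of this idea (not a proper variance-based Sobol analysis; the quantile-binning estimator and the sampling ranges are my own simplifications) could look as follows:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    a = rng.uniform(0.8, 1.2, n)
    b = rng.uniform(0.08, 0.12, n)
    y = a / b  # steady state of the protein model

    # Estimate Var(E[y | x]) / Var(y), i.e. the share of output variance
    # explained by a single input, by binning that input into quantiles.
    def first_order_share(x, y, bins=50):
        edges = np.quantile(x, np.linspace(0, 1, bins + 1))
        idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, bins - 1)
        cond_means = np.array([y[idx == k].mean() for k in range(bins)])
        return cond_means.var() / y.var()

    print(first_order_share(a, y))  # variance share attributable to a
    print(first_order_share(b, y))  # variance share attributable to b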

Except for the scatter plot "method", I myself have never worked with any of those alternative methods (yet), but they seem interesting enough to study in some more detail. Thus, you may expect to see an overview and summary of some of these methods in another blog post in the future.