Bounding sums of random variables, part 1
For the last course, MAT8886, of this (long) winter session, on copulas (and extremes), we will discuss risk aggregation. The course will mainly deal with the problem of bounding the distribution (or some risk measure, say the Value-at-Risk) of the sum of two random variables with given marginal distributions. For instance, suppose we have two Gaussian risks: what could be the worst-case scenario for the 99% quantile of the sum? Note that I mention implications in terms of risk management, but of course, those questions are also extremely important in terms of statistical inference; see e.g. Fan & Park (2006).
This problem is sometimes related to a question asked by Kolmogorov almost one hundred years ago, as mentioned in Makarov (1981). One year later, Rüschendorf (1982) also suggested a proof of the bounds calculation. Here, we focus on dimension 2. As usual, it is the simple case. But as mentioned recently in Kreinovich & Ferson (2005), in dimension 3 (or higher), “computing the best-possible bounds for arbitrary n is an NP-hard (computationally intractable) problem”. So let us focus on the case where we sum (only) two random variables (for those interested in higher dimensions, Puccetti & Rüschendorf (2012) provided interesting results for a dual version of those optimal bounds).
Let $\mathcal{F}$ denote the set of univariate continuous distribution functions, left-continuous, on $\mathbb{R}$, and $\mathcal{F}_2$ the set of distribution functions on $\mathbb{R}^2$. Thus, $F\in\mathcal{F}$ if $F:\mathbb{R}\rightarrow[0,1]$, and $H\in\mathcal{F}_2$ if $H:\mathbb{R}^2\rightarrow[0,1]$. Consider now two distributions $F,G\in\mathcal{F}$. In a very general setting, it is possible to consider operators on $\mathcal{F}\times\mathcal{F}$. Thus, let $T:[0,1]\times[0,1]\rightarrow[0,1]$ denote an operator, increasing in each component, such that $T(1,1)=1$. And consider some function $L:\mathbb{R}\times\mathbb{R}\rightarrow\mathbb{R}$, assumed to be also increasing in each component (and continuous). For such functions $T$ and $L$, define the following (general) operator $\tau_{T,L}(F,G)$, as
$$\tau_{T,L}(F,G)(x)=\sup_{L(u,v)=x}\{T(F(u),G(v))\}$$
One interesting case can be obtained when $T$ is a copula, $C$. In that case,
$$\tau_{C,L}(F,G)(x)=\sup_{L(u,v)=x}\{C(F(u),G(v))\}$$
and further, since $C$ and $L$ are increasing in each component, it is possible to write
$$\tau_{C,L}(F,G)\in\mathcal{F}$$
It is also possible to consider other (general) operators, e.g. one based on the integral of the copula over the lower level set of $L$,
$$\sigma_{C,L}(F,G)(x)=\iint_{\{L(u,v)\leq x\}}dC(F(u),G(v))$$
or one based on the infimum,
$$\rho_{C,L}(F,G)(x)=\inf_{L(u,v)=x}\{C^\star(F(u),G(v))\}$$
where $C^\star$ is the dual of the copula $C$, i.e. $C^\star(u,v)=u+v-C(u,v)$. Note that those operators can be used to define distribution functions, i.e.
$$\sigma_{C,L}(F,G)\in\mathcal{F}$$
and similarly
$$\rho_{C,L}(F,G)\in\mathcal{F}$$
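To make those operators concrete, here is a small standalone check in Python (an illustration, not from the original post): with the independence copula $C(u,v)=uv$, $L(u,v)=u+v$ and two Uniform(0,1) margins, the operator $\tau$ lies below the actual distribution of the sum, which in turn lies below $\rho$ built from the dual copula.

```python
# Illustration (assumed setup): tau <= sigma <= rho at a single point x,
# for C(u,v) = u*v, L(u,v) = u+v, and Uniform(0,1) margins.
F = G = lambda t: min(max(t, 0.0), 1.0)   # CDF of Uniform(0,1)
C = lambda u, v: u * v                     # independence copula
C_dual = lambda u, v: u + v - C(u, v)      # dual of the copula

x, n = 1.0, 2000
grid = [x * j / n for j in range(n + 1)]   # u grid, with v = x - u
tau = max(C(F(u), G(x - u)) for u in grid)        # sup over L(u,v) = x
rho = min(C_dual(F(u), G(x - u)) for u in grid)   # inf over L(u,v) = x
sigma = x**2 / 2   # exact CDF of the independent sum at x = 1 (triangular law)

print(round(tau, 3), sigma, round(rho, 3))  # prints: 0.25 0.5 0.75
```

As expected, $\tau\leq\sigma\leq\rho$ at that point, with a visible gap on each side.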
All that seems too theoretical? An application can be the case of the sum, i.e. $L(u,v)=u+v$. In that case, $\sigma_{C,+}(F,G)$ is the distribution of the sum of two random variables with marginal distributions $F$ and $G$, and copula $C$. Thus, with the independence copula $C^\perp$, $\sigma_{C^\perp,+}(F,G)$ is simply the convolution of the two distributions,
$$\sigma_{C^\perp,+}(F,G)(x)=\int_{\mathbb{R}}F(x-y)dG(y)$$
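As a quick numerical sanity check of that convolution formula (an illustration, not from the original post): for two independent Uniform(0,1) variables, the convolution gives the triangular distribution, $H(x)=x^2/2$ on $[0,1]$, which we can recover by numerical integration.

```python
# Hypothetical illustration: sigma(x) = ∫ F(x - y) dG(y) for independent
# Uniform(0,1) margins, computed with a midpoint Riemann sum over y in [0,1].
F = lambda x: min(max(x, 0.0), 1.0)  # CDF of Uniform(0,1)

def convolution_cdf(x, n=20_000):
    # dG(y) = dy on [0,1], so sigma(x) = ∫_0^1 F(x - y) dy
    h = 1.0 / n
    return sum(F(x - (j + 0.5) * h) for j in range(n)) * h

print(round(convolution_cdf(0.5), 4))  # ≈ 0.125 (= 0.5**2 / 2)
print(round(convolution_cdf(1.0), 4))  # ≈ 0.5
```

The same midpoint-rule idea works for any pair of margins, as long as $G$ has a density we can discretize.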
The important result (that can be found in Chapter 7 of Schweizer and Sklar (1983)) is that, given an operator $L$, then, for any copula $C$, one can find a lower bound for $\sigma_{C,L}(F,G)$,
$$\tau_{C^-,L}(F,G)\leq\tau_{C,L}(F,G)\leq\sigma_{C,L}(F,G)$$
as well as an upper bound,
$$\sigma_{C,L}(F,G)\leq\rho_{C,L}(F,G)\leq\rho_{C^-,L}(F,G)$$
where $C^-(u,v)=\max\{u+v-1,0\}$ denotes the lower Fréchet–Hoeffding bound.
Those inequalities come from the fact that, for every copula $C$, $C^-(u,v)=\max\{u+v-1,0\}\leq C(u,v)$, where $C^-$ is itself a copula. Since this function is no longer a copula in higher dimension, one can easily imagine that getting those bounds in higher dimension will be much more complicated…
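For completeness, the two-dimensional inequality can be derived in one line (a standard argument, recalled here as a reminder):

```latex
% For (U,V) uniform on [0,1] with joint distribution function C (a copula):
\begin{align*}
C(u,v) &= \mathbb{P}(U\le u,\,V\le v)\\
       &\ge \mathbb{P}(U\le u)+\mathbb{P}(V\le v)-1 = u+v-1,
\end{align*}
% and since C(u,v) >= 0, it follows that C(u,v) >= \max\{u+v-1,\,0\}.
```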
In the case of the sum of two random variables, with marginal distributions $F$ and $G$, bounds for the distribution of the sum, $H(x)=\mathbb{P}(X+Y\leq x)$, where $X\sim F$ and $Y\sim G$, can be written
$$H^-(x)=\sup_{u+v=x}\{\max\{F(u)+G(v)-1,0\}\}$$
for the lower bound, and
$$H^+(x)=\inf_{u+v=x}\{\min\{F(u)+G(v),1\}\}$$
for the upper bound. And those bounds are sharp, in the sense that, for all $x$, there is a copula $C_x$ such that
$$\sigma_{C_x,+}(F,G)(x)=H^-(x)$$
and there is (another) copula $C_x^\star$ such that
$$\sigma_{C_x^\star,+}(F,G)(x)=H^+(x)$$
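Before moving on, here is a small Python check of those two bounds (an illustration with assumed margins, not from the original post): for two Uniform(0,1) margins, the bounds must bracket the CDF of the sum under any dependence, in particular under independence, where the sum follows the triangular distribution.

```python
# Illustration: H^-(x) = sup max(F(u)+G(v)-1, 0) and
# H^+(x) = inf min(F(u)+G(v), 1), both over u + v = x,
# bracket the independent-sum CDF for Uniform(0,1) margins.
F = G = lambda t: min(max(t, 0.0), 1.0)

def bounds(x, n=1000):
    us = [x * j / n for j in range(n + 1)]   # u grid, with v = x - u
    lo = max(max(F(u) + G(x - u) - 1.0, 0.0) for u in us)
    hi = min(min(F(u) + G(x - u), 1.0) for u in us)
    return lo, hi

for x in (0.5, 1.0, 1.5):
    lo, hi = bounds(x)
    h_indep = x**2 / 2 if x <= 1 else 1 - (2 - x)**2 / 2  # triangular CDF
    assert lo <= h_indep + 1e-9 and h_indep <= hi + 1e-9
    print(f"x={x}: {lo:.3f} <= {h_indep:.3f} <= {hi:.3f}")
```

The gap between the two bounds is the price of not knowing the dependence structure.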
Thus, using those results, it is possible to bound the cumulative distribution function. But actually, all that can also be done for quantiles (see Frank, Nelsen & Schweizer (1987)). For any $F\in\mathcal{F}$, let $F^{-1}$ denote its generalized inverse, left-continuous, and let $\mathcal{F}^{-1}$ denote the set of those quantile functions. Define then the dual versions of our operators,
$$\tau^{-1}_{C,L}(F^{-1},G^{-1})(\alpha)=\inf_{C(u,v)=\alpha}\{L(F^{-1}(u),G^{-1}(v))\}$$
and
$$\rho^{-1}_{C,L}(F^{-1},G^{-1})(\alpha)=\sup_{C^\star(u,v)=\alpha}\{L(F^{-1}(u),G^{-1}(v))\}$$
Those definitions are really dual versions of the previous ones, in the sense that $\tau^{-1}_{C,L}(F^{-1},G^{-1})=[\tau_{C,L}(F,G)]^{-1}$ and $\rho^{-1}_{C,L}(F^{-1},G^{-1})=[\rho_{C,L}(F,G)]^{-1}$.
Note that if we focus on sums of bivariate distributions, the lower bound for the quantile of the sum, at level $\alpha$, is
$$\sup_{u\in[0,\alpha]}\{F^{-1}(u)+G^{-1}(\alpha-u)\}$$
while the upper bound is
$$\inf_{u\in[\alpha,1]}\{F^{-1}(u)+G^{-1}(\alpha-u+1)\}$$
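Those two quantile bounds are easy to evaluate directly. As a quick standalone check (an illustration with assumed Uniform(0,1) margins, not from the original post), both expressions have closed forms for uniform margins, $\alpha$ and $\alpha+1$, which must bracket the quantile of the independent sum.

```python
# Illustration: quantile bounds for a sum,
#   lower: sup over u in [0, a] of Finv(u) + Ginv(a - u)
#   upper: inf over u in [a, 1] of Finv(u) + Ginv(a - u + 1)
# for Uniform(0,1) margins, checked against the independent-sum quantile.
from math import sqrt

Finv = Ginv = lambda u: u   # quantile function of Uniform(0,1)

def q_bounds(a, n=1000):
    lo = max(Finv(a * j / n) + Ginv(a - a * j / n) for j in range(n + 1))
    hi = min(Finv(a + (1 - a) * j / n) + Ginv(1 - (1 - a) * j / n)
             for j in range(n + 1))
    return lo, hi

a = 0.5
lo, hi = q_bounds(a)
q_indep = sqrt(2 * a)   # quantile of the triangular law (a <= 1/2)
print(round(lo, 6), round(q_indep, 3), round(hi, 6))  # 0.5 1.0 1.5
```

Here both bounds are attained by the countermonotonic-type couplings mentioned above, so the interval $[\alpha,\alpha+1]$ cannot be improved without extra information on the dependence.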
A great thing is that it should not be too difficult to compute those quantities numerically. Perhaps a little trickier for cumulative distribution functions, since they are not defined on a bounded support. But still, assume that the goal is to plot those bounds on $[0,10]$, for instance. The code is the following, for the sum of two lognormal $LN(0,1)$ distributions.
> F=function(x) plnorm(x,0,1)
> G=function(x) plnorm(x,0,1)
> n=100
> X=seq(0,10,by=.05)
> Hinf=Hsup=rep(NA,length(X))
> for(i in 1:length(X)){
+ x=X[i]
+ U=seq(0,x,by=1/n); V=x-U
+ Hinf[i]=max(pmax(F(U)+G(V)-1,0))
+ Hsup[i]=min(pmin(F(U)+G(V),1))}
If we plot those bounds, we obtain
> plot(X,Hinf,ylim=c(0,1),type="s",col="red")
> lines(X,Hsup,type="s",col="red")
But somehow, it is even simpler to work with quantiles, since they are defined on a finite support. Quantiles are here
> Finv=function(u) qlnorm(u,0,1)
> Ginv=function(u) qlnorm(u,0,1)
The idea will be to consider a discretized version of the unit interval, as discussed in Williamson (1989), in a much more general setting. Again, the idea is to compute, for instance,
$$\sup_{u\in[0,\alpha]}\{F^{-1}(u)+G^{-1}(\alpha-u)\}$$
The idea is to consider $\alpha=i/n$ and $u=j/n$, and the lower bound for the quantile function at point $i/n$ is then
$$\max_{j=0,1,\dots,i}\{F^{-1}(j/n)+G^{-1}((i-j)/n)\}$$
The code to compute those bounds, for a given $n$, is here
> n=1000
> Qinf=Qsup=rep(NA,n-1)
> for(i in 1:(n-1)){
+ J=0:i
+ Qinf[i]=max(Finv(J/n)+Ginv((i-J)/n))
+ J=(i-1):(n-1)
+ Qsup[i]=min(Finv((J+1)/n)+Ginv((i-1-J+n)/n))
+ }
Here is what we obtain (several values of $n$ were considered, so that we can visualize the convergence of that numerical algorithm),
Here, we have a simple code to visualize bounds for quantiles of the sum of two risks. But it is possible to go further…