There is no known simpler form and we have to work with this sum as it is. This is an infinite sum. How can we compute the value of this infinite sum numerically?
In other words, sum up many of the terms and you should be close to the actual infinite sum.
This is a crude approach. The answer is not wrong (numerically) but certainly we should understand why adding up that many terms works. Also, we could have added more terms than necessary… or not enough.
So how can we compute this sum in a way that guarantees some level of precision while at the same time not adding any more terms than necessary? Unfortunately I don’t recall how to do this from my numerical methods classes, but I believe I have found an approach that works well enough for my purposes.
Geometric Sums
An infinite sum is defined as the limit of a sequence of finite sums. Let $S_N = \sum_{n = 1}^{N} a_n$, where $(a_n)$ is some sequence (we can have $a_n = \frac{1}{n}$ or $a_n = \frac{1}{2^n}$, for example; in the case of the Kolmogorov distribution, we had $a_n = (-1)^{n - 1} e^{-2 n^2 x^2}$). Then, by definition, $S = \sum_{n = 1}^{\infty} a_n = \lim_{N \to \infty} S_N$.
My attempt to compute this sum simply amounts to trying to find an $N$ such that the difference between $S$ and $S_N$ is no larger than machine precision: that is, I want $|S - S_N| < \epsilon$, where $\epsilon$ is the machine precision of the computer.
Since we don’t know what $S$ is, we can instead decide that machine convergence occurs when $|S_{N + 1} - S_N| < \epsilon$; that is, when one partial sum and the next are numerically indistinguishable. Since $S_{N + 1} - S_N = a_{N + 1}$, this criterion is the same as requiring that $|a_{N + 1}| < \epsilon$.
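This stopping rule is easy to turn into code. Here is a minimal R sketch (the function name `sum_until_machine_eps` and the `max_terms` safety cap are my own choices for illustration):

```r
# Sum the series term-by-term, stopping once the next summand is
# smaller in magnitude than machine precision.
# `term` is a function returning the n-th summand a_n.
sum_until_machine_eps <- function(term, n0 = 1, max_terms = 1e6) {
  eps <- .Machine$double.eps
  total <- 0
  n <- n0
  while (n < n0 + max_terms) {
    a <- term(n)
    if (abs(a) < eps) return(total)  # next summand is numerically negligible
    total <- total + a
    n <- n + 1
  }
  warning("no numerical convergence within max_terms summands")
  total
}

# Geometric series with r = 0.5, summed from n = 0:
sum_until_machine_eps(function(n) 0.5^n, n0 = 0)  # close to the true value 2
```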
Every sum that converges requires the condition $\lim_{n \to \infty} a_n = 0$, so this criterion always yields an $N$ that gives “numerical convergence”. Of course, any Calculus II student who was paying attention in class can tell you that not all infinite sums with summands going to zero converge, the classic counterexample being the harmonic series. So this approach would claim that sums that diverge are numerically convergent, which is bad. We cannot even expect this method to work well in all cases where the sum does converge but does so slowly (see a later example). However, in some cases this approach may be okay.
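The harmonic series makes the failure concrete. Reaching machine precision would take about $1/\epsilon \approx 4.5 \times 10^{15}$ terms, so the sketch below substitutes a looser tolerance of $10^{-6}$ (my choice, purely to keep the demo fast); the criterion then stops after a million terms and happily reports a finite value for a divergent sum:

```r
tol <- 1e-6        # stand-in for machine precision (demo only)
N <- 1e6           # the smallest N with 1/(N + 1) < tol
h <- sum(1 / (1:N))  # the "numerically converged" harmonic sum
h                  # about 14.39: finite, yet the true sum diverges
```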
Take the case of a geometric series:

$$S = \sum_{n = 0}^{\infty} r^n$$

with $|r| < 1$. These sums converge; in fact, mathematicians consider them as converging quickly. We also have a formula for what the sum is:

$$\sum_{n = 0}^{\infty} r^n = \frac{1}{1 - r}$$
After some algebra, we can quickly find a rule for determining how many summands we need to attain “numerical convergence”:

$$|r|^N < \epsilon \iff N > \frac{\log \epsilon}{\log |r|}$$

(the inequality flips because $\log |r| < 0$).
We can see that in action with some R examples:
.Machine$double.eps # Numerical accuracy of this system
[1] 2.220446e-16
log(.Machine$double.eps)/log(0.5)
[1] 52
sum(0.5^(0:53))
[1] 2
# 2 is the correct answer, but the interpreter rounds its output; is the answer
# actually 2?
sum(0.5^(0:53)) - 2 == 0
[1] TRUE
sum(0.5^(0:52)) - 2 == 0
[1] FALSE
Caveats
This method, though, should be met with suspicion. For instance, it will not work for a slowly convergent sum. Take for example $\sum_{n = 1}^{\infty} \frac{1}{n^2} = \frac{\pi^2}{6}$. If you apply the above technique, then “numerical convergence” is achieved for
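To put numbers on this, here is an R sketch using the Basel sum $\sum_{n = 1}^{\infty} \frac{1}{n^2} = \frac{\pi^2}{6}$ as the slowly convergent example (my choice of illustration). The criterion $\frac{1}{N^2} < \epsilon$ is only met after roughly $\epsilon^{-1/2} \approx 6.7 \times 10^7$ summands, and since the tail $\sum_{n > N} \frac{1}{n^2}$ behaves like $\frac{1}{N}$, the error at that point is still about $1.5 \times 10^{-8}$, nowhere near machine precision:

```r
eps <- .Machine$double.eps
N_stop <- ceiling(1 / sqrt(eps))  # summands before 1/N^2 drops below eps
N_stop                            # 2^26 = 67108864, about 6.7e7 terms
1 / N_stop                        # tail size at that point: about 1.5e-8

# Verify the tail estimate at a manageable N (N = 1e4 is my choice):
N <- 1e4
err <- pi^2 / 6 - sum(1 / (1:N)^2)
c(error = err, tail_estimate = 1 / N)  # the two agree closely
```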