Numerical differentiation is a method of approximating the derivative of a function \(f\) at a particular value \(x\). Often, particularly in physics and engineering, a function may be too complicated to merit the work necessary to find its exact derivative, or the function itself may be unknown, with only some points \(x\) and the function's values at those points available. Numerical differentiation, of which finite differences is just one approach, avoids these complications by approximating the derivative.
The most straightforward and simple approximation of the first derivative is defined as:
\( f^\prime (x) \approx \frac{f(x + h) - f(x)}{h}, \qquad h > 0 \)

The approximation error of this equation can be found by performing a Taylor expansion of \(f(x + h)\) about \(x\), which gives:
\( f(x + h) = f(x) + hf^\prime (x) + \frac{h^2}{2} f^{\prime\prime}(\epsilon) \)

We then rearrange the expansion and solve for \(f^\prime (x)\) to arrive at an exact form of the approximation equation:
\( f^\prime (x) = \frac{f(x + h) - f(x)}{h} - \frac{h}{2} f^{\prime\prime}(\epsilon), \qquad \epsilon \in (x, x + h) \)
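As a quick check of this error behavior, here is a minimal R sketch using \(\sin x\) at \(x = 1\) as a made-up test case (the true derivative there is \(\cos 1\)); since the error term is proportional to \(h\), halving \(h\) should roughly halve the error:

# Forward-difference error for the test function sin(x) at x = 1;
# the true derivative there is cos(1)
for (h in c(0.1, 0.05, 0.025)) {
  fd <- (sin(1 + h) - sin(1)) / h
  cat(sprintf('h = %.3f  error = %.2e\n', h, abs(fd - cos(1))))
}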
Finite Differences Methods for Numerical Differentiation

There are three primary differencing techniques for approximating the derivative of a function at a particular point: forward, backward, and central. Note there are more accurate methods and algorithms for approximating derivatives; however, due to their relative ease of computation and reasonable accuracy, differencing methods remain a viable tool for performing numerical differentiation.
The forward difference approximation is the same as the earlier approximation:
\( f^\prime (x) \approx \frac{f(x + h) - f(x)}{h} \)

The backward difference approximation is based on the values of the function evaluated at \(x - h\) and \(x\), defined as:
\( f^\prime (x) \approx \frac{f(x) - f(x - h)}{h} \)

The central (or centered) difference approximation is essentially an average of the forward and backward differencing methods:
\( f^\prime (x) \approx \frac{f(x + h) - f(x - h)}{2h} \)
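Each of these three definitions translates directly into a one-line R function; a minimal sketch (the helper names are my own):

forward  <- function(f, x, h) (f(x + h) - f(x)) / h
backward <- function(f, x, h) (f(x) - f(x - h)) / h
central  <- function(f, x, h) (f(x + h) - f(x - h)) / (2 * h)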
Error of Approximations

It was shown earlier that the forward and backward difference approximations have an error term of \(\frac{h}{2} f^{\prime\prime}(\epsilon)\), where \(\epsilon \in (x, x + h)\) for the forward difference and \(\epsilon \in (x - h, x)\) for the backward difference. The error term of the central difference method can be found by performing Taylor expansions of \(f(x + h)\) and \(f(x - h)\) about \(x\):
\( f(x + h) = f(x) + hf^\prime (x) + \frac{h^2}{2} f^{\prime\prime}(x) + \frac{h^3}{6} f^{\prime\prime\prime}(\epsilon_1), \qquad \epsilon_1 \in (x, x + h) \)

\( f(x - h) = f(x) - hf^\prime (x) + \frac{h^2}{2} f^{\prime\prime}(x) - \frac{h^3}{6} f^{\prime\prime\prime}(\epsilon_2), \qquad \epsilon_2 \in (x - h, x) \)
Subtracting the second expansion from the first and solving for \(f^\prime (x)\) leads to the central difference method with its error term:
\( f^\prime (x) = \frac{f(x + h) - f(x - h)}{2h} - \frac{h^2}{12}\left( f^{\prime\prime\prime}(\epsilon_1) + f^{\prime\prime\prime}(\epsilon_2) \right) \)

The central difference approximation is more accurate than the forward and backward differences and should be used whenever possible. The three difference methods give similar approximations in the following example since the function and its derivative are rather simple; however, it is still best to apply the central difference approximation in actual practice.
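To see the difference in convergence order numerically, here is a short sketch using the one-line helpers defined above on the same \(\sin x\) test case at \(x = 1\): halving \(h\) should roughly halve the forward-difference error (\(O(h)\)) but quarter the central-difference error (\(O(h^2)\)):

# Compare convergence order of the forward and central differences
for (h in c(0.1, 0.05, 0.025)) {
  cat(sprintf('h = %.3f  forward error = %.2e  central error = %.2e\n',
              h, abs(forward(sin, 1, h) - cos(1)),
              abs(central(sin, 1, h) - cos(1))))
}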
Differencing Approximations Example in R
Consider the following set of data points:
| \(x\) | \(f(x)\) |
|-------|----------|
| 0.0   | 0.00000  |
| 0.2   | 0.74140  |
| 0.4   | 1.37180  |
x <- c(0.0, 0.2, 0.4)
fx <- c(0.00000, 0.74140, 1.37180)
The function is unknown; however, we would still like to approximate the function’s derivative at the values of \(x\). The following function combines the forward and backward differencing methods to approximate the derivative of the function at each value of \(x\).
finite.differences <- function(x, y) {
  if (length(x) != length(y)) {
    stop('x and y vectors must have equal length')
  }
  n <- length(x)
  # Initialize a vector of length n to hold the derivative approximations
  fdx <- vector(length = n)
  # Iterate through the values using the forward differencing method
  for (i in 2:n) {
    fdx[i - 1] <- (y[i - 1] - y[i]) / (x[i - 1] - x[i])
  }
  # For the last value, since we are unable to perform the forward differencing
  # method as only the first n values are known, we use the backward differencing
  # approach instead. Note this will give the same value as the last iteration
  # of the forward differencing method, but it is used as an approximation as we
  # don't have any more information.
  fdx[n] <- (y[n] - y[n - 1]) / (x[n] - x[n - 1])
  return(fdx)
}
Using the function, approximate the derivative at each value of \(x\) from the evaluated points \(f(x)\).
finite <- finite.differences(x, fx)
finite

## [1] 3.707 3.152 3.152
Let’s assume the data from earlier was taken from the function \(e^x - 2x^2 + 3x - 1\).
f <- function(x) {
  return(exp(x) - 2 * x^2 + 3 * x - 1)
}
Using the central differencing method and the given function, we can approximate the derivative at each point. As \(h\) approaches 0, the approximation becomes more accurate. The following function approximates the derivative at a given point for several successively smaller values of \(h\) to demonstrate how the approximations converge.
central.difference <- function(f, x) {
  # Progressively smaller step sizes to show convergence of the approximation
  steps <- c(0.1, 0.01, 0.001, 0.0001, 0.00001, 0.000001, 0.0000001)
  n <- length(steps)
  fdx <- vector(length = n)
  for (i in 1:n) {
    # Central difference with spacing steps[i]: f(x + h/2) - f(x - h/2) over h
    fdx[i] <- (f(x + 0.5 * steps[i]) - f(x - 0.5 * steps[i])) / steps[i]
  }
  return(fdx)
}
Print the approximations of the derivative.
for (i in x) {
  print(central.difference(f, i))
}

## [1] 4.000417 4.000004 4.000000 4.000000 4.000000 4.000000 4.000000
## [1] 3.421912 3.421408 3.421403 3.421403 3.421403 3.421403 3.421403
## [1] 2.892446 2.891831 2.891825 2.891825 2.891825 2.891825 2.891825
We see the approximations converge rather quickly as \(h\) approaches 0, giving the following approximations:
\( f^\prime (0.0) \approx 4.000000 \qquad f^\prime (0.2) \approx 3.421403 \qquad f^\prime (0.4) \approx 2.891825 \)

The derivative of the function is \(e^x - 4x + 3\). We shall compare our approximated values with the actual values of the derivative at each \(x\) to see how the central differencing method fared.
fdx <- function(x) {
  return(exp(x) - 4 * x + 3)
}
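As a quick sanity check on the hand-computed derivative, base R's D() (from the stats package) differentiates an unevaluated expression symbolically:

D(expression(exp(x) - 2 * x^2 + 3 * x - 1), 'x')

## exp(x) - 2 * (2 * x) + 3

which simplifies to \(e^x - 4x + 3\), matching the fdx function above.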
For fun, plot both the function and its derivative to get a visualization of the derivative's behavior at the values of \(x\).
library(ggplot2)

ggplot(data = data.frame(x = 0), mapping = aes(x = x)) +
  stat_function(fun = f, size = 1.25, alpha = 0.75) +
  stat_function(fun = fdx, size = 1.25, color = 'blue', alpha = 0.75) +
  xlim(-3, 3)
Print the calculated derivative’s values for each \(x\) and compare them with our previously approximated values.
actual <- vector(length = length(x))
central.approx <- c(4.000000, 3.421403, 2.891825)

for (i in 1:length(x)) {
  actual[i] <- fdx(x[i])
}

approx.df <- data.frame(cbind(actual, central.approx, actual - central.approx,
                              finite, actual - finite))
colnames(approx.df) <- c('Actual Values', 'Central Difference Approx',
                         'Central Differences Error', 'Finite Differences',
                         'Finite Differences Error')
approx.df

##   Actual Values Central Difference Approx Central Differences Error
## 1      4.000000                  4.000000              0.000000e+00
## 2      3.421403                  3.421403             -2.418398e-07
## 3      2.891825                  2.891825             -3.023587e-07
##   Finite Differences Finite Differences Error
## 1              3.707                0.2930000
## 2              3.152                0.2694028
## 3              3.152               -0.2601753
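As an optional cross-check (assuming the third-party numDeriv package is installed), numDeriv::grad(), which uses Richardson extrapolation by default, should agree closely with the central difference approximations:

if (requireNamespace('numDeriv', quietly = TRUE)) {
  # Derivative at each x via Richardson extrapolation; compare with central.approx
  sapply(x, function(xi) numDeriv::grad(f, xi))
}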