32) Numerical Integration#
Last time:
Splines conditioning
Boundary Value Problems
Higher dimensions
Today:
Numerical Integration
1.1 Midpoint rule
1.2 Trapezoid rule
2. Extrapolation
Polynomial interpolation for integration
using LinearAlgebra
using Plots
default(linewidth=4, legendfontsize=12)
function vander_legendre(x, k=nothing)
    if isnothing(k)
        k = length(x) # Square by default
    end
    m = length(x)
    Q = ones(m, k)
    Q[:, 2] = x
    for n in 1:k-2
        Q[:, n+2] = ((2*n + 1) * x .* Q[:, n+1] - n * Q[:, n]) / (n + 1)
    end
    Q
end
CosRange(a, b, n) = (a + b)/2 .+ (b - a)/2 * cos.(LinRange(-pi, 0, n))
CosRange(a, b, n) = (a + b)/2 .+ (b - a)/2 * cos.(LinRange(-pi, 0, n))
CosRange (generic function with 1 method)
1. Numerical Integration#
We’re interested in computing definite integrals

$$\int_a^b f(x)\, dx$$

and will usually consider finite domains $-\infty < a < b < \infty$.
Observations:
Cost: (usually) how many times we need to evaluate the function $f(x)$
Accuracy
compare to a reference value
compare to the same method using more evaluations
Consideration: how smooth is $f$?
Some test functions#
F_expx(x) = exp(2x) / (1 + x^2)
f_expx(x) = 2*exp(2x) / (1 + x^2) - 2x*exp(2x)/(1 + x^2)^2
F_dtanh(x) = tanh(x)
f_dtanh(x) = cosh(x)^-2
integrands = [f_expx, f_dtanh]
antiderivatives = [F_expx, F_dtanh]
tests = zip(integrands, antiderivatives)
zip(Function[f_expx, f_dtanh], Function[F_expx, F_dtanh])
plot(integrands, xlims=(-1, 1))
Fundamental Theorem of Calculus#
Let $f(x)$ be continuous and define

$$F(x) = \int_a^x f(s)\, ds.$$

Then $F'(x) = f(x)$, and the definite integral can be evaluated as $\int_a^b f(x)\, dx = F(b) - F(a)$.
Method of Manufactured Solutions#
Analytically integrating an arbitrary function is hard
tends to require trickery (e.g., u substitution)
not always possible to express in closed form (e.g., elliptic integrals)
sometimes needs special functions
don’t know when to give up
On the other hand, analytic differentiation
involves straightforward application of the product rule and chain rule.
So if we just choose an arbitrary function $F(x)$,
compute $f(x) = F'(x)$ analytically,
numerically integrate $\int_a^b f(x)\, dx$,
and compare to the exact value $F(b) - F(a)$.
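As a minimal sketch of this workflow (the names `F_cube` and `f_cube` are ours, chosen just for illustration): pick an antiderivative, differentiate it by hand, and test an integrator against the exact value.

```julia
# Method of Manufactured Solutions: pick an antiderivative F, differentiate
# it analytically, then test a numerical integrator against F(b) - F(a).
F_cube(x) = x^3          # manufactured "solution" (antiderivative)
f_cube(x) = 3x^2         # its analytic derivative: the integrand under test

a, b = -2, 2
n = 1000
dx = (b - a) / n
xs = LinRange(a + dx/2, b - dx/2, n)   # midpoint rule as the method under test
I_num = sum(f_cube.(xs)) * dx
I_exact = F_cube(b) - F_cube(a)        # 8 - (-8) = 16
println("error = ", I_num - I_exact)
```

Any integrator can be swapped in as the method under test; the manufactured antiderivative supplies the reference value.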
Newton-Cotes methods#
Main idea: Approximate $f(x)$ on each subinterval by a polynomial through equally spaced points, then integrate the polynomial exactly.
1.1 Midpoint Riemann Sum method#
function fint_midpoint(f, a, b; n=20)
    dx = (b - a) / n
    x = LinRange(a + dx/2, b - dx/2, n)
    sum(f.(x)) * dx
end
for (f, F) in tests
    a, b = -2, 2
    I_num = fint_midpoint(f, a, b, n=20)
    I_analytic = F(b) - F(a)
    println("$f: $I_num error=$(I_num - I_analytic)")
end
f_expx: 10.885522849146847 error=-0.03044402970425253
f_dtanh: 1.9285075531458646 error=0.00045239299423083246
How does the accuracy change as we use more points?
function plot_accuracy(fint, tests, ns; ref=[1,2])
    a, b = -2, 2
    p = plot(xscale=:log10, yscale=:log10, xlabel="n", ylabel="error")
    for (f, F) in tests
        Is = [fint(f, a, b, n=n) for n in ns]
        Errors = abs.(Is .- (F(b) - F(a)))
        scatter!(ns, Errors, label=f)
    end
    for k in ref
        plot!(ns, ns.^(-1. * k), label="\$n^{-$k}\$")
    end
    p
end
plot_accuracy(fint_midpoint, tests, 2 .^ (0:10))
1.2 Trapezoid Rule#
The trapezoid rule uses piecewise linear functions on each interval.
Can you get to the same result using a geometric argument?
What happens when we sum over a bunch of adjacent intervals?
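One way to answer: integrating the linear interpolant on a single interval of width $h$ gives $\frac h2\big(f(x_i) + f(x_{i+1})\big)$, and when we sum over adjacent intervals every interior point is shared by two trapezoids, so only the endpoints keep the factor of $\frac12$:

```latex
\int_a^b f(x)\, dx \approx h \left[ \tfrac{1}{2} f(x_1) + f(x_2) + \cdots + f(x_{n-1}) + \tfrac{1}{2} f(x_n) \right],
\qquad h = \frac{b - a}{n - 1}
```

This is exactly the sum the code below computes by halving the first and last entries.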
Code for the Trapezoid Rule:
function fint_trapezoid(f, a, b; n=20)
    dx = (b - a) / (n - 1)
    x = LinRange(a, b, n)
    fx = f.(x)
    fx[1] /= 2
    fx[end] /= 2
    sum(fx) * dx
end
plot_accuracy(fint_trapezoid, tests, 2 .^ (1:10))
2. Extrapolation#
Let’s switch our plot around to use the grid spacing $h$ rather than the number of points $n$ on the horizontal axis.
function plot_accuracy_h(fint, tests, ns; ref=[1,2])
    a, b = -2, 2
    p = plot(xscale=:log10, yscale=:log10, xlabel="h", ylabel="error",
             legend=:bottomright)
    hs = (b - a) ./ ns
    for (f, F) in tests
        Is = [fint(f, a, b, n=n) for n in ns]
        Errors = abs.(Is .- (F(b) - F(a)))
        scatter!(hs, Errors, label=f)
    end
    for k in ref
        plot!(hs, .1 * hs.^k, label="\$h^{$k}\$")
    end
    p
end
plot_accuracy_h (generic function with 1 method)
plot_accuracy_h(fint_midpoint, tests, 2 .^ (2:10))
The trapezoid rule with $n$ points has spacing $h = \frac{b-a}{n-1}$.
Let $I_h$ denote the trapezoid approximation with spacing $h$ and $I_0$ the exact integral. For smooth integrands,

$$I_h = I_0 + c h^2 + O(h^4)$$

for some as-yet unknown constant $c$.
If we can determine $c h^2$, we can cancel the leading error term.
For sufficiently small $h$, doubling the spacing gives

$$I_{2h} = I_0 + 4 c h^2 + O(h^4).$$

Subtracting these two lines, we have

$$I_{2h} - I_h = 3 c h^2 + O(h^4),$$

which can be solved for $c h^2 = (I_{2h} - I_h)/3$.
Substituting back into the first equation, we solve for

$$I_0 = I_h - \frac{I_{2h} - I_h}{3} + O(h^4).$$
This is called Richardson extrapolation.
Extrapolation code:
function fint_richardson(f, a, b; n=20)
    n = div(n, 2) * 2 + 1  # ensure an odd number of points
    h = (b - a) / (n - 1)
    x = LinRange(a, b, n)
    fx = f.(x)
    fx[[1, end]] /= 2
    I_h = sum(fx) * h
    I_2h = sum(fx[1:2:end]) * 2h  # coarse trapezoid reuses every other point
    I_h + (I_h - I_2h) / 3
end
plot_accuracy_h(fint_richardson, tests, 2 .^ (2:10), ref=1:5)
Observations:
we now have a sequence of accurate approximations
it’s possible to apply extrapolation recursively
works great if you have a power of 2 number of points
and your function is nice enough
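Applying the extrapolation recursively gives Romberg integration. A minimal sketch (the function `romberg` is our own, not from a library): each level doubles the number of trapezoid intervals, and each extrapolation column cancels the next even power of $h$ in the error.

```julia
# Romberg integration: R[k, 1] is the trapezoid rule with 2^(k-1) intervals;
# column j applies Richardson extrapolation to cancel the next error term.
function romberg(f, a, b; levels=5)
    R = zeros(levels, levels)
    h = b - a
    R[1, 1] = h / 2 * (f(a) + f(b))
    for k in 2:levels
        h /= 2
        # Refining reuses all previous evaluations; only midpoints are new.
        R[k, 1] = R[k-1, 1] / 2 + h * sum(f(a + (2i - 1) * h) for i in 1:2^(k-2))
        for j in 2:k  # recursive Richardson extrapolation across levels
            R[k, j] = R[k, j-1] + (R[k, j-1] - R[k-1, j-1]) / (4^(j-1) - 1)
        end
    end
    R[levels, levels]
end

romberg(x -> cosh(x)^-2, -2, 2, levels=8)  # compare to tanh(2) - tanh(-2)
```

With 8 levels this uses only $2^7 + 1 = 129$ function evaluations, but for smooth integrands it is far more accurate than the plain trapezoid rule on the same points.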
F_sin(x) = sin(30x)
f_sin(x) = 30*cos(30x)
plot_accuracy_h(fint_richardson, zip([f_sin], [F_sin]), 2 .^ (2:10), ref=1:5)
3. Polynomial interpolation for integration#
x = LinRange(-1, 1, 100)
P = vander_legendre(x, 10)
plot(x, P, legend=:none)
Main ideas:
Sample the function at some points
Fit a polynomial through those points
Return the integral of that interpolating polynomial
Questions:
What points do we sample on?
How do we integrate the interpolating polynomial?
Recall that the Legendre polynomials $P_n(x)$ satisfy $\int_{-1}^1 P_0(x)\, dx = 2$ and $\int_{-1}^1 P_n(x)\, dx = 0$ for all $n \ge 1$. So if we express the interpolating polynomial in the Legendre basis, its integral over $(-1, 1)$ is just twice the coefficient of $P_0$.
Integration using Legendre polynomials#
function plot_accuracy_n(fint, tests, ns; ref=[1,2])
    a, b = -2, 2
    p = plot(xscale=:log10, yscale=:log10, xlabel="n", ylabel="error",
             legend=:bottomright)
    for (f, F) in tests
        Is = [fint(f, a, b, n=n) for n in ns]
        Errors = abs.(Is .- (F(b) - F(a)))
        scatter!(ns, Errors, label=f)
    end
    p
end
plot_accuracy_n (generic function with 1 method)
plot_accuracy_n (generic function with 1 method)
function fint_legendre(f, a, b; n=20)
    x = CosRange(-1, 1, n)
    P = vander_legendre(x)
    x_ab = (a+b)/2 .+ (b-a)/2*x  # map sample points from (-1,1) to (a,b)
    c = P \ f.(x_ab)
    (b - a) * c[1]
end
fint_legendre(x -> 1 + x, -1, 1, n=2)
2.0
p = plot_accuracy_h(fint_legendre, tests, 4:20, ref=1:5)
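One sanity check we can run: with $n$ points the interpolant has degree $n-1$, so `fint_legendre` should integrate any polynomial of degree up to $n-1$ exactly (to rounding). A sketch, repeating the definitions from above so it runs standalone:

```julia
# Standalone copies of the earlier definitions.
function vander_legendre(x, k=length(x))
    m = length(x)
    Q = ones(m, k)
    Q[:, 2] = x
    for n in 1:k-2
        Q[:, n+2] = ((2n + 1) * x .* Q[:, n+1] - n * Q[:, n]) / (n + 1)
    end
    Q
end
CosRange(a, b, n) = (a + b)/2 .+ (b - a)/2 * cos.(LinRange(-pi, 0, n))
function fint_legendre(f, a, b; n=20)
    x = CosRange(-1, 1, n)
    P = vander_legendre(x)
    x_ab = (a + b)/2 .+ (b - a)/2 * x
    c = P \ f.(x_ab)
    (b - a) * c[1]
end

# A degree-3 integrand needs only n = 4 points to be integrated exactly:
# the antiderivative of x^3 - x + 1 is x^4/4 - x^2/2 + x, so the exact
# integral over (0, 2) is 4.
fint_legendre(x -> x^3 - x + 1, 0, 2, n=4)
```

This exactness for degree $n-1$ (and the spectral convergence visible in the plot above for smooth integrands) is what distinguishes interpolatory quadrature from the fixed-order Newton-Cotes rules.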