11) Stability#
Last time#
Newton’s method
Convergence of fixed-point method
Different formulation of Newton’s method
Today#
Forward and backward stability
Lax equivalence theorem
Recap and Q/A
using Plots
default(linewidth=4, legendfontsize=12)  # plot defaults used throughout the notebook
1. Stability#
Activity
Read Nick Higham's blog post What Is Numerical Stability? and discuss in small groups
Share insights in class discussion
Source: FNC: backward error
(Forward) Stability#
“nearly the right answer to nearly the right question”
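One standard way to make this precise (following Trefethen & Bau; the notation \(\tilde f\) for the algorithm is ours): \(\tilde f\) is stable if for each input \(x\),
\[
\frac{\lVert \tilde f(x) - f(\tilde x) \rVert}{\lVert f(\tilde x) \rVert} \in O(\epsilon_{\text{machine}})
\]
for some \(\tilde x\) with \(\lVert \tilde x - x \rVert / \lVert x \rVert \in O(\epsilon_{\text{machine}})\): nearly the right answer (\(\tilde f(x) \approx f(\tilde x)\)) to nearly the right question (\(\tilde x \approx x\)).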
Backward Stability#
“exactly the right answer to nearly the right question”
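In the same notation: \(\tilde f\) is backward stable if for each input \(x\),
\[
\tilde f(x) = f(\tilde x) \qquad \text{for some } \tilde x \text{ with } \frac{\lVert \tilde x - x \rVert}{\lVert x \rVert} \in O(\epsilon_{\text{machine}}),
\]
i.e., exactly the right answer to nearly the right question.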
Note:
Every backward stable algorithm is stable (backward stability \(\implies\) stability).
Not every stable algorithm is backward stable.
The difference is in the focus: forward analysis is concerned with the answer the method produces, while backward analysis is concerned with the problem that answer exactly solves (this is also why conditioning is a property of the problem, while stability is a property of the method).
In a backward stable algorithm the errors introduced during the algorithm have the same effect as a small perturbation in the data.
If the backward error is the same size as any uncertainty in the data, then the algorithm produces as good a result as we can expect.
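As a concrete check (a minimal sketch we add here, not from the reading): solving a linear system with Julia's `\` (LU with partial pivoting) is backward stable in practice, and the normwise relative backward error can be measured directly.

```julia
using LinearAlgebra

A = randn(100, 100); b = randn(100)
x = A \ b                                  # LU with partial pivoting
# one common normalization of the normwise relative backward error
η = norm(b - A * x) / (norm(A) * norm(x))
η / eps()                                  # typically O(1): backward error ≈ machine ε
```

A small \(\eta\) says the computed \(x\) exactly solves a nearby system — precisely the "small perturbation in the data" above.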
Accuracy of backward stable algorithms (Theorem)#
A backward stable algorithm for computing \(f(x)\) has relative accuracy
\[
\frac{\lVert \tilde f(x) - f(x) \rVert}{\lVert f(x) \rVert} \in O\big(\kappa(f)\, \epsilon_{\text{machine}}\big),
\]
where \(\kappa(f)\) is the relative condition number of the problem.
In practice, it is rarely possible for an algorithm to be backward stable when the output space has higher dimension than the input space.
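A quick numerical check of the theorem (our sketch; summation is backward stable, and its relative condition number is \(\kappa = \sum_i |x_i| / \lvert \sum_i x_i \rvert\)):

```julia
x = randn(10_000)
κ = sum(abs, x) / abs(sum(x))      # condition number of summation
exact = sum(big.(x))               # extended-precision reference value
relerr = Float64(abs(sum(x) - exact) / abs(exact))
(relerr, κ * eps())                # observed error is bounded by ~κ ε
```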
2. Lax equivalence theorem#
Consistency: describes how well the numerical scheme approximates the PDE (a Finite Difference (FD) discretization of at least order 1 \(\implies\) consistent: the truncation residual reduces under grid refinement). We will see FD in more detail when we cover numerical differentiation.
Stability: Numerical stability concerns how errors introduced during the execution of an algorithm affect the result. It is a property of an algorithm rather than the problem being solved (check Higham’s blog again). This gets subtle for problems like incompressible materials or contact mechanics.
Convergence: the solution of the discretized equation approaches the true solution of the continuous equation as the discretization is refined.
Consistency + Convergence \(\implies\) Stability
Consistency + Stability \(\implies\) Convergence
Hence,
Lax equivalence theorem: for a consistent discretization of a well-posed linear initial value problem, Stability \(\iff\) Convergence.
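A minimal sketch (our illustration, not from the original notes) of why stability is the crucial ingredient: forward Euler for the heat equation \(u_t = u_{xx}\) is consistent for any \(\lambda = \Delta t/\Delta x^2\), but it is stable — and hence convergent, by Lax — only for \(\lambda \le 1/2\).

```julia
function heat_step!(u, λ)
    unew = copy(u)
    for i in 2:length(u)-1
        unew[i] = u[i] + λ * (u[i-1] - 2u[i] + u[i+1])
    end
    u .= unew
end

function heat_demo(λ; n=50, steps=200)
    x = range(0, 1, length=n)
    u = sin.(π .* x)              # smooth initial condition, u ≈ 0 at the ends
    for _ in 1:steps
        heat_step!(u, λ)
    end
    maximum(abs, u)               # max amplitude after `steps` steps
end

heat_demo(0.4), heat_demo(0.6)    # λ = 0.4 decays; λ = 0.6 blows up from rounding noise
```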
3. Recap and Q/A#
All good, but in practice?#
These are foundational theoretical tools, and can often be tested in practice, but
they don’t establish that the code works
it’s not possible to prove convergence for many real-world problems
there are open research questions about whether numerous important problems are even well-posed
Empirical measurement of convergence#
Convergence on a problem with an analytical solution:
These can be great, but analytical solutions of nonlinear equations are extremely hard to find.
Such solutions usually have special symmetries and rarely exercise every term in the equation.
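For instance (a sketch, not from the notes): centered differences for \(f'\) have an analytical solution to compare against, and the observed order of accuracy should match the design order 2.

```julia
f(x) = cos(x); fp(x) = -sin(x)               # problem with a known analytical solution
hs = [2.0^-k for k in 2:8]
errs = [abs((f(1 + h) - f(1 - h)) / 2h - fp(1)) for h in hs]
log2.(errs[1:end-1] ./ errs[2:end])          # observed order ≈ 2
```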
Self-convergence:
Set up a problem and solve it on a sequence of meshes, use a refined solution as the reference (perhaps improved by Richardson extrapolation), then plot the error.
You’ve checked that the code converges to some solution, not necessarily the correct solution. (You could have a factor of 2 typo, or a more serious mistake.)
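A self-convergence sketch (our example), with the composite trapezoid rule standing in for the solver and the finest grid as the reference:

```julia
trap(g, n) = (x = range(0, 1, length=n+1); (sum(g, x) - (g(0.0) + g(1.0))/2) / n)
g(x) = exp(sin(3x))                  # integrand with no convenient closed-form integral
ns = [10, 20, 40, 80]
ref = trap(g, 16 * ns[end])          # refined solution used as the reference
errs = [abs(trap(g, n) - ref) for n in ns]
log2.(errs[1:end-1] ./ errs[2:end])  # ≈ 2: converges to *some* answer at order 2
```

Note that this study would look just as convincing if `trap` scaled its result by an erroneous constant factor: the reference would be wrong by the same factor, and the observed rates would not change.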
Method of Manufactured Solutions (MMS):#
Errors that affect solution accuracy can easily be detected.
There are some technical issues with singular or under-resolved problems (shocks, material discontinuities).
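A minimal MMS sketch (our example): manufacture \(u(x) = \sin(\pi x)\), derive the forcing for \(-u'' = f\) with homogeneous Dirichlet conditions, and verify second-order convergence of a centered difference solver.

```julia
using LinearAlgebra

u_mms(x) = sin(π * x)                         # manufactured solution
forcing(x) = π^2 * sin(π * x)                 # f = -u'' for the manufactured u
function poisson_error(n)                     # solve -u'' = f on (0,1), u(0)=u(1)=0
    h = 1 / (n + 1)
    x = h .* (1:n)                            # interior grid points
    A = SymTridiagonal(fill(2.0, n), fill(-1.0, n - 1)) / h^2
    u = A \ forcing.(x)
    maximum(abs, u .- u_mms.(x))              # max-norm error vs. manufactured solution
end
errs = [poisson_error(n) for n in (16, 32, 64)]
log2.(errs[1:end-1] ./ errs[2:end])           # ≈ 2: scheme converges at design order
```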