It is a fact of life that numerical approximations to differential equations may exhibit unstable behavior. That is, computational results may include exponentially growing and sometimes oscillating features that bear no relation to the solution of the original differential equation. This type of behavior is referred to as a computational instability.

It is important to distinguish computational instability from physical instabilities, which may occur in some physical problems. In practice, the distinction between the two is not always obvious. When a computed solution has features that change sign with each step of the finite-difference increments (in time or space), the instability is most likely associated with the numerical approximation. Changes in sign during time advancement can often be cured by decreasing the size of the time increment, while changes in sign with space increments might indicate a lack of computational resolution, which a reduction in the size of the space increments might eliminate.

In general, it would be desirable to have a means of predicting when computational as opposed to physical instabilities can occur. Unfortunately, no general method has yet been devised. There are, however, some analysis techniques that often provide enough guidance to avoid the majority of frequently encountered computational instabilities. Some of these techniques will be described in this introductory article.

## A Simple Example of Computational Instability

The following simple example provides a useful introduction to the subject of computational stability. Consider the simple rate equation,

(1) \(\displaystyle \frac{dQ}{dt}=-AQ,\)

where \(Q=Q_0\) at \(t=0\). The exact solution of this equation is an exponential decay, \(Q=Q_0 e^{-At}\).

For a numerical approximation, suppose time is discretized into equal steps of size δt and that t=nδt, where n is an integer. Numerically computed values of Q will be denoted by a lowercase q to distinguish them from the exact solution, and a superscript n will be used to denote the time step, so that \(q^n\) is the approximation to \(Q(n\delta t)\). A finite-difference approximation to Eq. 1 can be written as,

(2) \(\displaystyle {{q}^{n+1}}-{{q}^{n}}=-A\delta t{{q}^{n}},\)

where \(q^0=Q_0\). Because of its simplicity, there is an exact solution to Eq. 2,

(3) \(\displaystyle {{q}^{n}}={{\left( 1-A\delta t \right)}^{n}}{{q}^{0}}\)

The interesting thing is to observe how the values of \(q^n\) evolve with n as the value of Aδt is changed. The behavior depends on the magnitude of the factor (1-Aδt), which is raised to the power n (Fig. 1).

When Aδt is greater than 2.0, the factor is negative with a magnitude greater than 1.0, so \(q^n\) alternates in sign with increasing n while its magnitude grows rapidly. This is a clear case of computational instability, in which the numerical results provide no useful information about the evolution of q. In contrast, if Aδt is less than 1.0, the factor is positive with a magnitude less than 1.0. In this case the evolution of \(q^n\) is a decent approximation of the exponential decay of the exact solution, with values lying a little below the exact ones. Intermediate values of Aδt (between 1.0 and 2.0) result in \(q^n\) values that decay, but with an oscillation in sign each time step. Based on these observations, the finite-difference approximation Eq. 2 is a stable and reasonable approximation to the exact equation provided the time step δt is limited by the “stability condition” Aδt<1.0.
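The three regimes are easy to reproduce in a few lines of code. The sketch below (in Python; the value A = 1.0 and the step count are illustrative choices, not from the original) simply iterates Eq. 2:

```python
def evolve(A, dt, q0=1.0, steps=20):
    """Iterate Eq. 2, q^{n+1} = (1 - A*dt) * q^n, returning the whole history."""
    q = [q0]
    for _ in range(steps):
        q.append((1.0 - A * dt) * q[-1])
    return q

A = 1.0  # illustrative decay rate
stable = evolve(A, dt=0.5)        # A*dt = 0.5 < 1: smooth monotone decay
oscillatory = evolve(A, dt=1.5)   # 1 < A*dt < 2: decay with alternating sign
unstable = evolve(A, dt=2.5)      # A*dt > 2: rapid growth with alternating sign
```

Printing the three sequences shows monotone decay, sign-alternating decay, and sign-alternating blow-up, matching the discussion above.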

## Linear Stability Analysis of von Neumann

One of the 20th century’s most famous mathematicians, John von Neumann played an important role in the development of the modern computer. So it may be no surprise that he also pioneered analytical techniques for studying the properties of finite-difference equations. His approach to evaluating the computational stability of a difference equation employs a Fourier series method and is best described in References 1 and 2.

The approach von Neumann used is based on two assumptions. First, that the difference equation can be linearized with respect to a small perturbation in the solution. Second, that there exists a set of overlapping regions in each of which the coefficients of terms in the difference equation for the perturbation can be considered locally constant. When these conditions are satisfied a Fourier series can be used in each region to compute the local behavior of the perturbation.

The solution for the perturbation \(P_j^n\), at discrete location j and time step n, is sought as a sum of Fourier terms having the form

(4) \(\displaystyle P_{j}^{n}\propto {{r}^{n}}{{e}^{ik{{x}_{j}}}},\)

where \(x_j\) is the location of the jth finite-difference point, k is the wave number of the mode, and r is a time amplitude. Because of the linear nature of the difference equation, it is only necessary to consider the behavior of a typical mode. Substitution of the typical mode into the linearized difference equation leads to an algebraic equation for the time amplitude r that must be satisfied if there is to be a solution of the postulated form. A full solution for the perturbation could then be constructed from a sum of Fourier modes over a range of wave numbers.

Numerical stability implies that as time increases (i.e., with increasing n) the magnitude of each mode must not grow unboundedly, and this means that the magnitude of r must be less than or equal to unity.

## An Example of Linear Stability Analysis

An example will make von Neumann’s technique clear. For illustration purposes an advection-diffusion equation is a good test case,

(5) \(\displaystyle \frac{\partial u}{\partial t}+c\frac{\partial u}{\partial x}=\nu \frac{{{\partial }^{2}}u}{\partial {{x}^{2}}},\)

which describes the convection and diffusion of a function u(x,t). The convection velocity c and diffusion coefficient ν are constant. Solutions of this equation are known to be well-behaved (i.e., physically stable).

A simple finite-difference approximation using constant time steps δt and spatial steps δx for \(\displaystyle u_{j}^{n}=u\left( j\delta x,n\delta t \right)\) is

(6) \(\displaystyle \frac{u_{j}^{n+1}-u_{j}^{n}}{\delta t}=-\frac{c}{2\delta x}\left( u_{j+1}^{n}-u_{j-1}^{n} \right)+\frac{\nu }{\delta {{x}^{2}}}\left( u_{j+1}^{n}-2u_{j}^{n}+u_{j-1}^{n} \right)\)
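For concreteness, one update of this scheme can be sketched as follows (assuming NumPy and periodic boundaries; the function name and parameter values are illustrative and chosen to satisfy the stability limits derived later in this article):

```python
import numpy as np

def step(u, c, nu, dt, dx):
    """One explicit update of Eq. 6; periodic boundaries via np.roll."""
    up = np.roll(u, -1)   # u_{j+1}
    um = np.roll(u, +1)   # u_{j-1}
    return (u
            - c * dt / (2.0 * dx) * (up - um)          # centered advection
            + nu * dt / dx**2 * (up - 2.0 * u + um))   # centered diffusion

# Illustrative parameters satisfying the stability conditions of Eq. 13
dx, dt, c, nu = 0.02, 0.001, 1.0, 0.1
x = np.arange(0.0, 1.0, dx)
u0 = np.sin(2.0 * np.pi * x)   # a single smooth Fourier mode
u = u0.copy()
for _ in range(100):
    u = step(u, c, nu, dt, dx)
```

With these parameters the sine wave is advected and slowly damped, as the exact solution requires; its amplitude decreases but remains bounded.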

Inserting von Neumann’s mode Eq. 4 into this equation, with \(x_j=j\delta x\), and solving for r gives

(7) \(\displaystyle r=1-\left( \frac{ic\delta t}{\delta x} \right)\sin \left( k\delta x \right)-\left( \frac{2\nu \delta t}{\delta {{x}^{2}}} \right)\left[ 1-\cos \left( k\delta x \right) \right].\)

Because r has both real and imaginary parts, the square of its magnitude is the sum of the squares of those parts,

(8) \(\displaystyle {{r}^{2}}={{\left( 1-\frac{2\nu \delta t}{\delta {{x}^{2}}}\left( 1-\cos \left( k\delta x \right) \right) \right)}^{2}}+{{\left( \frac{c\delta t}{\delta x}\sin \left( k\delta x \right) \right)}^{2}}.\)

From this result, limits on ν, c, and δt can be determined that ensure that the magnitude of r remains less than or equal to 1.0, the requirement for not having an unbounded solution of the form Eq. 4.
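One way to explore these limits is to evaluate r over the full range of resolvable phases kδx. The sketch below (assuming NumPy; the parameter values are illustrative and satisfy the stability conditions derived later) computes the complex r of Eq. 7 and checks it against the \(r^2\) expression of Eq. 8:

```python
import numpy as np

c, nu, dt, dx = 0.5, 0.05, 0.05, 0.1    # illustrative, stable parameter values
k_dx = np.linspace(0.0, np.pi, 1001)    # all resolvable phases k*dx

# Complex amplification factor r from Eq. 7
r = (1.0
     - 1j * (c * dt / dx) * np.sin(k_dx)
     - (2.0 * nu * dt / dx**2) * (1.0 - np.cos(k_dx)))

# Squared magnitude from Eq. 8; must agree with |r|^2 from Eq. 7
r2 = (1.0 - 2.0 * nu * dt / dx**2 * (1.0 - np.cos(k_dx)))**2 \
     + (c * dt / dx * np.sin(k_dx))**2
```

For these values the maximum of \(r^2\) over all phases is exactly 1.0, attained at kδx=0, so no mode grows.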

In general, the extreme values of r occur when the sine and cosine factors are at their extreme values of 0.0, 1.0, or -1.0. For instance, if kδx=0 then r=1 and this mode is neutrally stable, neither increasing nor decreasing with increasing n (i.e., \(r^n=1\)). When kδx=π the value of \(r^2\) is \((1-4\nu\delta t/\delta x^2)^2\), and this implies the stability condition

(9) \(\displaystyle \frac{2\nu \delta t}{\delta {{x}^{2}}}\le 1.\)

Finally, when kδx=π/2 the stability condition on \(r^2\) becomes

(10) \(\displaystyle {{r}^{2}}={{\left( 1-\frac{2\nu \delta t}{\delta {{x}^{2}}} \right)}^{2}}+{{\left( \frac{c\delta t}{\delta x} \right)}^{2}}\le 1.\)

Since the left side of the inequality is a sum of squares, neither term can exceed 1.0 without the left side exceeding 1.0; thus there are two necessary conditions for stability,

(11) \(\displaystyle \frac{c\delta t}{\delta x}\le 1\) and \(\displaystyle \frac{2\nu \delta t}{\delta {{x}^{2}}}\le 2.\)

The first of these is a new stability condition, while the second is superseded by the more restrictive Eq. 9. In addition to these conditions, Eq. 10 admits one more condition, which can be obtained by expanding the first term and rearranging the inequality,

(12) \(\displaystyle \frac{{{c}^{2}}\delta t}{2\nu }\le 2-\frac{2\nu \delta t}{\delta {{x}^{2}}}.\)

The minimum value on the right side, according to Eq. 9, is 1.0, which finally leads to a condition for the minimum size of diffusion ν. In total, there are three stability conditions to be satisfied,

(13) \(\displaystyle \nu \ge \frac{{{c}^{2}}\delta t}{2},\) \(\displaystyle \frac{c\delta t}{\delta x}\le 1,\) \(\displaystyle \frac{2\nu \delta t}{\delta {{x}^{2}}}\le 1.\)

When these three conditions are satisfied no Fourier modes will grow as n increases and so no unbounded instability develops. The first condition, in particular, is interesting because it says there will be unstable solutions for any non-zero time step unless there is a sufficient amount of diffusion. In contrast, the remaining conditions are simply restrictions on the maximum time step size.
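The three conditions can be checked programmatically and cross-checked against a brute-force scan of Eq. 8. This is a sketch assuming NumPy; the function names and parameter values are illustrative:

```python
import numpy as np

def satisfies_eq13(c, nu, dt, dx):
    """True when all three stability conditions of Eq. 13 hold."""
    return (nu >= c**2 * dt / 2.0
            and c * dt / dx <= 1.0
            and 2.0 * nu * dt / dx**2 <= 1.0)

def max_r2(c, nu, dt, dx, n=2001):
    """Brute-force maximum of r^2 (Eq. 8) over all resolvable phases."""
    k_dx = np.linspace(0.0, np.pi, n)
    real = 1.0 - 2.0 * nu * dt / dx**2 * (1.0 - np.cos(k_dx))
    imag = c * dt / dx * np.sin(k_dx)
    return float(np.max(real**2 + imag**2))

# With enough diffusion every mode stays bounded; with none, some mode grows.
stable_case = max_r2(c=0.5, nu=0.05, dt=0.05, dx=0.1)   # Eq. 13 satisfied
no_diffusion = max_r2(c=0.5, nu=0.0, dt=0.05, dx=0.1)   # violates nu >= c^2*dt/2
```

The second case illustrates the point made above: with zero diffusion the scheme is unstable for any non-zero time step, since some phase always has \(r^2>1\).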

At this point it’s worthwhile observing that a violation of the first condition in Eq. 13 leads to a solution with purely exponential growth, while violation of either of the other two conditions leads to a solution that not only grows exponentially but also changes sign with each increment in time. The reason for this behavior, and other useful information about finite-difference equations, is explained in the second CFD-101 article on stability, Heuristic Analysis.

In summary, the von Neumann type of Fourier analysis of finite-difference equations is quite useful provided the equation(s) are linear and have constant coefficients within a set of overlapping regions. For more general situations other methods must be sought to analyze computational stability.

## End Note

Readers who may be a bit rusty with exponentials and imaginary numbers may find the following set of identities useful for performing a von Neumann-type analysis:

\(\displaystyle \sin \theta =\frac{{{e}^{i\theta }}-{{e}^{-i\theta }}}{2i},\) \(\displaystyle \cos \theta =\frac{{{e}^{i\theta }}+{{e}^{-i\theta }}}{2}\)

\(\displaystyle \sin \theta =2\sin \left( \frac{\theta }{2} \right)\cos \left( \frac{\theta }{2} \right),\) \(\displaystyle \cos \theta ={{\cos }^{2}}\left( \frac{\theta }{2} \right)-{{\sin }^{2}}\left( \frac{\theta }{2} \right)\)

\(\displaystyle {{\sin }^{2}}\left( \frac{\theta }{2} \right)=\frac{1-\cos \theta }{2},\) \(\displaystyle {{\cos }^{2}}\left( \frac{\theta }{2} \right)=\frac{1+\cos \theta }{2}\)

## References

- G.G. O’Brien, M.A. Hyman and S. Kaplan, “A Study of the Numerical Solution of Partial Differential Equations,” J. Math. Phys. 29, 223 (1951).
- J. von Neumann, Collected Works, IV, Macmillan Co., New York, NY (1963).