It is well known that explicit (e.g., forward-in-time) finite-difference approximations used to solve time-evolution equations are subject to limits on the size of the time increment. In this type of approximation the values of all dependent variables are advanced in time through a succession of small time intervals. The variable values at all steps up to and including step n are used to estimate the rates-of-change of the variables, which in turn makes it possible to estimate the dependent variable values at step n+1, corresponding to a small time δt later: t_{n+1} = t_n + δt.
The need for small δt in this type of approximation arises because the rate-of-change of a dependent variable is usually evaluated in terms of differences between that variable and the values of its immediate neighbors in space. With an increasing time-step size, a variable would be expected to be influenced not just by its immediate neighbors, but by values located increasingly farther away. In this sense, then, there must be a limit on the size of time advancement δt to ensure accuracy. In practice, this limit is also necessary to ensure computational stability.
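To make the restriction concrete, here is a minimal Python sketch of one explicit update of a one-dimensional heat (diffusion) equation, together with the classic stability bound δt ≤ δx²/(2ν) for that scheme. The function names and values are illustrative, not from the original:

```python
def explicit_step(Q, nu, dt, dx):
    """One explicit (forward-in-time) update of dQ/dt = nu * d2Q/dx2.

    Each new interior value depends only on a point and its immediate
    neighbors at the current step n; boundary values stay fixed.
    """
    Qn = Q[:]  # values at step n
    for j in range(1, len(Q) - 1):
        Q[j] = Qn[j] + nu * dt / dx**2 * (Qn[j+1] - 2*Qn[j] + Qn[j-1])
    return Q

def max_stable_dt(nu, dx):
    """Classic explicit stability limit for this scheme: dt <= dx**2/(2*nu)."""
    return dx**2 / (2.0 * nu)
```

Exceeding the bound returned by `max_stable_dt` lets information (and errors) "jump" farther than one cell per step, and repeated steps then amplify rather than damp disturbances.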
Implicit Versus Explicit Methods
While explicit finite-difference equations are simple to use, because they offer a straightforward prescription for advancing a solution in time, their requirement for limiting the time-step size δt to ensure computational stability can lead to long computation times. To overcome this limitation, it may be desirable to resort to an implicit finite-difference method that eliminates the troublesome time-step restriction. The basic idea is to include in the approximation of the rate-of-change of a variable the values of variables at the advanced time level n+1. Since the level n+1 value that one is trying to compute now depends on those same values, the difference equations are said to be implicit.
Implicitness makes solving the equations for time evolution more difficult. It is often necessary, for example, to resort to an iterative solution process. Sometimes the iteration scheme may resemble an explicit time advancement (e.g., using sub-time steps), but if it works well it trades away the time accuracy of an explicit method, rapidly damping transients so that a converged solution at the advanced time step n+1 is reached in a relatively small number of iterations.
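As a sketch of such an iterative process, a fully implicit update of the one-dimensional heat equation can be solved by repeated Jacobi-style sweeps that superficially resemble explicit sub-steps. This is only an illustration under assumed names; the original does not prescribe a particular iteration:

```python
def implicit_step_jacobi(Qn, nu, dt, dx, iters=200):
    """Solve a fully implicit heat-equation update by Jacobi iteration.

    Each unknown Q[j] at level n+1 appears on both sides of its own
    difference equation, so we sweep repeatedly until the values settle.
    Boundary values Qn[0] and Qn[-1] are held fixed.
    """
    r = nu * dt / dx**2
    Q = Qn[:]  # initial guess: the level-n values
    for _ in range(iters):
        Qold = Q[:]
        for j in range(1, len(Q) - 1):
            # Implicit equation rearranged to isolate Q[j]:
            # Q[j]*(1 + 2r) = Qn[j] + r*(Q[j+1] + Q[j-1])
            Q[j] = (Qn[j] + r * (Qold[j+1] + Qold[j-1])) / (1 + 2*r)
    return Q
```

Each sweep damps the remaining error; convergence is assured here because the implicit system is diagonally dominant, so a modest number of iterations reaches the n+1 solution.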
Simple Test for Stability
There is a very simple technique that can be used to determine if an implicit finite-difference approximation is unconditionally stable. That is, whether a time step of arbitrarily large size can be used without producing the kind of nonsensical results associated with a numerical instability. The procedure is to divide the difference equation by the largest power of δt appearing in it and then take the limit as δt goes to infinity. Most terms in the equation will vanish. If what remains contains no n+1 value of the variable whose solution is sought, then there must be some limit on δt, and the equation cannot be unconditionally stable.
On the other hand, if the expression remaining after δt passes to infinity can be solved for the n+1 value of the variable, this implies unconditional stability. Of course, the value obtained in the limit of infinite δt may not be realistic. For instance, a negative temperature may satisfy an implicit difference equation in this limit, but it is not a good solution because negative temperatures make no physical sense.
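The test can even be carried out symbolically. The sketch below uses the sympy library and anticipates the θ-weighted scheme of the next section; all symbol names are my own illustration:

```python
import sympy as sp

dt, dx, nu, theta = sp.symbols('dt dx nu theta', positive=True)
Qn_m, Qn_c, Qn_p = sp.symbols('Q_jm1_n Q_j_n Q_jp1_n')        # level-n values
Qp_m, Qp_c, Qp_p = sp.symbols('Q_jm1_np1 Q_j_np1 Q_jp1_np1')  # level-(n+1) values

# Theta-weighted heat-equation scheme written as (LHS - RHS) = 0, already
# divided by the largest power of dt that appears (dt itself):
residual = (Qp_c - Qn_c) / dt - nu / dx**2 * (
    theta * (Qp_p - 2*Qp_c + Qp_m) + (1 - theta) * (Qn_p - 2*Qn_c + Qn_m))

# The limiting-dt test: let dt -> infinity and inspect what survives.
surviving = sp.limit(residual, dt, sp.oo)

# theta = 0 (purely explicit): no level-(n+1) value survives, so the result
# cannot be solved for Q_j_np1 and the scheme is not unconditionally stable.
explicit_case = surviving.subs(theta, 0)
```

For any non-zero θ the surviving expression still contains the sought level-(n+1) value, so it can be solved for that value, which is the signature of unconditional stability.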
Limiting δt Example
As a simple example of the limiting δt test, consider a one-dimensional heat equation for temperature Q with diffusion coefficient ν:

∂Q/∂t = ν ∂²Q/∂x²
A combined explicit/implicit difference approximation on a uniform grid of elements of length δx is:

(Q_j^{n+1} − Q_j^n)/δt = (ν/δx²) [ θ (Q_{j+1}^{n+1} − 2Q_j^{n+1} + Q_{j−1}^{n+1}) + (1 − θ) (Q_{j+1}^n − 2Q_j^n + Q_{j−1}^n) ]
where θ weights the explicit and implicit mix: θ=0 gives a purely explicit approximation, while θ=1 gives a fully implicit one. The value θ=1/2 gives an approximation that is second-order accurate in both time and space.
Time-step Limit Test
If δt goes to infinity, the left-hand side of the equation vanishes. The right-hand side gives an expression for Q^{n+1} only if θ is non-zero, i.e., only when there is some implicitness in the approximation. This tells us that a simple explicit approximation must have a limiting δt and that it is not unconditionally stable. The implicit approximation, on the other hand, always gives an answer for Q^{n+1}, so it is unconditionally stable. But does it give a good answer?
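A quick numerical check of this conclusion follows; the fixed-point sweep and all parameter values are my own illustration, not part of the original analysis:

```python
def step(Qn, r, theta, sweeps=400):
    """One theta-weighted step of the 1-D heat equation, with r = nu*dt/dx**2.

    For theta = 0 this reduces to the explicit update; for theta > 0 the
    implicit part is solved by fixed-point sweeps, which converge here
    because the implicit system is diagonally dominant. Boundaries fixed.
    """
    Q = Qn[:]
    for _ in range(sweeps):
        prev = Q[:]
        for j in range(1, len(Q) - 1):
            expl = (1 - theta) * (Qn[j+1] - 2*Qn[j] + Qn[j-1])
            Q[j] = (Qn[j] + r * (expl + theta * (prev[j+1] + prev[j-1]))) / (1 + 2*r*theta)
    return Q

# r = 10 is far above the explicit limit r <= 1/2:
blow_up = step([0.0, 0.0, 1.0, 0.0, 0.0], r=10.0, theta=0.0)   # explicit
bounded = step([0.0, 0.0, 1.0, 0.0, 0.0], r=10.0, theta=1.0)   # implicit
```

The explicit step already throws values far outside the range of the initial data, the first symptom of the instability that grows with repeated steps, while the fully implicit step stays bounded between the boundary values.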
In this simple case, when the left-hand side is zero, we can write an exact solution to the difference equation in terms of the boundary values Q_0 and Q_J:

Q_j^{n+1} = Q_0 + (j/J)(Q_J − Q_0) − ((1 − θ)/θ) [ Q_j^n − Q_0 − (j/J)(Q_J − Q_0) ],  for 0 < j < J.
For any value of θ other than 1, the last term can make Q^{n+1} negative. When θ=1 negative values are not possible, and the result is, in fact, the correct asymptotic answer: a linear temperature profile connecting the boundary values.
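The limiting profile just described can be evaluated directly. In this sketch (my own encoding of the infinite-δt solution, with made-up interior data), θ=1/2 produces a negative value from non-negative data, while θ=1 returns the linear boundary-to-boundary profile exactly:

```python
def infinite_dt_solution(Qn, theta):
    """Level-(n+1) values of the theta scheme in the limit dt -> infinity.

    L[j] is the linear profile between the fixed boundary values; the
    remaining term scales the level-n deviation from that profile by
    -(1 - theta)/theta, so it vanishes when theta = 1.
    """
    J = len(Qn) - 1
    L = [Qn[0] + (j / J) * (Qn[J] - Qn[0]) for j in range(J + 1)]
    return [L[j] - (1 - theta) / theta * (Qn[j] - L[j]) for j in range(J + 1)]

half = infinite_dt_solution([0.0, 0.8, 0.5, 0.9, 1.0], theta=0.5)  # can go negative
full = infinite_dt_solution([0.0, 0.8, 0.5, 0.9, 1.0], theta=1.0)  # linear profile
```

With θ=1/2 the deviation from the linear profile is reflected with full strength, overshooting to a negative temperature; with θ=1 only the linear profile survives.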
The infinite δt limit is a simple trick that not only indicates unconditional stability or lack thereof, but can also give useful information about the quality of a solution at large time steps.