The majority of flows in nature are turbulent. This raises the question: is it necessary to represent turbulence in computational models of flow processes? Unfortunately, there is no simple answer, and the modeler must exercise some engineering judgment. The following remarks cover some things to consider when faced with this question.
Definitions and Orders of Magnitude
The possibility that turbulence may occur is generally measured by the flow Reynolds number:

Re = ρUL/μ

where ρ is fluid density and μ is the dynamic viscosity of the fluid. The parameters L and U are a characteristic length and speed for the flow. Obviously, the choice of L and U is somewhat arbitrary, and there may not be single values that characterize all the important features of an entire flow field. The important point to remember is that Re is meant to measure the relative importance of fluid inertia to viscous forces. When viscous forces are negligible, the Reynolds number is large.
A good choice for L and U is usually one that characterizes the region showing the strongest shear flow, that is, where viscous forces would be expected to have the most influence.
Roughly speaking, a flow with a Reynolds number well above 1000 is probably turbulent, while one with a Reynolds number below 100 is not. The actual value of the critical Reynolds number separating laminar from turbulent flow can vary widely depending on the nature of the surfaces bounding the flow and the magnitude of perturbations in the flow.
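As a quick sanity check, the Reynolds number estimate described above can be sketched in a few lines. The fluid properties and scales below (water moving through a small channel) are illustrative values, not taken from the text:

```python
# Rough Reynolds-number estimate, to judge whether turbulence modeling
# may be needed.  All numerical values below are illustrative assumptions.

def reynolds_number(rho, U, L, mu):
    """Re = rho * U * L / mu -- ratio of fluid inertia to viscous forces."""
    return rho * U * L / mu

# Example: water (rho ~ 1000 kg/m^3, mu ~ 1e-3 Pa*s) at 1 m/s in a
# 0.05 m channel; the channel width is chosen as the strongest-shear scale.
Re = reynolds_number(rho=1000.0, U=1.0, L=0.05, mu=1e-3)
print(Re)  # 50000.0 -- well above ~1000, so the flow is likely turbulent
```

Picking L and U from the region of strongest shear, as the text recommends, is the judgment call here; a different choice can change Re by orders of magnitude.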
In a fully turbulent flow a range of scales exists for the fluctuating velocities, often characterized as collections of different eddy structures. If L is a characteristic macroscopic length scale and l is the diameter of the smallest turbulent eddies, defined as the scale on which viscous effects are dominant, then the ratio of these scales can be shown to be of order L/l ≈ Re^(3/4). This relation follows from the assumption that, in steady state, the smallest eddies must dissipate turbulent energy by converting it into heat.
From the above relation for the range of scales it is easy to see that even for a modest Reynolds number, say Re = 10^4, the range spans three orders of magnitude, L/l = 10^3. In this case, the number of control volumes needed to resolve all the eddies in a three-dimensional computation would be greater than 10^9. Numbers of this size are well beyond current computational capabilities. For this reason, considerable effort has been devoted to the construction of approximate models for turbulence.
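The scale-range and grid-count estimates above can be reproduced directly; this sketch just evaluates L/l ≈ Re^(3/4) and cubes it for a three-dimensional grid:

```python
# Eddy-scale range L/l ~ Re^(3/4) and the resulting 3-D control-volume
# count (L/l)^3 needed to resolve every eddy, mirroring the text's example.

Re = 1.0e4
scale_ratio = Re ** 0.75      # L/l ~ Re^(3/4), roughly 10^3 here
cells = scale_ratio ** 3      # roughly 10^9 control volumes in 3-D
print(scale_ratio, cells)
```

A billion cells for a merely "modest" Reynolds number is the whole argument for approximate turbulence models.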
We cannot describe turbulence modeling in any detail in this short article. Instead, we will simply make some basic observations about the types of models available. Be forewarned, however, that no models exist for general use. Every model must be employed with discretion and its results cautiously treated.
The original turbulence modeler was Osborne Reynolds. Anyone interested in this subject should read his groundbreaking work (Phil. Trans. Royal Soc. London, Series A, Vol.186, p.123, 1895). Reynolds’s insights and approach were both fundamental and practical.
The Pseudo-Fluid Approximation
In a fully turbulent flow it is sometimes possible to define an effective turbulent viscosity, μeff, that roughly approximates the turbulent mixing processes contributing to a diffusion of momentum (and other properties). Thinking of a turbulent flow as a pseudo-fluid having an increased viscosity leads to the observation that the effective Reynolds number for a turbulent flow is generally less than 100:

Re_eff = ρUL/μeff < 100
This observation is particularly useful because it suggests a simple way to approximate some turbulent flows. In particular, when the details of the turbulence are not important, but the general mixing behavior associated with the turbulence is, it is often possible to use an effective turbulent (eddy) viscosity in place of the molecular viscosity. The effective viscosity can often be expressed as

μeff = αρUL

where α is a number between 0.02 and 0.04. This expression works well for the turbulence associated with plane and cylindrical jets entering a stagnant fluid. The effective Reynolds number associated with this model is Re = 1/α, a number between 25 and 50.
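The constant eddy-viscosity relation and its effective Reynolds number can be checked numerically. The flow values below are illustrative assumptions; only α and the formulas come from the text:

```python
# Constant eddy-viscosity sketch: mu_eff = alpha * rho * U * L, with alpha
# between 0.02 and 0.04 as in the text (plane/cylindrical jet turbulence).

def effective_viscosity(alpha, rho, U, L):
    return alpha * rho * U * L

rho, U, L = 1000.0, 1.0, 0.05       # illustrative water-jet values
alpha = 0.04
mu_eff = effective_viscosity(alpha, rho, U, L)
Re_eff = rho * U * L / mu_eff       # reduces to 1/alpha = 25
print(mu_eff, Re_eff)
```

Note that Re_eff depends only on α, not on the flow scales: the pseudo-fluid picture makes every such jet look like the same low-Reynolds-number laminar flow.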
While this model is often adequate for predicting the gross features of a turbulent flow, it may not be suitable for predicting local details. For example, it would predict a parabolic flow (i.e., laminar) profile in a pipe instead of the measured logarithmic profile.
Local Viscosity Model
The next level of complexity beyond a constant eddy viscosity is to compute an effective viscosity that is a function of local conditions. This is the basis of Prandtl’s mixing-length hypothesis where it is assumed that the viscosity is proportional to the local rate of shear. The proportionality constant has the dimensions of a length squared. The square root of this constant is referred to as the “mixing length.”
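A minimal sketch of the mixing-length idea follows. The near-wall choice l_m = κy (with κ ≈ 0.41, the von Kármán constant) is a common assumption for pipe and boundary-layer flows, not something specified in the text; the numerical inputs are illustrative:

```python
# Prandtl mixing-length sketch: kinematic eddy viscosity nu_t = l_m^2 * |du/dy|.
# The linear wall law l_m = kappa * y is an assumed closure for illustration.

KAPPA = 0.41  # von Karman constant (typical assumed value)

def eddy_viscosity_mixing_length(y, dudy, kappa=KAPPA):
    l_m = kappa * y               # mixing length grows linearly off the wall
    return l_m ** 2 * abs(dudy)   # viscosity proportional to local shear rate

# 1 cm from the wall, shear rate 100 1/s (illustrative values)
nu_t = eddy_viscosity_mixing_length(y=0.01, dudy=100.0)
print(nu_t)
```

Because the viscosity here is tied to the local shear rate, this closure reproduces the logarithmic pipe profile, but it has no memory: turbulence cannot be carried from where it was generated to where it acts.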
This model offers an improvement over a simple constant viscosity. For example, it predicts the logarithmic velocity profile in a pipe. However, it is not widely used because it does not account for important transport effects.
Turbulence Transport Models
For practical engineering purposes the most successful computational models have two or more transport equations. A minimum of two equations is desirable because it takes two quantities to characterize the length and time scales of turbulent processes. The use of transport equations to describe these variables allows turbulence creation and destruction processes to have localized rates. For instance, a region of strong shear at the corners of a building may generate strong eddies, while little turbulence is generated in the building’s wake region. The strong mixing observed in the wakes of buildings (or automobiles and airplanes) is caused by the advection of upstream generated eddies into the wake. Without transport mechanisms, turbulence would have to instantly adjust to local conditions, implying unrealistically large creation and destruction rates.
Nearly all transport models invoke one or more gradient assumptions in which a correlation between two fluctuating quantities is approximated by an expression proportional to the gradient of one of the terms. This captures the diffusion-like character of turbulent mixing associated with many small eddy structures, but such approximations can lead to errors when there is significant transport by large eddy structures.
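The two-equation idea can be illustrated with the standard k-ε closure, a common example of the class described above (the text does not name a specific model). There, the eddy viscosity is built from the turbulent kinetic energy k and its dissipation rate ε; C_μ = 0.09 is the conventional model constant, and the k, ε values below are illustrative:

```python
# Two-equation (k-epsilon) eddy viscosity: nu_t = C_mu * k^2 / eps.
# k sets the velocity scale, k/eps the time scale -- the two transported
# quantities needed to characterize the turbulence, as the text notes.

C_MU = 0.09  # standard k-epsilon model constant

def eddy_viscosity_k_epsilon(k, eps, c_mu=C_MU):
    return c_mu * k * k / eps

nu_t = eddy_viscosity_k_epsilon(k=0.5, eps=10.0)  # illustrative values
print(nu_t)
```

Because k and ε obey their own transport equations, eddies generated at a building's corners can be advected downstream and raise nu_t in the wake, exactly the behavior the local models above cannot capture.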
Large Eddy Simulation
Most models of turbulence are designed to approximate a smoothed out or time-averaged effect of turbulence. An exception is the Large Eddy Simulation model (or Subgrid Scale model). The idea behind this model is that computations should be directly capable of modeling all the fluctuating details of a turbulent flow except for those too small to be resolved by the grid. The unresolved eddies are then treated by approximating their effect using a local eddy viscosity. Generally, this eddy viscosity is made proportional to the local grid size and some measure of the local flow velocity, such as the magnitude of the rate of strain.
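One common realization of such a subgrid eddy viscosity is the Smagorinsky form, sketched below; the text describes the general idea but does not name this model, and the constant C_s ≈ 0.17 and the inputs are typical assumed values:

```python
# Subgrid-scale eddy viscosity in the spirit described above (Smagorinsky
# form): nu_sgs = (C_s * Delta)^2 * |S|, proportional to the local grid
# size Delta and the strain-rate magnitude |S|.  Values are illustrative.

C_S = 0.17  # typical Smagorinsky constant (assumed)

def subgrid_viscosity(delta, strain_mag, c_s=C_S):
    return (c_s * delta) ** 2 * strain_mag

nu_sgs = subgrid_viscosity(delta=0.01, strain_mag=50.0)  # 1 cm grid cell
print(nu_sgs)
```

Since nu_sgs shrinks with the square of the grid size, refining the mesh hands more of the eddy spectrum back to the resolved computation, which is the defining feature of the approach.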
Such an approach might be expected to give good results if the unresolved scales are small enough, for example, in the viscous sub-range. Unfortunately, this is still an uncomfortably small size. When these models are used with a minimum scale size that is above the viscous sub-range, they are then referred to as Coherent Structure Capturing models.
The advantage of these more realistic models is that they provide information not only about the average effects of turbulence but also about the magnitude of fluctuations. But, this advantage is also a disadvantage, because averages must actually be computed over many fluctuations, and some means must be provided to introduce meaningful fluctuations at the start of a computation and at boundaries where flow enters the computational region.
Turbulence from an Engineering Perspective
We have seen that it is probably not reasonable to attempt to compute all the details of a turbulent flow. Furthermore, from the perspective of most applications, it's not likely that we would be interested in the local details of individual fluctuations. The questions, then, are these: how should we deal with turbulence, when should we employ a turbulence model, and how complex should that model be?
Experimental observations suggest that many flows become independent of Reynolds number once a certain minimum value is exceeded. If this were not so, wind tunnels, wave tanks, and other experimental tools would not be as useful as they are. One of the principal effects of a Reynolds number change is to relocate flow separation points. In laboratory experiments this fact sometimes requires the use of trip wires or other devices to induce separation at desired locations. A similar treatment may be used in a numerical simulation.
Most often a simulation is done to determine the dominant flow patterns that develop in some specified situation. These patterns consist of the mean flow and the largest eddy structures containing the majority of the kinetic energy of the flow. The details of how this energy is removed from the larger eddies and dissipated into heat by the smallest eddies may not be important. In such cases the dissipation mechanisms inherent in numerical methods may alone be sufficient to produce reasonable results. In other cases it is possible to supply additional dissipation with a simple turbulence model such as a constant eddy viscosity or a mixing length assumption.
Turbulence transport equations require more CPU resources and should only be used when there are strong, localized sources of turbulence and when that turbulence is likely to be advected into other important regions of the flow.
When there is reason to seriously question the results of a computation, it is always desirable to seek experimental confirmation.
An excellent introduction to fluid turbulence can be found in the book Elementary Mechanics of Fluids by Hunter Rouse, Dover Publications, Inc., New York (1978).