Why is My Simulation Taking So Long to Run?
A natural question from CFD users, in several variations, is: "Why is my simulation taking so long to run?", "What can I do to make the simulation run faster?", or "I ran this simulation last week and it ran much faster. Why is it taking longer this time?" There are many possible explanations for excessively long simulation times, including insufficient memory for the requested mesh, time-step issues, and occasionally a mismatch between what is actually being simulated and what was intended.
This article explores long runtimes caused by insufficient memory for the specified simulation. A 64-bit operating system is assumed, so the only consideration is the amount of memory available, rather than the additional restrictions imposed by 32-bit operating systems.
Does the Simulation Fit into Your Computer’s Memory?
The memory requirements for a given simulation depend primarily on two factors: the number of active cells in the simulation and the number of physical models activated. A rough guideline is that 2 GB of memory are required for each 3.5 million active cells. This guideline is based on an isothermal, laminar simulation with no additional physical models activated. As turbulence, heat transfer, and other models are activated, the memory requirements increase accordingly. If a simulation does not fit into the available memory, the operating system will swap to the hard disk. Since hard drive access is much slower than memory access, the simulation will slow significantly once swapping begins.
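The guideline above can be turned into a quick back-of-the-envelope calculation. The sketch below is illustrative only: the multiplier for additional physical models is a hypothetical parameter you would tune for your own solver, not a measured value.

```python
# Rough memory estimate from the 2 GB per 3.5 million active cells guideline.
GB_PER_CELL = 2.0 / 3.5e6

def estimate_memory_gb(active_cells, model_factor=1.0):
    """Estimate simulation memory in GB.

    model_factor is a hypothetical multiplier (>= 1.0) to pad the
    isothermal, laminar baseline for turbulence, heat transfer, and
    other activated physical models; choose it for your own solver.
    """
    return active_cells * GB_PER_CELL * model_factor

# Example: a 7-million-cell isothermal, laminar case needs roughly 4 GB.
baseline = estimate_memory_gb(7_000_000)
# With extra models active, pad the estimate (factor chosen for illustration).
padded = estimate_memory_gb(7_000_000, model_factor=1.5)
```

Comparing the padded estimate against the machine's installed memory, minus an allowance for the operating system and other applications, gives a first indication of whether the case will fit.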
The amount of memory required to run a simulation must be less than the total memory on the system minus the memory used by other applications. The best results will be obtained when all other applications, such as web browsers and email, are closed. The actual amount of memory a simulation requires is most easily determined by running the simulation and then opening Task Manager (Windows) or running the Top command (Linux).
On Windows, select the Processes tab in Task Manager. This will show the processes currently running and their memory requirements. Select View, Select Columns, and check the checkbox next to Memory – Commit Size to display this information. Commit Size represents the maximum amount of memory an executable will require; the actual amount required may be slightly less, but Commit Size represents the upper bound. Figure 1 shows the processes currently running and the Commit Size for each.
To determine the total amount of memory installed on the system, select the Performance tab in Task Manager. The total is displayed under Physical Memory (MB), Total, as shown in Figure 2.
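On Linux, the same total can be read programmatically. The sketch below uses the POSIX sysconf names SC_PHYS_PAGES and SC_PAGE_SIZE, which are available on Linux but not on all platforms (it will raise an error on Windows, for example); it is a convenience check, not a substitute for the tools described here.

```python
import os

def total_physical_memory_gb():
    """Return total installed physical memory in GB (Linux/POSIX only).

    SC_PHYS_PAGES and SC_PAGE_SIZE are POSIX sysconf variables; this
    raises on platforms where they are unavailable (e.g. Windows).
    """
    pages = os.sysconf("SC_PHYS_PAGES")
    page_size = os.sysconf("SC_PAGE_SIZE")
    return pages * page_size / (1024 ** 3)

print(f"Total physical memory: {total_physical_memory_gb():.1f} GB")
```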
The Commit Size must be less than the Total amount of memory available. How much less it needs to be depends on the number of other processes running on the system. It is quite difficult to know how much memory is actually available to run a simulation. The best indicator of whether a simulation is running in memory and not swapping is the Resource Monitor available on Windows 7. Figure 3 shows the Resource Monitor while a very large simulation is running. The physical memory is 100% utilized and the Hard Faults (page faults) are very high. This would be a clear indication that the simulation is too large for the available memory. The alternatives are to either reduce the size of the computational mesh or to add more memory to the computer.
On Linux, the Top command provides similar information to the Task Manager on Windows. Again, the number of page faults is the primary indicator of whether a simulation is running in memory or swapping to the hard drive. To display the page fault count for each process under Linux, launch Top by typing "top" at a command prompt, then hit the "f" key and type "u" <enter>. This adds a column called "nFLT" to the Top display, indicating the number of page faults that have occurred for a particular executable. The number of page faults should be fairly small (<100) and should not continue to increase as the simulation runs. If nFLT continues to increase while the simulation is running, paging is taking place and the simulation will run slowly.
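The same counters that Top reads come from /proc. A minimal sketch of sampling them directly, assuming Linux and the /proc/<pid>/stat layout documented in proc(5), where the 10th and 12th fields are the minor and major fault counts:

```python
import os

def page_faults(pid):
    """Return (minor, major) page fault counts for a Linux process.

    Reads /proc/<pid>/stat. The process name (field 2) may contain
    spaces, so parsing starts after its closing parenthesis; from there,
    minflt is field 10 and majflt is field 12 of the full stat line.
    """
    with open(f"/proc/{pid}/stat") as f:
        stat = f.read()
    # fields[0] is the process state (field 3 of the stat line)
    fields = stat[stat.rindex(")") + 2:].split()
    return int(fields[7]), int(fields[9])

# Sample the counters for this process; in practice you would point this
# at the solver's PID and sample periodically. A steadily growing major
# fault count while the simulation runs means it is swapping.
minor, major = page_faults(os.getpid())
```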
In general, the 3.5M cells/2GB of memory guideline should be used to estimate the amount of required memory. However, this is only a rough guideline. The best way to ultimately determine if a simulation fits in memory is to run the simulation and watch for page faults.