Unstructured Memory Allocation: Allocating solution arrays on parts of the mesh
This article highlights developments to be released in FLOW-3D Version 9.2.
The convenience and simplicity of conventional structured finite-difference meshes come with a drawback: all solution variables are stored in every cell. When fluid is absent from large parts of the mesh, this wastes both memory and CPU time. To alleviate this problem we developed the Unstructured Memory Allocation (UMA) approach, in which solution arrays are allocated on only part of the mesh. This development will be included in the release of FLOW-3D Version 9.2.
- Users can define irregular computational domains without using multiple mesh blocks, i.e., sections of the mesh can be removed
- Cells are divided into active and passive categories
- Combined with multi-block meshing, enhances meshing flexibility and the accuracy of the solution
- Significantly reduces memory use and increases speed in certain types of problems, such as high-pressure die casting
The loop on the left is for the conventional structured finite-difference mesh. It is a triple DO loop that cycles through all cells in the x, y and z coordinate directions. Additional logic inside the loop skips empty cells and fully blocked cells. If fluid is present in only a small sub-region of the mesh, there is a lot of waste in the size of the DO loop, the size of the solution arrays (shown here as ARRAY(ijk)), and the operations inside the loop.
In the UMA approach, the DO loop changes to the one shown on the right. The loop is now a single-index loop through only the cells of interest (active cells). The ARRAY variable is allocated only to cover the sub-region of the mesh, highlighted by the red oval in the image on the right.
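The contrast between the two loops can be sketched as follows. This is a minimal illustration, not FLOW-3D's actual code; the array names and the active-cell bookkeeping are assumptions.

```python
# Structured mesh: every cell stores data; a triple loop visits all of them.
NX, NY, NZ = 8, 8, 8
n_cells = NX * NY * NZ

# Flat solution array over the whole mesh, indexed by ijk = i + NX*(j + NY*k).
array_full = [0.0] * n_cells
is_active = [False] * n_cells   # True only where fluid is present

# Mark a small 2x2x2 corner as active, as if fluid occupied only a
# fraction of the domain.
for k in range(2):
    for j in range(2):
        for i in range(2):
            ijk = i + NX * (j + NY * k)
            is_active[ijk] = True
            array_full[ijk] = 1.0

def sum_structured():
    """Conventional triple DO loop: cycles through ALL cells, skipping
    the empty and fully blocked ones inside the loop body."""
    total = 0.0
    for k in range(NZ):
        for j in range(NY):
            for i in range(NX):
                ijk = i + NX * (j + NY * k)
                if not is_active[ijk]:
                    continue            # wasted iteration
                total += array_full[ijk]
    return total

# UMA approach: allocate data only for the active cells and loop over
# that compact list with a single index.
active_ijk = [ijk for ijk in range(n_cells) if is_active[ijk]]
array_uma = [array_full[ijk] for ijk in active_ijk]   # compact storage

def sum_uma():
    """Single-index loop over active cells only."""
    total = 0.0
    for n in range(len(array_uma)):
        total += array_uma[n]
    return total
```

Here the structured loop executes 512 iterations to touch 8 fluid cells, while the compact loop executes exactly 8, and the compact array occupies 8 storage slots instead of 512.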
The deactivated (passive) cells are replaced with an adiabatic wall-type boundary condition. Cells fully blocked by stationary components are automatically deactivated if there is no thermal conduction within solids and the electric field model is turned off; in that case, no solution data need to be computed or stored in these cells.
This example models a thin stream of fluid on a sloped surface. On the left, four mesh blocks are used to fit the grid to the flow domain and to minimize the number of cells. A single block is used on the right, with cells deactivated both in the blocked region and in the empty space above the fluid (highlighted red). In both cases, the number of active cells is similar, but the use of the single mesh block gives better convergence, accuracy and speed.
For sparse domains like this thin cylindrical cavity, the Unstructured Memory Allocation method can dramatically increase the efficiency of the calculations, both in terms of memory use and CPU time. The red shaded area shows the deactivated section of the mesh.
The user will also be able to deactivate any part of the mesh manually by defining Domain Removing geometry components. Cells that are blocked by these components are deactivated and removed from memory.
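One way to picture manual deactivation is as a point-in-geometry test at mesh generation time. The sketch below is a simplified stand-in, not the actual Domain Removing implementation; the box predicate and all names are assumptions.

```python
# Hypothetical sketch of manual deactivation: cells whose centers fall
# inside a user-defined removal region are flagged passive and excluded
# from the compact solution arrays (2D and unit cell size for brevity).
NX, NY = 10, 10

def cell_center(i, j):
    return (i + 0.5, j + 0.5)

def inside_removal_box(x, y, box=(0.0, 0.0, 5.0, 5.0)):
    """box = (xmin, ymin, xmax, ymax): an illustrative stand-in for a
    Domain Removing geometry component."""
    xmin, ymin, xmax, ymax = box
    return xmin <= x <= xmax and ymin <= y <= ymax

active = []
for j in range(NY):
    for i in range(NX):
        x, y = cell_center(i, j)
        if not inside_removal_box(x, y):
            active.append((i, j))   # only these cells get solution data

n_passive = NX * NY - len(active)
```

Cells covered by the removal region never enter the active list, so no memory is ever allocated for them.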
The new method was tested with respect to its performance in comparison to version 9.1.1. The plot below shows the speedup (or slowdown) as a function of the number of passive cells.
Several types of problems were tested: a gravity casting filling problem (blue line), flow over a weir (light blue line), and an axisymmetric viscous dripping problem with surface tension (red line).
The yellow triangle corresponds to a large high-pressure die casting filling simulation. The speedup is clear when more than half of the cells are inactive. However, if most of the cells are active, we see a 10-20% slowdown, because the UMA method creates some computational overhead associated with the additional work required to find the neighbors of any given cell.
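The source of that overhead can be illustrated as follows. On a structured mesh the +x neighbor of cell ijk is simply ijk + 1, pure index arithmetic; on a compact active-cell list, the neighbor's position must go through an indirection table. This is a sketch under assumed data structures, not FLOW-3D's internals.

```python
# Structured mesh indexing vs. compact-list indexing for neighbor access.
NX, NY, NZ = 4, 4, 4

def ijk(i, j, k):
    return i + NX * (j + NY * k)

# Suppose only the lower half of the mesh (k < 2) is active.
active_ijk = [ijk(i, j, k) for k in range(2) for j in range(4) for i in range(4)]
compact_of = {c: n for n, c in enumerate(active_ijk)}   # ijk -> compact index

# Structured access: the +x neighbor of (1,0,0) by arithmetic alone.
nbr_structured = ijk(1, 0, 0) + 1

def neighbor_px(n):
    """Compact index of the +x neighbor of active cell n, or None if that
    neighbor is passive (treated as an adiabatic wall).  Boundary cells at
    i = NX-1 would need separate handling in a real code."""
    return compact_of.get(active_ijk[n] + 1)   # extra table lookup
```

That one extra lookup per neighbor reference, repeated for every term of every equation, is a plausible account of the 10-20% penalty seen when nearly all cells are active.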
[Plot: Unstructured/structured speed comparison; code speedup compared to version 9.1.1.]
The speedup of the new code is problem-dependent. Casting-type problems requiring very large grids to resolve small or thin regions benefit most from the new approach. The use of the unstructured mesh carries an overhead that results in a 10-20% slowdown for simulations that use the full mesh.
[Plot: Unstructured/structured speed comparison; code speedup compared to version 9.1.1, zoomed.]
This plot shows the same curves as the previous plot, but zooms in on the cases where the UMA method shows a slowdown compared to the standard method. In the worst case, the slowdown is about 20%. The breakeven point is at about 10-35% passive cells, depending on the type of problem.
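The breakeven behavior can be rationalized with a simple cost model. This is an illustrative estimate, not the vendors' analysis: assume UMA adds a fixed relative overhead c per active cell, so its run time relative to the structured code scales as (1 + c)(1 - p), where p is the passive-cell fraction.

```python
def uma_relative_time(p, c):
    """Relative run time of UMA vs. the structured code, assuming a
    per-active-cell overhead factor c and passive-cell fraction p.
    (Illustrative model only.)"""
    return (1.0 + c) * (1.0 - p)

def breakeven_fraction(c):
    """Passive-cell fraction at which UMA matches the structured code:
    solve (1 + c)(1 - p) = 1 for p."""
    return c / (1.0 + c)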
In the Unstructured Memory Allocation approach, the main arrays are allocated only on the parts of the mesh where flow equations are being solved. It adds the ability to define irregular computational domains without the need for multiple mesh blocks, thus simplifying mesh generation and increasing solution accuracy. Sections of the mesh blocked by geometry are deactivated automatically, and the ability to manually deactivate parts of the mesh is also provided. Deactivated (passive) cells are replaced with adiabatic wall-type boundary conditions.