FLOW-3D/MP: High Performance Computing for CFD Applications
FLOW-3D/MP version 5.0 enables engineers to take advantage of the scaling potential of the software on multi-core, 64-bit clusters. FLOW-3D users with access to Linux HPC clusters can use FLOW-3D/MP to obtain simulation results for their transient, free-surface problems. Version 5.0 is a hybrid parallel version of FLOW-3D v10.1 that uses the Message Passing Interface (MPI) and OpenMP to achieve parallelization. The OpenMP paradigm is based on a shared-memory architecture (fine-grained, or loop-level, parallelism), while MPI is based on a distributed-memory, domain-decomposition paradigm (coarse-grained parallelism). The physical domain is decomposed into blocks, or MPI domains, which are then associated with MPI processes and assigned to nodes on a cluster. Within each node, the MPI processes spawn threads and perform multi-threaded calculations using OpenMP directives. The physical models and numerical methods of FLOW-3D/MP v5.0 are based on FLOW-3D v10.1.
Highlights of FLOW-3D/MP v5.0
- Developed and optimized for the latest multi-core architecture
- Physical models fully synced with FLOW-3D v10.1
- Parallelization based on the Hybrid MPI-OpenMP methodology
- Automatic Decomposition Tool allows directional decomposition
- True Domain Decomposition
- Full compatibility with FLOW-3D v10.1 GUI for post-processing results
Performance Improvements & Benchmark Data
FLOW-3D/MP v5.0 offers users substantial performance improvements over FLOW-3D v10.1, with scaling possible up to 128 cores and beyond. See the FLOW-3D/MP benchmarks page for a performance analysis of FLOW-3D/MP on up to 128 cores for five typical applications: hydraulics, casting, thermal stress evolution, microfluidics, and aerospace.
Contact us to request more information about FLOW-3D/MP v5.0.
High performance clusters offer significant performance gains over typical desktop computers but can be quite complex to configure. There are many choices for operating systems, middleware, interconnects, memory, and storage. Intel Corporation has introduced the Intel Cluster Ready certification program to assure users that hardware and software will work correctly together. Hardware that displays the Intel Cluster Ready logo is assured to run software which is also certified as Cluster Ready.
HPC Advisory Council
Flow Science is a member of the HPC Advisory Council. The HPC Advisory Council’s mission is to bridge the gap between high-performance computing (HPC) use and its potential: to bring the beneficial capabilities of HPC to new users for better research, education, innovation, and product manufacturing; to give users the expertise needed to operate HPC systems; to provide application designers with the tools needed to enable parallel computing; and to strengthen the qualification and integration of HPC system products.
FLOW-3D/MP runs on both workstations and clusters (which may consist of a group of similarly configured Xeon® or Opteron® machines) running Red Hat Enterprise Linux (5 or 6) or SUSE Linux 11, with a network interconnect such as Gigabit Ethernet or InfiniBand and a large shared NFS disk accessible from all nodes in the cluster. FLOW-3D/MP supports Intel® MPI, which is provided as part of the installation. For hardware recommendations, please contact our sales team.