TransAT© Single- & Multiphase Flow: Architecture & Structure
The CFD/CMFD code TransAT© developed at ASCOMP Switzerland is a multiscale, multi-physics, finite-volume code based on solving the multi-fluid Navier-Stokes equations. The code uses structured meshes, though multiple blocks can be set together. An MPI-based parallel algorithm is used in connection with multi-blocking. The grid arrangement is collocated, and the code can thus more easily handle skewed curvilinear grids. The solver is pressure based (projection type), corrected using the Karki-Patankar technique for low-Mach-number compressible flows, up to Mach = 2.5. High-order time-marching and convection schemes can be employed, up to third-order monotone schemes in space. Mesh generation can be achieved either using traditional Boundary Fitted Coordinates (BFC), with the help of external meshing tools like GRIDGEN, or using the IST/BMR methods described next, for which the TransAT Suite has a dedicated grid generator: TransAT-MESH. Multiphase flows are tackled using interface tracking techniques for both laminar and turbulent flows, where the flow is assumed to evolve as one fluid having variable material properties that vary according to the colour function. Specifically, the Level-Set approach, its phase-field variant and the Volume-of-Fluid method can be employed. Static or dynamic contact-angle treatment is also possible. Turbulent multiphase flows can be tackled within both the RANS and LES (and V-LES) frameworks, with specific interfacial treatments. This technique is specific to TransAT and is known as LEIS, short for Large Eddy & Interface Simulation. The main elements of the TransAT architecture and structure are listed in the Tables below.
The Immersed Surfaces Technique (IST)
Immersed boundary (IB) techniques were developed in the late 90s for flows interacting with solid boundaries, under various formulations (Mittal & Iaccarino, 2005). The best-known formulation employs a mixture of Eulerian and Lagrangian variables, where the solid boundary is represented by discrete Lagrangian markers embedded in, and exerting forces on, the Eulerian fluid domain. The interactions between the markers and the fluid variables are linked through a simple discretized delta function. IB methods are all based on direct momentum forcing (a penalty approach) on the Eulerian grid. The forcing should be performed such that the no-slip boundary condition is satisfied on the immersed boundary at the intermediate time step. This procedure thus involves solving a banded linear system of equations whose unknowns are the boundary forces on the Lagrangian markers; the set of unknowns is thus one dimension lower than the fluid variables.
The Immersed Surfaces Technology (IST) has been developed by ASCOMP (to the best of our knowledge), although similar approaches have been developed in parallel. The underpinning idea is inspired by interface tracking techniques for two-phase flows (Level Sets), where free surfaces are described by a hyperbolic convection equation advecting the phase colour function. In the IST the solid is described as a second 'phase', with its own thermo-mechanical properties. The technique differs substantially from the Immersed Boundaries method of Peskin, in that the jump condition at the solid surface is accounted for implicitly, not via direct momentum forcing (the penalty approach). It has the major advantage of solving conjugate heat transfer problems, in that conduction inside the body is directly linked to the external fluid convection. The solid is first immersed into a cubical domain covered by a Cartesian mesh, and is defined by its external boundaries using the solid level set function. As in fluid-fluid flows, this function represents the distance to the wall surface; it is zero at the surface, negative in the fluid and positive in the solid. The treatment of viscous shear at the solid surfaces is handled much the same way as in other CFD codes, where the wall normal vector needed to estimate the wall shear is obtained from the gradient of the solid level set function.
Block-Mesh Refinement (BMR)
The BMR technique was developed in the TransAT code to better resolve the boundary-layer zone when the IST technique discussed above is used. In BMR, refined sub-blocks are automatically generated around solid surfaces, with dimensions made dependent on the Reynolds number (based on the boundary-layer thickness). An unlimited number of sub-blocks of various refinement levels can be generated, with connectivity between the blocks matching up to 1-to-8 cells. This method can save up to 75% of the grid cells in 3D, since it avoids clustering grid cells where they are unnecessary.
The combined IST/BMR technique has various major advantages over traditional methods; these are enumerated below (including ongoing developments):
- Rapid gridding of complex geometries & set-up of flow boundary conditions
- Suitable for moving bodies, fluid-structure interactions
- Suitable for conjugate heat transfer
- Retain high-order scheme accuracy (Cartesian grids)
- No need to change discretization
- Local Defect Correction (LDC) – fully conservative
- Automated refinement per block: where needed
- Scalable parallelization; save up to 70% cells in 3D
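The block-flagging step at the heart of the BMR idea can be sketched as follows. This is a hypothetical illustration: block sizes, the 1-to-8 connectivity, the solver coupling and the Reynolds-dependent thickness `delta` are all assumptions, not TransAT internals.

```python
import numpy as np

def flag_refined_blocks(phi, nblk, delta):
    """Split an n x n level-set field into nblk x nblk coarse blocks and
    flag the blocks lying within a distance delta of the solid surface
    (min |phi| < delta); flagged blocks would receive a finer grid."""
    n = phi.shape[0]
    s = n // nblk
    flags = np.zeros((nblk, nblk), dtype=bool)
    for i in range(nblk):
        for j in range(nblk):
            blk = phi[i * s:(i + 1) * s, j * s:(j + 1) * s]
            flags[i, j] = np.abs(blk).min() < delta   # block near the wall
    return flags

n = 64
x, y = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n))
phi = 0.2 - np.sqrt((x - 0.5) ** 2 + (y - 0.5) ** 2)   # circular solid
flags = flag_refined_blocks(phi, nblk=8, delta=0.05)
```

Only the blocks cut by (or close to) the surface are flagged, which is how the method avoids clustering cells where they are unnecessary.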
The Adaptive Mesh Refinement Technique (AMR)
Adaptive mesh refinement (AMR), as implemented in TransAT, is used to refine grids dynamically around evolving fluid-fluid interfaces. The method discretizes the continuous domain of interest into a grid of many smaller individual elements, automatically and dynamically clustered around the moving interfaces. The general advantages of dynamic gridding schemes are increased computational and storage savings and complete control of the grid resolution. In TransAT, AMR for interface tracking yields better resolution of the interface topology, curvature determination, contact-angle treatment, and mass conservation within the level set approach. The technique is available in 2D only; the full 3D version is expected for 2011.
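A conceptual sketch of the dynamic-refinement criterion (assumed here for illustration, not taken from TransAT's internals): cells within a few cell widths of the moving interface, i.e. with small |phi|, are flagged for refinement, and the flags are recomputed as the interface is advected.

```python
import numpy as np

def refine_flags(phi, dx, nbuffer=3):
    # Flag cells within nbuffer cell widths of the zero level (the interface).
    return np.abs(phi) < nbuffer * dx

nx, dx = 100, 0.01
x = (np.arange(nx) + 0.5) * dx
phi = x - 0.30                   # interface initially at x = 0.30
flags0 = refine_flags(phi, dx)
phi = phi - 1.0 * 0.2            # advect at u = 1 for t = 0.2 -> interface at x = 0.50
flags1 = refine_flags(phi, dx)
```

The flagged band follows the interface as it moves, which is what keeps the fine cells clustered where the topology, curvature and mass-conservation errors are concentrated.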
The Sharp Solid Interface Treatment (SSIT)
The IST algorithm described in Section 1.2 may have a drawback in some cases, in that objects have to be resolved on the grid, with a minimum of 2-3 grid cells completely inside the solid to fully block the flow. If there are fewer cells inside the solid, the flow feels the presence of the solid only partially. To resolve this issue, TransAT has recently been upgraded with the so-called sharp solid interface formulation, which modifies the original IST algorithm by decoupling the equations at the surfaces based on surface normals derived from the gradients of the solid level set function. With this, even an infinitely thin solid exerts the full blocking effect on the oncoming flow. The mathematical background of the approach is beyond the scope of this document; below we present a couple of illustrative examples.
The next practical example involves the flow past a ventilation system containing very thin plates. Results are shown in Fig. 3 below. The formulation is capable of treating such complicated situations, for which traditional grid generation techniques would fail.
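A minimal 1D sketch of the sharp-treatment idea (our illustration, not the actual TransAT algorithm): fluxes are decoupled across any cell face where a signed surface function changes sign, so even a zero-thickness plate fully blocks the transport between its two sides.

```python
import numpy as np

def diffuse_with_thin_solid(T, phi, alpha, nsteps):
    """Explicit 1D diffusion with alpha = D*dt/dx^2; faces where phi
    changes sign are treated as cut by the solid and carry zero flux."""
    T = T.copy()
    cut = np.sign(phi[1:]) != np.sign(phi[:-1])   # faces cut by the surface
    for _ in range(nsteps):
        F = alpha * (T[1:] - T[:-1])              # diffusive face fluxes
        F[cut] = 0.0                              # sharp blocking at the plate
        T[:-1] += F                               # conservative update
        T[1:] -= F
    return T

n = 20
T0 = np.zeros(n); T0[:n // 2] = 1.0               # hot left of the plate
phi = np.linspace(-1.0, 1.0, n)                   # sign change marks the plate
T_blocked = diffuse_with_thin_solid(T0, phi, 0.4, 200)
T_open = diffuse_with_thin_solid(T0, -np.ones(n), 0.4, 200)   # no plate
```

With the cut face blocked the two sides never communicate; without it, diffusion mixes the field across the whole domain.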
TransAT has the unique feature of running either explicitly or implicitly, depending on the test problem considered, for both single and multiphase flows and on the turbulence models employed. In implicit mode, the user can resort to steady-state or transient time-marching schemes; a pseudo-transient approach exists as well, used in particular for pollutant dispersion. TransAT uses high-order schemes for the convection and diffusion terms in the discretized transport equations.
Briefly, the spatial schemes employed for convection in all equations are second- and third-order accurate. Time marching uses 1st- to 2nd-order schemes in the implicit solver and up to 5th-order (Runge-Kutta) schemes in the explicit version of the code.
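As a generic illustration of the explicit Runge-Kutta time marching mentioned above (the classical 4th-order scheme, not necessarily TransAT's exact variant), a single step for du/dt = rhs(u) reads:

```python
def rk4_step(u, rhs, dt):
    # Classical 4th-order Runge-Kutta: four rhs evaluations per step.
    k1 = rhs(u)
    k2 = rhs(u + 0.5 * dt * k1)
    k3 = rhs(u + 0.5 * dt * k2)
    k4 = rhs(u + dt * k3)
    return u + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# usage: integrate the decay equation du/dt = -u from u(0) = 1 to t = 1,
# where the exact solution is exp(-1)
u, dt = 1.0, 0.1
for _ in range(10):
    u = rk4_step(u, lambda v: -v, dt)
```

Even with this coarse step the scheme is accurate to a few parts in ten million, which is why high-order time marching pays off in the explicit solver.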
The TransAT software is parallelized using two approaches: TransAT-SB uses the OpenMP protocol on shared-memory machines, including PCs and laptops, whereas TransAT-MB uses MPI and domain decomposition on distributed-memory supercomputer clusters. The code uses structured multi-block meshes, with each grid having two layers of ghost cells where information from neighbouring blocks is received. An MPI (Message Passing Interface) based parallel algorithm is used in connection with multi-blocking. The code can run in explicit as well as implicit mode; both variants have been tested for scaling, for single and multiphase flow. The first scaling test case is a single-phase turbulent flow on a Cartesian grid. The second is a multiphase turbulent flow using the IST approach discussed previously (also on a Cartesian mesh).
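The two-layer ghost-cell idea can be sketched in one dimension as follows. This is a serial stand-in for the MPI exchange (in the real code each block lives on a different rank and the copies are messages): after an exchange, each block's ghost cells hold the neighbour's outermost interior cells.

```python
import numpy as np

NG = 2   # ghost layers per side, as in the text

def exchange(left, right):
    """1D halo exchange between two adjacent blocks, each stored with NG
    ghost cells on both ends."""
    left[-NG:] = right[NG:2 * NG]     # left's right ghosts <- right's first interior cells
    right[:NG] = left[-2 * NG:-NG]    # right's left ghosts <- left's last interior cells

nint = 6
left = np.arange(nint + 2 * NG, dtype=float)          # interior: indices 2..7
right = 100.0 + np.arange(nint + 2 * NG, dtype=float)
exchange(left, right)
```

Two ghost layers are exactly what wide (e.g. third-order) convection stencils need at block boundaries without extra communication.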
Single Phase Flow
The test problem selected for this MPI scaling exercise is the turbulent flow in a channel heated from both lower and upper sides at Reτ=200, see Fig. 4. The problem was simulated using LES (explicit scheme); the detailed setup is described by Lakehal et al. (2010). The problem was simulated in 3D on the ORNL supercomputer Jaguar, using various grids and memory allocation as described in Tables 1 & 2.
As shown in Table 1, the case ran well up to 960 processors, with an allocation of 10,950 cells per processor, showing perfect scaling and 100-138% efficiency. Because MPI messages become smaller as the number of processors increases, the MPI parameter for small messages (MPI_MAX_SHORT_MSG_SIZE) on the ORNL Jaguar cluster had to be changed, which slowed down the simulation: reducing this parameter to 4,000 decreases the efficiency from 104% to 84% (these numbers are coloured in red in Table 1). The reason for efficiencies above 100% is that the work per time step does not remain constant when the blocking changes. In summary, the efficiency is very good up to 1,920 processors, or 5,475 cells per processor (Fig. 5). The conclusion to be drawn here is not 'TransAT scales up to 1,920 processors', but rather 'TransAT-MB using explicit schemes has the potential to scale very well in parallel for very large grids and loadings'. The result could be different for larger grids with maximum loading.
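The strong-scaling efficiency quoted above follows the usual definition (our formulation; values above 1.0 indicate super-linear speed-up, e.g. from changed work per step or cache effects):

```python
def efficiency(t_ref, n_ref, t_n, n_n):
    """Strong-scaling efficiency E = (t_ref * n_ref) / (t_n * n_n):
    1.0 means perfect scaling relative to the reference run."""
    return (t_ref * n_ref) / (t_n * n_n)

# hypothetical timings (NOT the Table 1 numbers): doubling the processor
# count from 480 to 960 while the wall time halves gives E = 1.0
print(efficiency(100.0, 480, 50.0, 960))   # -> 1.0
```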
The test selected for this scaling exercise is the turbulent stratified flow in a pipe of diameter 0.5 m and length 5 m (Lakehal et al., 2011). The grid is of IST type; LEIS (Lakehal, 2010) was employed, in which LES is coupled with the level set method to track the interface. The pipe contains water at a holdup of hL/D = 0.14, injected at a velocity of 0.2 m/s. Air is injected at a bulk velocity of 20 m/s (Re_G = 1.6×10^5). Flow stratification and surface entrainment are shown in Fig. 6.
The case was simulated on ORNL Jaguar, using various grids and memory allocations as described in Tables 2 & 3. In contrast to the previous case, the implicit variant of the code was employed here. The single-phase variant of the problem was simulated as well. It is important to note that the new OS of Jaguar no longer requires adjusting the MPI parameter; the algorithms in TransAT have been modified to cope with this change.
The tables indicate that both the single- and multiphase flow simulations scale very well, up to 1,024 processors, for an allocation of 13,824 cells per processor. The scaling efficiency reached is more than 100% for the multiphase flow. Increasing to 2,048 processors with a lower loading per processor (6,912 cells) decreases the efficiency to 70% and 60%, respectively, for the two flows. The efficiency is lower than in the explicit case because more communication is needed to solve a linear system per equation. The multiphase case scales better than the single-phase one because more work is performed per time step (level set advection and re-distancing, material-property updates, etc.).
Interface Tracking Schemes
In the VOF context, TransAT solves for the evolution of the liquid volume fraction, distinguishing flow regions containing pure liquid from those containing pure gas. The VOF method does not amount solely to the solution of this evolution equation; it requires accurate algorithms for advecting the volume-fraction function so as to preserve mass conservation. Since this cannot be achieved with conventional finite-difference schemes because of numerical diffusion, the VOF field is first advected, after which the interface location is geometrically reconstructed from these rough data. The 2D & 3D VOF technique implemented in TransAT features two variants of unsplit advection, together with a multidimensional flux calculation using cell-face velocities assigned to cell vertices (Piecewise-Constant Flux Surface Calculation, or PCFSC). The evolution of the interface and the advection of the phase-indicator function are accomplished by reconstructing the interface within each computational cell and computing the volume flux from each cell to its immediate neighbours under the prevailing flow. Use is made of the high-order CVTNA scheme of Liovic et al. (2006, Comp. & Fluids) for interface reconstruction. VOF is generally used for small-scale topology fragmentation, like sprays, but remains expensive in terms of CPU.
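The reconstruct-then-flux idea can be illustrated with a toy 1D geometric advection for u > 0 (our sketch; TransAT's unsplit 2D/3D CVTNA/PCFSC machinery is far more elaborate). The liquid in a partially filled cell is assumed to sit against the upwind face, which makes the flux exact for a single advancing front and conserves mass to round-off:

```python
import numpy as np

def vof_advect_1d(C, cfl, nsteps):
    """1D geometric VOF advection, cfl = u*dt/dx <= 1, flow to the right.
    Liquid packed against the upwind (left) face of each cell."""
    C = C.copy()
    for _ in range(nsteps):
        # geometric liquid flux through each cell's right face: overlap of
        # the liquid segment [0, C] with the swept segment [1-cfl, 1]
        F = np.maximum(0.0, C + cfl - 1.0)
        # conservative update; a full-liquid inflow feeds the left boundary
        C += np.concatenate(([cfl], F[:-1])) - F
    return C

C0 = np.zeros(10); C0[:3] = 1.0; C0[3] = 0.5   # front halfway through cell 3
C = vof_advect_1d(C0, cfl=0.25, nsteps=2)       # front advances by 0.5 cells
```

After two steps the front sits exactly at the face of cell 4, with no smearing: geometric fluxing is what keeps the interface sharp where a plain upwind scheme would diffuse it.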
The level set approach consists in solving a hyperbolic equation to track the interface on a fixed Eulerian grid, using a smooth signed-distance function referring to the shortest distance to the front. Negative values correspond to one of the fluids and positive values to the other; the exact location of the interface corresponds to the zero level. Material properties such as density, viscosity and thermal conductivity are updated in time using this distance function. The advantages of the method are that it dispenses with the interface reconstruction employed in VOF, it can handle merging and fragmentation, and it identifies the exact location of the interface, which helps in treating interfacial physics (e.g. interfacial turbulence decay, interphase mass exchange). Our level set method works on both Cartesian and skewed curvilinear (BFC) grids. It is accurate in that it uses 3rd- to 5th-order WENO schemes for the spatial update and 3rd- and 4th-order Runge-Kutta schemes for time marching. It uses various re-distancing schemes, including fast marching for BFC grids, limiting mass loss to about 0.1%. For practical problems, use is made of the global conservation scheme of Lakehal et al. (2002, IJHFF).
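The material-property update from the distance function can be sketched with the smoothed-Heaviside formulation commonly used with level sets (a generic sketch; the smoothing half-width eps and the exact profile are our assumptions, not TransAT specifics):

```python
import numpy as np

def smoothed_heaviside(phi, eps):
    # 0 in the negative phase, 1 in the positive phase, with a smooth
    # transition over |phi| < eps to keep the discretization stable.
    return np.where(phi < -eps, 0.0,
           np.where(phi > eps, 1.0,
                    0.5 * (1.0 + phi / eps + np.sin(np.pi * phi / eps) / np.pi)))

def update_properties(phi, prop_neg, prop_pos, eps):
    # e.g. density: rho = rho_neg + (rho_pos - rho_neg) * H(phi)
    return prop_neg + (prop_pos - prop_neg) * smoothed_heaviside(phi, eps)

# usage: water (negative phase) / air (positive phase) density field
rho = update_properties(np.array([-1.0, 0.0, 1.0]), 1000.0, 1.0, eps=0.1)
```

Far from the front each cell carries the pure-phase value; on the zero level the property is the arithmetic mean of the two.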
The Phase Field method is applicable to microscale two-phase flows of complex fluids. An order parameter provides a measure of the two phases; it can be the relative mass density, the concentration difference, or even a colour function that distinguishes the phases. The phases are separated by an interfacial region, the diffuse interface, across which the order parameter changes rapidly but smoothly. In the present formulation, the order parameter is an independent thermodynamic variable on which the free energy of the system is based. The Phase Field method solves the convective Cahn-Hilliard equation, and the Navier-Stokes equations are modified by the addition of a continuum forcing term (stresses, sometimes called Korteweg stresses), which replaces the sharp-interface surface tension term by a diffuse-interface one. Compared with the Level-Set method, the Phase Field method requires a larger number of grid points to capture the diffuse interface, which makes it slower. On the other hand, the presence of the diffuse interface means that the spurious currents which develop in the surface tension formulation are virtually non-existent. These parasitic currents can destroy the solution in the sharp-interface limit in low-velocity cases, where they drive the flow instead of the actual fluid velocity.
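A 1D explicit sketch of the Cahn-Hilliard dynamics (convection and the flow coupling omitted; the mobility M, interface width eps and time step are illustrative values, not TransAT defaults): dc/dt = M * Lap(mu) with the chemical potential mu = c^3 - c - eps^2 * Lap(c).

```python
import numpy as np

def lap(f, dx):
    # periodic second difference
    return (np.roll(f, -1) - 2.0 * f + np.roll(f, 1)) / dx ** 2

def cahn_hilliard_step(c, dx, dt, M=1.0, eps=0.05):
    mu = c ** 3 - c - eps ** 2 * lap(c, dx)   # chemical potential
    return c + dt * M * lap(mu, dx)           # conservative update

n = 64; dx = 1.0 / n
dt = 1e-6                                     # below the explicit stability limit
x = (np.arange(n) + 0.5) * dx
c0 = np.tanh((0.25 - np.abs(x - 0.5)) / (np.sqrt(2.0) * 0.05))  # a liquid slab
c = c0.copy()
for _ in range(100):
    c = cahn_hilliard_step(c, dx, dt)
```

Because the update is the Laplacian of a potential, the integral of the order parameter is conserved to round-off; the fourth-order diffusion is also what forces the very small explicit time step, consistent with the method being slower than level sets.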
Advanced Turbulence Schemes/Approaches
The limits of the RANS approach have been reached for single-phase and, in particular, multiphase flow. The fierce competition between key industrial players, in the automotive, land-based power and aerospace sectors in particular, is promoting the development of new modelling strategies that put flow unsteadiness at the forefront of their predictive capabilities. This is the way ASCOMP has decided to go: building an added-value modelling strategy based on LES, complementing RANS. This observation applies in particular to industries in which reactive and multiphase flows control the design; many other industries are searching for similar developments too, namely oil & gas, nuclear-reactor thermal hydraulics, and chemical and process engineering. Together with the constant growth of computer resources in terms of speed and storage, this helps place LES at the front stage of applied CFD. The LES strategy built into TransAT is very stable and highly accurate. It follows the guidelines of the leading literature, be it in terms of wall modelling, grid refinement, unsteady inflow boundary conditions (using synthetic models), or numerical schemes. Our LES works with both explicit (CFL ~0.1) and implicit (CFL up to 1) schemes.
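The document does not specify TransAT's SGS closure, so as a generic illustration we sketch the standard Smagorinsky eddy viscosity, nu_t = (Cs * Delta)^2 * |S| with |S| = sqrt(2 S_ij S_ij), in 2D:

```python
import numpy as np

def smagorinsky_nu_t(u, v, dx, dy, Cs=0.17):
    """Smagorinsky SGS eddy viscosity on a 2D grid (generic sketch; the
    constant Cs and the filter-width definition are conventional choices)."""
    dudy, dudx = np.gradient(u, dy, dx)   # axis 0 = y, axis 1 = x
    dvdy, dvdx = np.gradient(v, dy, dx)
    Sxx, Syy = dudx, dvdy
    Sxy = 0.5 * (dudy + dvdx)
    Smag = np.sqrt(2.0 * (Sxx ** 2 + Syy ** 2 + 2.0 * Sxy ** 2))
    Delta = np.sqrt(dx * dy)              # filter width from the cell size
    return (Cs * Delta) ** 2 * Smag

# usage: pure shear u = y, v = 0 gives |S| = 1 and a uniform nu_t
n = 32; h = 1.0 / (n - 1)
yy, xx = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n), indexing="ij")
nu_t = smagorinsky_nu_t(yy.copy(), np.zeros_like(yy), h, h)
```

Note the grid dependence of the filter width Delta, which is precisely the point of contrast with the V-LES filter discussed next.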
The Very Large-Eddy Simulation (V-LES) approach represents an excellent compromise between efficiency and precision in capturing unsteady turbulence, and may thus be used for industrial problems for which LES remains computationally expensive. This is particularly true of high-Reynolds-number flow conditions. It can also be used for gas-liquid two-phase flows such as pressurized thermal shocks. The method is a blend of U-RANS and LES, in that it resolves only the very large structures, way larger than the grid size, and models all subscales of turbulence using a two-equation model, by reference to RANS. This modelling approach has been shown to capture the three-dimensional flow unsteadiness while saving computational cost compared to LES. The method is computationally efficient (it can be applied using an implicit solver, which permits a higher CFL than with LES: up to 1 vs. ~0.1) and numerically robust. The computational cost decreases with increasing filter width, which, in contrast to LES, does not depend on the grid size but on a characteristic length of the flow.
In TransAT we have borrowed the LES strategy from single-phase flows and coupled it with both VOF and Level Sets (presented above) for small-to-mild Reynolds-number gas-liquid flows. The combination, referred to as LEIS, short for Large Eddy & Interface Simulation (see Lakehal, 2010, Nucl. Eng. Design), allows simulating the large-scale physics of interfacial flows down to the grid-resolved level. The idea consists of grid-filtering each phase separately; the resulting sub-grid scale (SGS) stresses are modelled as if the phases were isolated. Special treatment is necessary at the interface though, taking advantage of the fact that the lighter phase perceives the interface like a deformable wall. The combination of the two approaches brings a notable difference: besides delivering time-dependent interfacial kinematics, and provided the method achieves sufficient resolution of the boundary layers at the interfaces, the need to model the interfacial exchange terms in the two-fluid conservation equations is eliminated. For the time being this is limited to interfacial flows with moderate topology fragmentation. A rigorous near-interface treatment is included, taking into account the interfacial shear on both sides of the interface (Liovic & Lakehal, 2007, JCP).