After the great success of the first COMMODORE workshop in Paris (Website; Meeting report), the second workshop will be organised as the Hamburg COMMODORE Conference, 28-31 January 2020. The Hamburg conference is co-organised by the Collaborative Research Centre TRR 181 "Energy Transfers in Atmosphere and Ocean", the Leibniz Institute for Baltic Sea Research Warnemünde and Inria.
The focus of the COMMODORE workshop is on numerical solution techniques of the partial differential equations that govern the ocean circulation from coastal to large scales. COMMODORE aims to share experience on numerical model development, perspectives on future model developments, and common test-cases to evaluate numerical models.
Specific topics include the consistent coupling between numerics, parameterisations for unresolved processes, and the representation of specific processes.
The workshop aims to establish a more systematic bridge between methods used and developed by the applied math community and the ones used in realistic oceanic models.
Contributions from the atmospheric research community are welcome to foster more systematic communication between oceanic and atmospheric numerical modelers.
Contributions to the consistent coupling of hydrodynamics with biogeochemistry and other Earth system components are also welcome.
For this workshop we particularly encourage contributions on topics related to the ongoing collaborative research center TRR 181 "Energy transfers in Atmosphere and Ocean" funded by the German Research Foundation.
We will present an evaluation of the second version of the unstructured-mesh Finite-volumE Sea ice–Ocean circulation Model (FESOM2.0), with respect to its sensitivity to the linear and nonlinear free-surface formulations of the arbitrary Lagrangian–Eulerian (ALE) vertical coordinate. Further, the hydrographic biases, large-scale circulation, numerical performance and scalability of FESOM2.0 are compared with those of its predecessor, FESOM1.4. Compared to its predecessor, FESOM2.0 shows hydrographic biases of a similar magnitude but a more realistic representation of the Atlantic Meridional Overturning Circulation (AMOC). FESOM2.0 provides clearly defined fluxes and a three times higher throughput in terms of simulated years per day (SYPD) than its predecessor, which makes FESOM2.0 the first mature global unstructured-mesh ocean model with computational efficiency comparable to state-of-the-art structured-mesh ocean models.
Mixing across density surfaces is an essential part of the thermohaline circulation and can control the ocean circulation, the global heat budget and the distribution of nutrients in the ocean. Recently, the vertical mixing library CVMix and the novel energy-consistent vertical Turbulent Kinetic Energy (TKE) mixing parameterisation, in combination with the Internal Wave Dissipation, Energy and Mixing (IDEMIX) model, have been implemented in FESOM2.0. We will show first results regarding the hydrographic biases of the different vertical mixing parameterisations within FESOM2.0.
This talk describes new theoretical and algorithmic developments of the ocean general circulation model ICON-O of the Max Planck Institute for Meteorology and reports on the application of the model in the regime of ultra-high-resolution simulations.
Many regions of the world ocean require very high horizontal resolution to simulate mesoscale eddy activity, due to the relatively small Rossby radius. Recently developed large-scale ocean circulation models formulated on unstructured meshes, such as FESOM2, make it possible to use more flexible meshes with variable resolution. By increasing resolution only in energetically active regions, ocean eddies can be resolved in global setups at a small computational cost. We will report on recent developments of the Finite-volumE Sea ice–Ocean circulation Model, Version 2.0 (FESOM2) and discuss its performance in global coupled and stand-alone eddy-resolving simulations, as well as regional simulations down to the 1 km scale. We will also present an analysis of model performance with a focus on bottlenecks in parallel scalability.
The Energy Exascale Earth System Model (E3SM), developed by the US Department of Energy, was first released in 2018 and has completed a suite of DECK/CMIP6 simulations and subsequent publications. The components of E3SM are all variable-resolution and include the Model for Prediction Across Scales (MPAS) ocean and sea-ice models, which are based on unstructured horizontal meshes using Voronoi tessellations. E3SM version 1 performs well overall, with biases typical of other CMIP-class models, although its simulated Atlantic Meridional Overturning Circulation is weaker than in many CMIP-class models, and its equilibrium climate sensitivity is high at 5.3 K. Here we provide an overview of E3SM progress, results from new regionally refined simulations, and details of our algorithms and continued development of MPAS-Ocean.
Many geophysical phenomena, in particular in the ocean, exhibit a large range of interacting spatial and temporal scales. Often this scale range of several orders of magnitude reaches far beyond the resolution capacities of numerical algorithms and demands approximations or parameterizations to account for the influence of processes outside the computational resolution.
In this presentation we will focus on spatial scale gaps and numerical methods to bridge those. In order to downscale the influence of large-scale processes on small-scale features, adaptive mesh refinement (AMR) has proven efficient and successful. Its application, however, is still not widely accepted due to several constraints. We will describe a few and demonstrate possible solutions. The representation of small-scale influences in large-scale processes can be described as upscaling. Usually, this is performed by appropriate parameterizations, using somewhat averaged modifications to the governing equations to account for the small-scale influence. An alternative method that is capable of preserving subgrid-scale structure is provided by modern numerical multiscale Galerkin methods. We will present such a method that is tailored to situations with dominant transport terms.
The combination of AMR and multiscale numerical methods may allow for mathematically and physically consistent discretization methods for scale interactions.
Baroclinic instability drives mesoscale eddy formation. We perform a von Neumann analysis of several B and C grid discretizations in the Eady problem to examine the impact of horizontal spatial discretization on the resulting growth rates. Growth rates of symmetric instability are typically too small and converge upwards, and at fixed resolution the C grid is more accurate. Baroclinic instability on the B grid has a growth rate that is too low at eddy-permitting resolution, and converges upwards towards the true value as the grid is refined. On the C grid there is a spurious baroclinic instability at small scales. At large Richardson numbers this spurious instability is peaked near the grid scale, and does not disappear as the grid is refined. We are unable to completely eliminate this spurious instability, but recommend a combination of fourth-order tracer advection and biharmonic viscosity to effectively damp it out.
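The technique of von Neumann analysis itself can be sketched on a much simpler problem than the Eady discretizations analysed above. The following toy example, an assumption-free illustration of the method rather than anything from the talk, computes the amplification factor of first-order upwind advection with forward Euler time stepping.

```python
import numpy as np

# Von Neumann analysis of first-order upwind advection,
# u_t + c u_x = 0, discretised with forward Euler in time:
#   u_j^{n+1} = u_j^n - C (u_j^n - u_{j-1}^n),  C = c*dt/dx (Courant number).
# Substituting a Fourier mode u_j^n = g^n exp(i k j dx) gives the
# amplification factor g(theta), with theta = k*dx.

def amplification_factor(C, theta):
    """|g| <= 1 for 0 <= C <= 1 (stability); |g| < 1 indicates damping."""
    g = 1.0 - C * (1.0 - np.exp(-1j * theta))
    return np.abs(g)

theta = np.linspace(0.0, np.pi, 200)
for C in (0.25, 0.5, 1.0, 1.5):
    print(f"C = {C}: max |g| = {amplification_factor(C, theta).max():.3f}")
```

For 0 <= C <= 1 the maximum of |g| is 1 (stable but damping), while C > 1 gives growth; the same machinery, applied to the discretised Eady problem, yields the physical and spurious growth rates discussed in the abstract.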
Due to their key role in the global distribution of (physical) diapycnal mixing and mass transport, proper representation of internal wave dynamics in numerical models should be considered a priority, since global climate models are now configured with increasingly high horizontal/vertical resolution.
The important terms involved in the discrete representation of gravity-wave propagation are:
- the horizontal/vertical grid staggering,
- the discrete approximation of the divergence term in the continuity equation and of the horizontal pressure gradient,
- the discrete approximation of the hydrostatic equilibrium,
- the amount of vertical/horizontal diffusion.
Based on a discrete normal-mode decomposition, we study the impact of these different ingredients on the phase speed of internal waves.
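The effect of grid staggering on discrete phase speed can be sketched for the simplest case, 1D linear shallow-water gravity waves with centred differences. This is only a textbook illustration of the kind of dispersion analysis described above, not the full normal-mode decomposition of the talk.

```python
import numpy as np

# Discrete dispersion of 1D linear shallow-water gravity waves.
# Continuous: omega = c*k. With centred differences:
#   A-grid (collocated):  omega = c * sin(k dx) / dx
#   C-grid (staggered):   omega = c * 2 sin(k dx / 2) / dx

def phase_speed_ratio(k_dx, grid="C"):
    """Numerical / analytical phase speed for wavenumber times grid spacing."""
    if grid == "A":
        return np.sin(k_dx) / k_dx
    return np.sin(k_dx / 2.0) / (k_dx / 2.0)

k_dx = np.pi / 2  # wave resolved by 4 points per wavelength
print("A-grid phase speed ratio:", phase_speed_ratio(k_dx, "A"))
print("C-grid phase speed ratio:", phase_speed_ratio(k_dx, "C"))
```

Both grids slow the wave down relative to the analytical speed, but the staggered C-grid is noticeably less dispersive for resolved waves, which is one reason staggering matters for internal-wave propagation.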
The NEMO ocean model is currently based on the leapfrog scheme, which provides a good combination of simplicity and efficiency for low-resolution global simulations. However, this scheme is subject to difficulties that question its relevance at high resolution: the necessary damping of its computational mode, e.g. via a Robert-Asselin filter, affects stability and increases amplitude and phase errors of the physical mode; and because it is unconditionally unstable for diffusive processes, monotonicity or positive-definiteness comes at a substantial cost and complication.
The evolution toward a two-level time stepping for the 3-dimensional baroclinic mode based on a Runge-Kutta scheme is studied. The scheme is designed to achieve a good compromise between accuracy and stability jointly for advection, internal gravity-wave propagation and the treatment of the Coriolis terms. The analysis is first conducted for the shallow-water equations. Both space and time dimensions are discretized, to allow for a more accurate estimate of amplitude and phase errors. The analysis is then extended to the primitive equations, handled with a mode-splitting technique between the 3-dimensional baroclinic and the 2-dimensional barotropic modes. Idealized test cases illustrate the benefits associated with the revised time stepping compared to the original leapfrog.
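The amplitude error introduced by the Robert-Asselin filter can be demonstrated on the oscillation equation du/dt = i*w*u, a standard proxy for wave propagation. The sketch below compares filtered leapfrog with classical RK4 (standing in here for a generic Runge-Kutta alternative); the filter coefficient and time step are illustrative choices, not NEMO's actual settings.

```python
import numpy as np

# Oscillation equation du/dt = i*w*u, exact amplitude stays at 1.
# Leapfrog with a Robert-Asselin (RA) filter damps the physical mode;
# a Runge-Kutta scheme has no computational mode to damp.

def leapfrog_ra(w, dt, nsteps, nu=0.1):
    u_prev, u = 1.0 + 0j, np.exp(1j * w * dt)   # exact start
    for _ in range(nsteps):
        u_next = u_prev + 2j * w * dt * u
        # Robert-Asselin filter applied to the middle time level
        u = u + nu * (u_prev - 2.0 * u + u_next)
        u_prev, u = u, u_next
    return abs(u)

def rk4(w, dt, nsteps):
    u = 1.0 + 0j
    f = lambda v: 1j * w * v
    for _ in range(nsteps):
        k1 = f(u); k2 = f(u + 0.5*dt*k1); k3 = f(u + 0.5*dt*k2); k4 = f(u + dt*k3)
        u = u + dt / 6.0 * (k1 + 2*k2 + 2*k3 + k4)
    return abs(u)

w, dt, n = 1.0, 0.1, 500
print("leapfrog+RA amplitude:", leapfrog_ra(w, dt, n))  # < 1: spurious damping
print("RK4 amplitude:        ", rk4(w, dt, n))          # very close to 1
```

The filtered leapfrog loses a substantial fraction of the wave amplitude over 500 steps, while the Runge-Kutta amplitude error is negligible at this step size, which is the amplitude-error argument made in the abstract.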
My research involves improving the barotropic-baroclinic splitting of the time-stepping algorithm of the Model for Prediction Across Scales — Ocean (MPAS-Ocean) with a view to improving the numerical stability and solution accuracy while reducing the computational time. More specifically, I have been studying (a) different filters for time-averaging the intermediate instantaneous barotropic modes and including the ‘mean’ solution in the time derivative for the next baroclinic (large) time step, and (b) variations of the forward-backward time-stepping algorithm for advancing these barotropic modes. The primary purpose of the time-averaging filters in (a) is to minimize the aliasing and mode-splitting errors while ensuring the stability of the time-stepping scheme. The forward-backward algorithm in (b) consists of a predictor and an optional corrector stage with a set of weighting parameters, an optimum combination of which can enhance solution accuracy. I have programmed a one-dimensional shallow water equation solver in object-oriented Python for simulating the propagation of a surface gravity wave, where I have tested a number of filters and time-stepping schemes. In this talk, I will compare the efficiency and accuracy of various designs in the simplified code and global MPAS-Ocean simulations.
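The forward-backward idea at the core of this work can be sketched for the 1D linear shallow-water equations on a periodic C-grid: the thickness is stepped forward with the current velocity, and the velocity is then stepped backward using the freshly updated thickness. This is a generic minimal sketch with made-up parameters, not the author's MPAS-Ocean or Python code.

```python
import numpy as np

# 1D linear shallow water on a periodic C-grid:
#   h_t = -H u_x   (forward step)
#   u_t = -g h_x   (backward step, uses the new h)
g, H = 9.81, 100.0

def forward_backward_step(h, u, dt, dx):
    # forward step for thickness, using the current velocity
    h_new = h - H * dt * (np.roll(u, -1) - u) / dx
    # backward step for velocity, using the updated thickness
    u_new = u - g * dt * (h_new - np.roll(h_new, 1)) / dx
    return h_new, u_new

# propagate a small Gaussian bump and check mass conservation
nx, dx = 200, 1000.0
dt = 0.5 * dx / np.sqrt(g * H)               # CFL-limited time step
x = np.arange(nx) * dx
h = 0.1 * np.exp(-((x - x.mean()) / (10 * dx))**2)
u = np.zeros(nx)
mass0 = h.sum()
for _ in range(100):
    h, u = forward_backward_step(h, u, dt, dx)
print("relative mass change:", abs(h.sum() - mass0) / mass0)
```

Adding the predictor-corrector stage and the weighting parameters mentioned above amounts to blending old and new states in each of the two sub-steps.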
Moderator: Florian Lemarié
The changes in the ocean circulation due to changing isopycnal diffusivity $K$ in a global non-eddy-resolving model configuration are investigated, implemented with PyOM at a horizontal resolution of $\sim$ $2 ^{\circ}\times2 ^{\circ}$. Model setups with different values of constant $K$ are used, while all other aspects of the model configuration are kept identical. Although the direct effect of varying $K$ on density, and thus on the circulation, is small, the model shows a surprisingly large sensitivity to the changes: with larger $K$, the oceanic interior tends to become cooler and fresher, deep water formation weakens, and the overturning circulation weakens, as seen in the volume transports associated with the North Atlantic Ocean and the Southern Ocean as well as in the Antarctic Circumpolar Current. We find the following explanation: as the gradient of salinity on isopycnals is large, larger $K$ leads to freshening in the deeper ocean and a decrease in the horizontal gradient across the Antarctic Circumpolar Current. The results point out that the effect of forcing changes on the ocean circulation can be as large as the effect of isopycnal diffusivity changes. Therefore, the choice of isopycnal diffusivity in the model can significantly impact deep water formation and the residual overturning circulation.
We present a new forecast model for the rotating shallow-water equations in spherical geometry. The model is the horizontal component of a numerical weather prediction system currently under development in the theoretical meteorology group at the University of Hamburg. In contrast to the majority of spectral schemes, which employ spherical harmonics, our model uses Hough functions as the expansion basis. This represents the motion as a system of interacting waves that have a clear geophysical interpretation.
In the talk we discuss advantages and disadvantages of the model, present convergence rates on analytical solutions, and show the model's performance on geophysically important complex flows.
Consider the motion of a rotating fluid governed by the Boussinesq equations with the full Coriolis parameter. This is contrary to the so-called "traditional approximation", in which the horizontal part of the Coriolis parameter is set to zero. The model is obtained using a variational principle based on Lagrangian dynamics. The full Coriolis force is used because the horizontal component of the angular velocity plays a crucial role: it introduces a dependence on the direction of the geostrophic flow in the horizontal geostrophic plane. We expect that the singularity near the equatorial region can be removed under this assumption, giving a consistent balance relation for any latitude on the Earth. We follow a strategy similar to that of Oliver and Vasylkevych (2016) to derive the Euler-Poincar\'{e} equations for the system. First, the system is transformed to the desired scaling, highlighting the differences from the other scales. We derive the balance-model Lagrangian, the so-called L1 model of R. Salmon, using Hamiltonian principles. A near-identity transformation is applied to simplify the Hamiltonian. All calculations are carried out under the assumption of a small Rossby number. The balance relation is important for studying gravity waves, which are used to describe the interaction between ocean and atmosphere. In the long term, we hope these results will help to understand the global energy cycle, with the goal of validating and improving climate models.
The quantification of estuarine exchange flows in terms of bulk values for volume transport and salinity has been done for over a century. Yet only in recent years have scientists been able to relate these bulk values to the mixing inside the estuary. For a long-term averaged estuary, the simplest approximation states that the integrated mixing inside the estuary is the product of the inflowing salinity, the outflowing salinity and the river discharge.
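Written out, the approximation stated above reads M = s_in * s_out * Q_r, with M the integrated salinity-variance decay (mixing) inside the estuary. A one-line numerical example with made-up illustrative values:

```python
# Long-term averaged estuarine mixing relation (Burchard, 2019):
#   M = s_in * s_out * Q_r
# s_in/s_out: inflowing/outflowing salinities [g/kg], Q_r: river discharge [m^3/s]
# => M in (g/kg)^2 m^3/s. Numbers below are illustrative only.

def estuarine_mixing(s_in, s_out, q_river):
    """Integrated mixing from the bulk values of the exchange flow."""
    return s_in * s_out * q_river

print(estuarine_mixing(30.0, 20.0, 500.0))  # -> 300000.0
```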
The computation of the bulk values is done in an isohaline framework, which for estuaries is suitable as the density is approximately only controlled by the salinity. For estuaries where the contribution of temperature to the density is not negligible, an isothermal framework should be included.
In this study we do exactly this and extend the isohaline framework to a thermohaline framework which includes potential temperature. With this framework we can decompose the exchange flow in a T-S diagram and compute robust bulk values of the exchange flow. The additional information about potential temperature enhances the information about the water masses of the exchange flow.
To show the strength of this method, we elaborate on the exchange flow of the Persian Gulf, a large inverse estuary with a distinct seasonal cycle. This seasonal cycle is reflected in the properties of the exchange flow, which we discuss with a focus on the composition of the outflowing water.
Generally speaking, test cases offer a simplified framework for understanding the numerical ingredients of an OGCM. However, even simple test cases can present difficulties. As a newcomer to the ocean modelling community, I compared NEMO (GFD) and FreshKiss2d (CFD) in the lock-exchange experiment framework against analytical and experimental results (Benjamin, 1968; Ilicak, 2014; Adduce, 2012).
NEMO and FreshKiss2d obviously present differences, as the two codes target distinct objectives (implicit regularization at centimetric resolutions). How, then, can these differences be quantified? What is the proper metric to quantify the quality of a numerical method? Can we set a validity range using analytical solutions and experimental results? The front speed is widely used, but it is not relevant for this goal, and computing the potential energy in an experimental context is not straightforward. Another problem is that I ran hydrostatic simulations while the laboratory experiments are non-hydrostatic. Is it sufficient to adjust parameterizations to ensure consistent results? How can results obtained in a laboratory tank be transferred to NEMO's applications? Finally, the nonlinearities of the lock-exchange experiment bring out the difficulty of quantifying spurious tracer mixing. Indeed, even from a theoretical point of view, without any source of dissipation, it is not clear whether the interface between light and dense fluids should be continuous or not. How, then, can the effect of momentum irregularities on mixing be evaluated in a numerical context?
As a conclusion, my experience with the lock-exchange test case has raised more questions than it has provided answers. Still, as the organisers call for sharing experience with test cases, it may be relevant to open such a discussion.
Meso- to submesoscale processes are not properly resolved in current climate models and need to be parameterized. Insight into the characteristics of the turbulent regimes can be gained from Lagrangian particle statistics. We study the relative dispersion of surface drifters in the California upwelling system from observed drifter trajectories and a high-resolution ocean model.
From the data sets, the non-local regime with the unfolding time $\tau$, the local regime with the energy transfer rate $\epsilon$ and the diffusive regime with the diffusivity $\kappa$ are estimated. The comparison shows that the diffusivity $\kappa$ of the surface drifters is one order of magnitude larger than that from the numerical simulation.
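The diffusive-regime estimate can be sketched as follows: in two dimensions the mean square pair separation grows like 4*kappa*t, so kappa follows from the slope of separation variance against time. Synthetic random-walk separations stand in for the drifter data here; everything below is an illustrative construction, not the study's data or code.

```python
import numpy as np

# Estimating a relative diffusivity kappa in the diffusive regime:
#   <D^2>(t) ~ 4 * kappa * t   (2D pair separations D)
rng = np.random.default_rng(0)
npairs, nsteps, dt = 2000, 200, 3600.0      # synthetic pairs, hourly steps
kappa = 100.0                                # m^2/s, prescribed "truth"

# 2D separation vectors performing a random walk:
# per-step, per-component increment variance = 2 * kappa * dt
incr = rng.normal(0.0, np.sqrt(2.0 * kappa * dt), size=(nsteps, npairs, 2))
D = np.cumsum(incr, axis=0)                  # separations over time
D2 = (D**2).sum(axis=-1).mean(axis=1)        # <D^2>(t), averaged over pairs

t = (np.arange(nsteps) + 1) * dt
kappa_est = np.polyfit(t, D2, 1)[0] / 4.0    # slope / 4
print(f"prescribed {kappa}, estimated {kappa_est:.1f} m^2/s")
```

With real drifters the same fit is applied only over the time range where the separation statistics are actually diffusive, after the non-local and local regimes.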
The Spanish Institute of Oceanography (Instituto Español de Oceanografía, IEO) has been performing sustained observations of physical (temperature, salinity and currents) and biochemical variables (nutrients, oxygen and plankton) along the N and NW Atlantic Iberian coast since the late 1980s. In this contribution, we show advances in the application of a realistic high-resolution configuration of the ROMS model coupled to a Fasham-type biogeochemical model (N2PZD2). We will show how different choices of model configuration (bathymetry smoothing, horizontal advection schemes, vertical turbulence closure, resolution, open boundary forcing…) affect model performance in comparison with observations. We will concentrate on the seasonal and interannual variability of freshwater plumes, upwelling, and shelf and slope currents. Additionally, we will show how the use of a numerical model combined with sustained observations of physics and biogeochemistry from the coastal observatory allows us to study the variability of plankton productivity, as well as the effect of circulation on early life stages of fish at different temporal and spatial scales in the area, with special emphasis on the spring transition.
Forecasting the transport of anthropogenic pollution at the sea surface is an important problem. The major microplastics (MPs) sources for the open sea are considered to be cities and river run-off. The bays of large port cities usually suffer from a significant anthropogenic load. However, the features of the transport of polymer particles in the urban coastal zone have been studied insufficiently due to the large number of physical processes that act there. The complex shape of a bay, along with the presence of shore-protection structures, can lead to accumulation of polymer pollutants in the bay. There are also difficulties in acquiring data on pollution levels from the local authorities. This is the case for the Black Sea basin, where rather few studies have been carried out, especially for city bays. Understanding the circulation in a city bay, together with appropriate measurements of MPs concentration, allows one to estimate the amount of polymer pollution released to the open sea.
Recently, we have conducted such studies in the bay of a large port city on the Russian part of the Black Sea coast – Sevastopol. The bay is significantly stretched in the zonal direction and is an estuarine basin. The easternmost part of the bay is occupied by the river mouth. The breakwaters in the western part can reduce water exchange with the open sea by a factor of two, by some estimates.
The study investigates the circulation induced by the prevailing winds and storm surges outside the bay, and the corresponding effect on MPs concentrations: removal, accumulation and redistribution. The circulation in the bay is reconstructed with the NEMO modelling framework. Due to the very complex coastline, a regional configuration has been developed which allows numerical modelling in city bays on an irregular grid with a spatial resolution reaching 30 m in places. An open boundary outside the bay is used. To assess the relevance of the developed configuration, numerical experiments are carried out with a uniform wind field: constant, and with daily climatological variability.
The study was carried out in FSBSI FRC MHI and supported by the Russian Foundation for Basic Research (Grant No. 18-55-45024, Grant No. 20-45-920019) and the state assignment (MHI #0827-2020-0004).
Subject of this talk are the mathematical challenges and the numerical treatment of large-scale sea-ice problems at high resolutions. The model under consideration goes back to Hibler ("A dynamic thermodynamic sea ice model", J. Phys. Oceanogr., 1979) and is based on a viscous-plastic description of the ice as a two-dimensional thin layer on the ocean surface.
We present the ICON sea-ice model, which is part of the ICON ocean model and provides the first C-grid-like finite-element discretization of sea-ice dynamics on triangles. The discretization is based on the Crouzeix–Raviart element, with degrees of freedom at the edge midpoints of a cell. The advantages of the new staggering are the straightforward coupling to the ocean and atmosphere models and its suitability for unstructured meshes.
In a study we compare the Crouzeix–Raviart element and the Rannacher–Turek element, a realization of the Crouzeix–Raviart element on quadrilaterals, with piecewise linear (P1) finite elements, which have degrees of freedom at the vertices of a cell. The new discretization of sea-ice dynamics is promising: it resolves a similar number of linear kinematic features on a two times coarser mesh resolution and the same total deformation on a four times coarser mesh.
Marine hydrodynamic models, in addition to simulating ocean physics, also act as hosts for marine biogeochemical models. The latter can typically, but not always, be formulated as a set of partial differential equations, which complement the passive transport of matter concentrations by their local change through biogeochemical processes. Numerical solvers mostly use the mode-splitting method, alternating between local biogeochemical changes and advection or mixing. Since the set of ODEs which forms the ecosystem model mostly represents a strong simplification of reality anyway, the exact approximation of the unknown analytical solution is often considered less important than the preservation of properties of the ODE system. This talk will focus on the limitations this imposes on the architecture and numerics of the physical host models, and on the two-way interaction between these and their biogeochemical guest components.
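A minimal sketch of this mode splitting alternates a transport sub-step with a local biogeochemical ODE sub-step. The upwind advection, the simple decay law and all parameters below are hypothetical stand-ins, chosen so that positivity, a typical ODE-system property to preserve, holds by construction.

```python
import numpy as np

# Mode splitting: transport sub-step (1D upwind advection, periodic)
# alternating with a local biogeochemical sub-step (exponential uptake,
# dC/dt = -r*C, integrated exactly per cell -> positivity-preserving).

def advect_upwind(c, courant):
    return c - courant * (c - np.roll(c, 1))

def biogeochem_step(c, r, dt):
    return c * np.exp(-r * dt)        # exact local decay

nx, courant, r, dt = 100, 0.5, 1e-6, 600.0
c = np.ones(nx)
c[40:60] = 5.0                        # tracer patch
for _ in range(200):
    c = advect_upwind(c, courant)     # transport sub-step
    c = biogeochem_step(c, r, dt)     # local source/sink sub-step
print("min:", c.min(), "max:", c.max())
</```

Because each sub-step is positivity-preserving and bounded, so is their composition; choosing a transport scheme without these properties would break the guarantee, which is exactly the kind of constraint on the host model discussed above.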
Many Earth System Models use libraries like ESMF or OASIS for coupling separate atmosphere and ocean models and others. These libraries provide spatial interpolation methods to support the data exchange at the air-sea interface. The coupling happens at pre-defined coupling time points.
For idealized studies of physical processes at the air-sea interface, it is advantageous to use holistic atmosphere-ocean models, i.e. with one set of governing equations. Different horizontal discretizations are used for the ocean and atmosphere parts.
Multirate time integration methods allow integration with different time steps for the ocean and atmosphere components. The exchange at the air-sea interface is then conducted with the large time step of the slow ocean component, i.e. the coupling takes place at each integration step.
This presentation will show how multirate time integration schemes are applied for an idealized atmosphere-ocean model with a conservative interpolation of flux data at the air-sea interface.
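A toy version of the multirate idea: a fast "atmosphere" variable is substepped within each slow "ocean" step, and the air-sea exchange uses a flux held fixed over the slow step. The linear relaxation model and all coefficients are invented for illustration and do not represent the schemes of the talk.

```python
# Multirate sketch: nsub fast substeps per slow step; the exchange flux is
# computed once per slow step, mirroring coupling at the slow step only.

def multirate_step(T_oce, T_atm, dt_slow, nsub, k_exch=1e-5, k_atm=1e-3):
    dt_fast = dt_slow / nsub
    flux = k_exch * (T_atm - T_oce)              # frozen over the slow step
    for _ in range(nsub):                        # fast atmosphere substeps
        T_atm += dt_fast * (-k_atm * (T_atm - T_oce))
    T_oce += dt_slow * flux                      # one slow ocean step
    return T_oce, T_atm

T_oce, T_atm = 10.0, 20.0                        # initial "temperatures"
for _ in range(1000):
    T_oce, T_atm = multirate_step(T_oce, T_atm, dt_slow=3600.0, nsub=60)
print(T_oce, T_atm)    # the two components converge toward each other
```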
We first discuss solutions for barotropic tidal rectification in a 1D configuration, with or without bottom friction. Exact or approximate solutions are found, which show a very strong effect of dissipation. Indeed, bottom friction, for instance, drastically changes the rectified current, even with a very small bottom drag.
We thus also show that other dissipative processes, such as viscosity, can modify the rectified current.
As a result, we expect the numerical schemes and/or the resolution (which fixes the order of magnitude of the NUMERICAL viscosity) to have a strong effect on the numerical solution for rectified currents. As a consequence, when the numerical dissipation is above the physical one, the rectified current is spoiled.
Alternatively, we can estimate the viscosity associated with a resolution. This provides a way to calculate the minimum resolution needed to get a correct rectified current. It is very small in regions with strong tidal currents.
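As a back-of-envelope version of this estimate: if one assumes the implicit viscosity of a first-order upwind scheme, nu_num ~ U * dx / 2, then requiring nu_num < nu_phys yields a maximum admissible grid spacing. Both the scaling and the numbers below are our illustrative assumptions, not values from the talk.

```python
# Minimum-resolution estimate from a first-order upwind viscosity scaling:
#   nu_num ~ U * dx / 2   =>   dx < 2 * nu_phys / U

def max_grid_spacing(u_tidal, nu_phys):
    """Grid spacing [m] below which nu_num stays under nu_phys [m^2/s]."""
    return 2.0 * nu_phys / u_tidal

# strong tidal current of 1 m/s, physical (eddy) viscosity of 1 m^2/s
print(max_grid_spacing(1.0, 1.0))   # -> 2.0 m: very small, as claimed above
```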
Internal-tide generation has been quantified using both pressure work and energy conversion. When calculating the pressure work from simulated or observed data, the internal-tide pressure has to be decomposed from the full pressure, for which various options exist. We show that the conversion, which has to be derived from the depth-integrated energy equations, contains the work done by the form drag both at the bottom and at the surface, with the latter being about 1% of the former. For calculating the pressure work, the internal-tide pressure identified as the deviation from the depth-averaged pressure perturbation has to be used.
We analyzed the work done by the bottom form drag in STORMTIDE2, a concurrent simulation of circulation and tides. As expected, the identified internal-tide pressure reveals the characteristic pressure drop from the windward to the leeward side of an obstacle. The M2 internal-tide generation in STORMTIDE2 is more strongly controlled by the barotropic tide than by the topographic slope, partly because the tidal velocity can change by up to one order of magnitude from the top to the foot of a high ridge within a short distance, a feature only produced by a high-resolution model. Consequently, the intense generation maps onto the immediate proximity of the summits of high ridges, making the global generation strongest near 1200 m depth and decreasing drastically below 3000 m. The depth structure of the generation differs between basins, which could have different impacts on diapycnal mixing and circulation in different basins.
Estuaries are characterised by fresh-water inflow from river run-off and inflow of saline sea water due to the estuarine circulation. Inside the estuary, brackish water of intermediate salinity is formed by turbulent mixing and flows out into the ocean through the mouth of the estuary. This mixing is quantified as the decay of salinity variance in the estuary. In numerical models, according to Klingbeil et al. (2014), the effective mixing can be accurately decomposed into physical mixing (due to the turbulence closure scheme) and numerical mixing (due to truncation errors of the salinity advection scheme). Using the isohaline analysis framework proposed by Walin (1977), physical and numerical mixing per salinity class can be calculated (Burchard, 2019), which allows one to accurately calculate the diahaline turbulent salt transport. Isohaline volumes and surface areas, and thus diahaline salinity gradients, can be calculated for each isohaline surface, which in turn allows one to estimate effective physical and numerical diahaline eddy diffusivities. These analysis methods are demonstrated for an idealised numerical model of a tidal estuary.
References:
Burchard, H. (2019). A universal law of estuarine mixing. J. Phys. Oceanogr., https://journals.ametsoc.org/doi/abs/10.1175/JPO-D-19-0014.1.
Klingbeil, K., Mohammadi-Aragh, M., Gräwe, U., & Burchard, H. (2014). Quantification of spurious dissipation and mixing–Discrete variance decay in a Finite-Volume framework. Ocean Modell., 81, 49-64.
Walin, G. (1977). A theoretical framework for the description of estuaries. Tellus, 29, 128-136.
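A minimal numerical illustration of diagnosing mixing as discrete variance decay, in the spirit of Klingbeil et al. (2014): advect a sharp salinity front with a diffusive (first-order upwind) scheme and accumulate the tracer variance lost per step. The 1D periodic setup is a hypothetical toy, far simpler than the per-salinity-class budgets of the study.

```python
import numpy as np

# Numerical mixing diagnosed as discrete variance decay: the variance lost
# by the advection operator over a step measures its spurious mixing.

def upwind_step(s, courant):
    return s - courant * (s - np.roll(s, 1))

nx, courant = 100, 0.5
s = np.where(np.arange(nx) < nx // 2, 30.0, 10.0)   # sharp salinity front
num_mix = 0.0
for _ in range(50):
    var_before = np.var(s)
    s = upwind_step(s, courant)
    num_mix += var_before - np.var(s)    # variance destroyed by advection
print("accumulated numerical mixing:", num_mix)   # > 0: scheme is diffusive
```

The mean salinity is conserved while the variance decays, which is exactly why variance decay isolates the (spurious) mixing contribution of the advection scheme.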
The penetration of dense inflows down steep continental slopes plays an important role in the formation of various ocean characteristics, including the renewal of deep waters and their saturation with oxygen, the renewal of intermediate waters, the formation of vertical density stratification, convection processes, and the trajectory and intensity of the flow of salt waters, among others. Despite the importance of these processes, their realistic modelling is fraught with several difficulties. One of these is that the dynamics of the narrow and thin bottom water masses differ from the dynamics of the background fluid. The commonly used hydrostatic modelling approaches (primitive equations) are unable to reproduce with due accuracy the dynamic processes that arise over bathymetric features when vertical acceleration cannot be neglected. Under these conditions, the hydrodynamics in such regions requires the more complete non-hydrostatic equations.
Estimates of the role of the non-hydrostatic correction in the slope dynamics are presented through comparison with the hydrostatic approximation.
The resolution of Earth System models may be substantially increased in the next decade with the emergence of exascale supercomputer architectures, provided that mathematical and algorithmic developments for these models of the atmosphere, ocean, sea-ice and biogeochemistry can adequately handle heterogeneous node architectures supporting different levels of numerical precision combined with complex memory hierarchies. This requires us to carefully test precision sensitivity in archetypal geophysical flow scenarios, review the algorithmic choices and their implementation, and build confidence in the revised or newly proposed methods when reconsidering the need of double-precision floating-point arithmetic as a default choice.
Single-precision arithmetic, for example, has been successfully applied to atmospheric models where 40% reductions of computational cost have been observed. Equally, the potential for reduction of numerical precision in ocean models has recently been explored (GMD 12, 3135–3148, 2019). Given that ocean models are, like atmospheric models, memory and communication bound, the use of single- or mixed-precision can be expected to deliver similar performance gains. For both atmosphere and ocean, there are however remaining questions regarding the impact on variables with long-timescale memory, the dependence on subtle differences in weak gradient scenarios of mixed-phase fluids, and more generally transport uncertainty, which would benefit from a systematic intercomparison of both algorithmic choices and their precision sensitivity.
Here we present preliminary results from running the NEMO model with single-precision arithmetic, focusing on its computational and forecast performance, with respect to the double-precision model. We focus on a double-gyre, mesoscale eddy-resolving test case and use this example to motivate discussions on how consistent test cases could be constructed to test different ocean model configurations. As the use of reduced precision will generate complex non-linear feedbacks in eddy-permitting ocean models, it will be essential to relate and compare changes due to a precision reduction to changes of conventional model parameters or different models entirely. This will allow us to put the model response into the context of the model's algorithmic uncertainty. We find that, in some cases, single-precision can deliver a greater than 2x speed-up with respect to double-precision.
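A toy illustration of the long-timescale accumulation sensitivity mentioned above: sequentially summing many small increments, as a model time-stepping loop does, in single versus double precision. This is only a sketch of the effect, not the NEMO experiment, and all numbers are made up.

```python
import numpy as np

# Sequential accumulation of 10^6 increments of 1e-3 (exact total: 1000.0).
# np.cumsum accumulates in order, mimicking a time loop; pairwise-summing
# routines like np.sum would mask the effect.
n, inc = 1_000_000, 1e-3
exact = n * inc                                           # 1000.0

total32 = np.cumsum(np.full(n, inc, dtype=np.float32))[-1]
total64 = np.cumsum(np.full(n, inc, dtype=np.float64))[-1]
print("float32:", float(total32), " float64:", float(total64), " exact:", exact)
```

The float32 accumulation drifts visibly from the exact value while float64 stays essentially exact, the same mechanism that makes variables with long-timescale memory the critical test for reduced-precision ocean models.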
See Day 1 for all names and titles.
For 20 years, the ocean climate modeling community has been confronted with the pernicious problem of uncontrolled spurious mixing that arises from numerical truncation errors in the discrete advection operator. These errors can accumulate over climate time scales to degrade the simulated pycnocline and contribute to sizable water mass drifts and spurious heat uptake. The errors generally worsen when the grid Reynolds number is allowed to exceed unity, which is commonly the case in eddying simulations designed to maximize resolved eddy energy. In this talk, we revisit the spurious mixing problem in the context of the vertical Lagrangian-remap method and hybrid vertical coordinates used by MOM6 and HYCOM. We offer a tutorial on the vertical Lagrangian-remap method that removes much of its mystery, and then argue that the method provides an ideal means to isolate the spurious mixing problem to the vertical direction, thus allowing higher-order accurate schemes to greatly reduce spurious mixing when combined with a suitable vertical coordinate choice. We display a direct comparison from the GFDL OM4 global ocean/ice climate run at nominally 1/4-degree resolution with 75 vertical degrees of freedom, in which the choice of vertical coordinate (geopotential versus hybrid isopycnal-geopotential) has a huge impact on the heat uptake and model drift.
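The remapping step at the heart of the vertical Lagrangian-remap method can be sketched in a few lines. The following first-order, piecewise-constant remap is a deliberately simplified stand-in of our own (MOM6 and HYCOM use higher-order PPM/PQM reconstructions precisely to reduce the spurious mixing such a low-order scheme incurs); it shows the conservative transfer of a tracer column between two vertical grids:

```python
import numpy as np

def remap_pcm(z_src, q_src, z_tgt):
    """Conservative piecewise-constant remap of cell means q_src from source
    interfaces z_src to target interfaces z_tgt (both increasing downward).
    Each target cell receives the overlap-weighted average of the source cells."""
    q_tgt = np.zeros(len(z_tgt) - 1)
    for k in range(len(z_tgt) - 1):
        top, bot = z_tgt[k], z_tgt[k + 1]
        total = 0.0
        for i in range(len(z_src) - 1):
            overlap = max(0.0, min(bot, z_src[i + 1]) - max(top, z_src[i]))
            total += overlap * q_src[i]
        q_tgt[k] = total / (bot - top)
    return q_tgt

z_src = np.array([0.0, 1.0, 2.0, 3.0, 4.0])   # uniform source grid
q_src = np.array([1.0, 2.0, 3.0, 4.0])        # tracer cell means
z_tgt = np.array([0.0, 0.5, 2.5, 4.0])        # non-uniform target grid
q_tgt = remap_pcm(z_src, q_src, z_tgt)
```

The column integral of the tracer is preserved exactly by construction; the spurious mixing discussed in the abstract enters through the low-order reconstruction within each cell, not through the conservation property.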
I will present a semi-discrete, multilayer set of equations describing the three-dimensional motion of an incompressible fluid bounded below by topography and above by a moving free-surface. This system is a consistent discretisation of the incompressible Euler equations, valid without assumptions on the slopes of the interfaces. Expressed as a set of conservation laws for each layer, the formulation has a clear physical interpretation and makes a seamless link between the hydrostatic Saint-Venant equations, dispersive Boussinesq-style models and the incompressible Euler equations. The associated numerical scheme, based on an approximate vertical projection and multigrid-accelerated column relaxations, provides accurate and efficient solutions for all regimes. The same model can thus be applied to study metre-scale waves, even beyond breaking, with results closely matching those obtained using small-scale Euler/Navier-Stokes models, and coastal or global scale dispersive waves, with an accuracy and efficiency comparable to extended Boussinesq wave models. The implementation is adaptive, parallel and open source as part of the Basilisk framework (basilisk.fr).
The stirring and mixing of tracers by mesoscale eddies, parameterized in many models as a type of diffusion, contribute to the distribution of tracers in the ocean. In the stratified ocean interior, such processes occur mostly along the direction parallel to the local neutral density surface. However, near boundaries, small-scale turbulence tends to break this constraint so that the mesoscale eddy transport occurs mostly along a plane parallel to the boundary. We propose and implement two methods for representing diffusive mesoscale eddy fluxes within the surface boundary layer in a general vertical coordinate ocean model (MOM6). In the first method, a bulk diffusive flux is calculated by depth-averaging tracers from the surface to the surface boundary layer depth and then decomposing this flux into individual cells. In the second method, diffusive fluxes are calculated individually, layer-by-layer for all cells within the boundary layer. Both methods are tested in forced ocean/sea-ice simulations using the Community Earth System Model framework. We broadly find that the ‘bulk’ approach leads to stronger meridional heat transport in the surface ocean. Additional results showing how these schemes affect climate-relevant oceanic metrics, when compared to a control simulation, will be presented and discussed.
Stirring and mixing by mesoscale eddies lead to a downward cascade of tracer variance that is often parameterized as down-gradient diffusion. Within the stratified ocean interior, the so-called ‘neutral direction’ of these diffusive fluxes is aligned with surfaces of locally referenced potential density. However, model surfaces are in general not aligned with this neutral direction or, in the case of isopycnal models, are only approximately so. The commonly used rotated-tensor approach in many models is not positive-definite, leading to new extrema, and may induce diabatic mixing in weakly stratified regions. Here we describe and demonstrate a new operator for neutral diffusion that uses a vertically non-local stencil and calculates fluxes on layers bounded by neutral surfaces. In idealized test cases (implemented in the Modular Ocean Model version 6) the algorithm is demonstrated to be extrema-preserving with minimal dianeutral mixing for both linear and nonlinear equations of state (EOS). We further show that locally linearizing the thermal expansion and haline contraction coefficients is nearly as accurate as using the full nonlinear EOS while yielding significant computational savings.
Accurate and stable implementation of bathymetry boundary conditions remains a challenging problem. The dynamics of ocean flow often depend sensitively on satisfying bathymetry boundary conditions and correctly representing their complex geometry. Generalized (e.g. σ) terrain-following coordinates are often used in ocean models, but they require smoothing the bathymetry to reduce pressure gradient errors (Mellor et al., 1994). Geopotential z-coordinates are a common alternative that avoids pressure gradient and numerical diapycnal diffusion errors, but they generate spurious flow due to their “staircase” geometry. We introduce a new Brinkman volume penalization to approximate the no-slip boundary condition and complex geometry of bathymetry in ocean models. This approach corrects the staircase effect of z-coordinates, does not introduce any new stability constraints on the geometry of the bathymetry, and is easy to implement in an existing ocean model. The porosity parameter allows modelling subgrid-scale details of the geometry. We illustrate the penalization and confirm its accuracy by applying it to three standard test flows: upwelling over a sloping bottom, a resting state over a seamount, and internal tides over highly peaked bathymetry features. Preliminary results of realistic simulations of the Gulf Stream detachment will also be discussed.
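The essence of a Brinkman-type penalization is to add a Darcy-like drag term -(χ/η)u that drives the velocity to zero inside a masked solid region. The minimal 1-D illustration below is our own sketch with made-up parameter values, not taken from the study; it uses a semi-implicit update so the stiff drag term does not restrict the time step:

```python
import numpy as np

nx, dt, eta = 100, 0.01, 1e-3     # eta: penalization time scale (small => stiff)
x = np.linspace(0.0, 1.0, nx)

# Solid mask: chi = 1 inside the bathymetry, 0 in the fluid; fractional values
# at the interface would play the role of the porosity parameter.
chi = np.where(x > 0.7, 1.0, 0.0)

u = np.ones(nx)                   # initial velocity everywhere
forcing = 0.1                     # constant body force

for _ in range(500):
    # du/dt = forcing - (chi/eta) * u, with the penalization treated implicitly:
    u = (u + dt * forcing) / (1.0 + dt * chi / eta)

# In the fluid the flow accelerates under the forcing; inside the penalized
# region the velocity is driven to ~eta*forcing, approximating no-slip.
```

The implicit division is what makes the approach easy to retrofit into an existing model: no new stability constraint appears however small η is chosen.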
Wetting and drying processes in shallow water systems driven by surges, tides and seiches have important societal, physical and biological impacts. Operational regional models are now of sufficient resolution, O(1 km), that the processes of wetting and drying need to be included. Here we describe a flux-limiter-based approach that allows a numerical ocean model with a flux formulation of tracer advection to wet and dry. Following Warner et al. (2013), the flux limiter approach limits the outflow from a cell whose depth is below a critical value defined by the user. The limiter can be a step function or a smooth function of the water depth; the latter increases model stability and avoids rapid alternation between dry and wet states on long slopes as the critical depth is approached. Furthermore, the user may proportionally limit the baroclinic fluxes as a cell transitions from wet to dry over the course of the large baroclinic time step. The simplicity of the flux limiter approach lends itself to application within existing numerical models without significant intrusion into the code base. Here we explore the scheme's effectiveness, sensitivities and limitations within the 3D NEMO ocean model by assessing it using test cases of increasing complexity. It is shown to perform well in classic channel test cases and 2D parabolic test cases with analytic solutions. Its performance against analytical 1D dam-break experiments is explored and used to interpret its performance against laboratory measurements of a 2D dam break. The scheme is also shown to run stably for a realistic 3D regional domain of the North West European shelf and to improve some aspects of the model's performance against tide gauges.
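The limiter itself can be written compactly. Below is an illustrative sketch; the smooth ramp shape and its width are our own assumptions, not necessarily those of Warner et al. (2013). The returned factor multiplies only the outgoing fluxes of a cell, so inflow that rewets a dry cell is never blocked:

```python
import numpy as np

def outflow_limiter(h, h_crit, smooth=True):
    """Fraction (0..1) of the outgoing volume flux permitted from a cell of depth h."""
    if not smooth:
        # Step limiter: block all outflow once the depth falls below h_crit.
        return np.where(h > h_crit, 1.0, 0.0)
    # One plausible smooth choice: a cubic "smoothstep" ramp over [h_crit, 2*h_crit],
    # which avoids rapid wet/dry alternation near the critical depth.
    s = np.clip((h - h_crit) / h_crit, 0.0, 1.0)
    return s * s * (3.0 - 2.0 * s)

h = np.array([0.02, 0.05, 0.075, 0.1, 0.5])   # water depths in metres
lim = outflow_limiter(h, h_crit=0.05)
# Apply as: limited_outflux = lim * outflux
```

Cells at or below the critical depth get a factor of 0, fully wet cells get 1, and the cubic ramp gives a continuous transition in between.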
We present several time-dependent analytical solutions for the incompressible Euler system with a free surface. These analytical solutions are designed to provide a basis for comparison when validating numerical simulation codes. They concern fluid flows governed by the Euler equations with or without the hydrostatic hypothesis, including wet/dry interfaces, variable density, a wide variety of boundary conditions and possibly variable bottom topography. Some of these analytical solutions can also be used to validate shallow water models or free-surface Navier–Stokes models.
The current study provides a new test case for coastal numerical models, dedicated to plume spreading in the estuary and on the shelf. The suggested estuary–shelf system combines pronounced nonlinear flow dynamics with sharp frontal boundaries and linear dynamics in across-shore geostrophic balance. The major aspects of the plume dynamics can be predicted analytically but are difficult to reproduce numerically. The test case reveals the level of numerical diffusion and the ability of the model to reproduce nonlinear processes and frontal-zone dynamics. Numerical solutions were obtained with three unstructured-mesh models: SCHISM, THETIS and FESOM-C. The current study also suggests a plume-spreading analysis based on the numerical results, which can be useful for any intercomparison study focused on plume behavior.
As part of a new climate model development effort (CliMa), an efficient LES model has been developed at MIT to simulate turbulent flow in simple geometry configurations. One of the target applications is to explicitly resolve turbulent boundary layers in a wide range of oceanic conditions, in order to help improve the mixing parameterizations used in GCMs. The model, Oceananigans, is based on numerics similar to those of the MIT General Circulation Model (MITgcm), but relies on a more efficient modal pressure solver applicable to simple domain geometries. Written in Julia, Oceananigans runs efficiently on both CPUs and GPUs and includes several recent LES sub-grid scale closures. A brief overview of the model is presented, and results from a few standard LES test cases show good agreement with either analytic solutions or well-established model results. In the broader context of model test cases, this illustrates that for idealized configurations with simple geometry, reliable tests are available to validate and evaluate individual model components. The need for robust tests addressing more complex, ocean-science-motivated modeling problems is briefly discussed from the experience gained with MITgcm.
The MOM6 ocean model is now a community open development model. With open development, contributions are much more frequent, and come from a much more diverse group of collaborators than with the older curated open source approach; the traditional use of a single person to act as a gatekeeper and take responsibility for model quality control is no longer a viable approach. Instead, to accommodate open development, we have developed an extensive automated testing protocol for contributions to MOM6 in order to maintain ocean model code quality and to detect and eliminate many types of bugs. MOM6 code is automatically tested for exact reproduction of all solutions and diagnostics across parallel decomposition, restarts, exact rotational symmetry, dimensional consistency, memory usage patterns, and a wide range of compiler settings. In each case, these tests unambiguously pass or fail. Together these tests prevent the introduction of many types of software bugs and algorithmic inconsistencies, and provide the level of quality control required for a modern ocean model codebase to be rapidly developed under an open development paradigm. This talk will describe the wide range of techniques that are used and some of the specific challenges that had to be overcome in achieving these demonstrations of perfect self-consistency in MOM6.
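One of the testing ideas mentioned above, exact reproduction of solutions across parallel decompositions, hinges on global reductions being performed in a decomposition-independent order. The toy sketch below is our own illustration of that trick, not MOM6 code: contributions are tagged with their global indices and summed in canonical order, so the result is bitwise identical however the domain is split.

```python
import numpy as np

def decomposed_sum(field, slices):
    """Global sum over a field split into per-'rank' slices, reduced in a
    fixed canonical order so the answer is bitwise reproducible for any
    decomposition of the same field."""
    pieces = [(sl.start, field[sl]) for sl in slices]
    # Reassemble contributions in global-index order before reducing.
    ordered = np.concatenate([chunk for _, chunk in sorted(pieces)])
    total = np.float64(0.0)
    for v in ordered:          # fixed sequential order => bit reproducibility
        total += v
    return total

rng = np.random.default_rng(0)
field = rng.standard_normal(1000)

one_rank  = decomposed_sum(field, [slice(0, 1000)])
four_rank = decomposed_sum(field, [slice(750, 1000), slice(0, 250),
                                   slice(250, 500), slice(500, 750)])
# Exact (not merely approximate) agreement, by construction.
```

A naive sum of per-rank partial sums would differ between decompositions at rounding level; an automated test like MOM6's can therefore demand exact equality rather than a tolerance.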
Moderator: Laurent Debreu
The Semi-implicit Cross-scale Hydroscience Integrated System Model (SCHISM), which uses unstructured grids, is set up for the North and Baltic Seas. With a resolution of ∼100 m in the narrow straits connecting the two basins, this model accurately resolves the inter-basin exchange. Validation against observations in the straits shows the model has good skill in simulating the transport and vertical profiles of temperature, salinity and currents. The timing and magnitude of the major inflow event in 2014–2015 are also realistically simulated. The analysis is focused on the two-layer exchange, its dependence on the atmospheric forcing, and the dominant physical balances. The two-layer flows in the three connecting straits show different dependencies upon the net transport, and the spatial variability of this dependence is quite pronounced. The three-strait system develops specific dynamics, with time lags and differences between currents in the individual straits during inflow and outflow conditions. Analysis of the impact of resolution indicates that the performance of the model changes depending on whether the narrow parts of the straits are resolved with a resolution of 500 m or 100 m. With the ultra-fine resolution, gravity flows, secondary circulation and the variability of salinity in deep layers are generally more adequately simulated. This paper identifies the need for a more profound analysis of the coupled dynamics of the Baltic and North Seas with a focus on the Danish straits.
Thetis is a three-dimensional coastal ocean model. It solves the hydrostatic equations coupled with the Generic Length Scale (GLS) turbulence closure model. Thetis is implemented on the Firedrake finite element modeling framework, where governing equations are declared with a Domain Specific Language (DSL). At runtime, a just-in-time compiler generates efficient C code for the model's kernels. By default, Thetis uses a linear Discontinuous Galerkin (DG) discretization, which has been evaluated with idealized benchmarks (Kärnä et al., 2018) and also used in realistic applications such as the Columbia River plume (Kärnä, 2019). The results indicate that numerical mixing is low, i.e. similar to existing structured-grid global ocean models and lower than in typical unstructured-grid coastal models. The DSL framework, however, allows rapid testing of different spatial discretization choices. In this presentation, we evaluate a mimetic finite element pair, RT2-P1DG, which uses Raviart-Thomas (RT) elements for the velocity. Contrary to the default P1DG-P1DG pair, the RT2-P1DG pair is energy- and enstrophy-conserving. It is, however, more expensive as the velocity space is formally quadratic. We show that the RT discretization does indeed better preserve eddies and also reduces internal pressure gradient errors with terrain-following grids.
We present our ongoing work on developing a new 3D nonhydrostatic model, using a finite-element method on unstructured grids, to address a wide range of scales. We show that the commonly encountered restriction on the grid aspect ratio (between the vertical and horizontal grid sizes) can be circumvented with suitable preconditioners. Benchmark tests of short-wave propagation and shoaling over obstacles are shown to demonstrate the model's accuracy and efficiency.
The tuning phase of IPSL-CM6A-LR, a climate model participating in CMIP6, lasted four years, during which we explored, among other things, different numerical recipes controlling ocean vertical mixing. Analysis of all simulations is still ongoing, but two lessons can be learned so far. [1] After more than 2,000 yr of integration (using pre-industrial external forcings), the deep ocean has not yet reached equilibrium. [2] Sensitivity experiments exploring structural and parametric uncertainties indicate that some intrinsic climatic features of this model are quite robust. Overall, this suggests that we currently have little control over the backbone of numerical oceans.
The potential of a kinetic energy backscatter scheme is demonstrated in an eddy-permitting global ocean simulation. Ocean models commonly employ (bi-)harmonic eddy viscosities which excessively dissipate kinetic energy in eddy-permitting simulations. Over-dissipation not only affects the smallest resolved scales, but also the generation of eddies through baroclinic instabilities, impacting the entire wavenumber spectrum. The backscatter scheme returns part of this over-dissipated energy back into the resolved flow.
Backscatter is employed in the FESOM2 multiresolution ocean model with a quasi-uniform 1/4° mesh. In multidecadal ocean simulations, backscatter increases eddy activity by a factor of 2 or more, moving the simulation closer to observational estimates of sea surface height variability. Moreover, mean sea surface height, temperature, and salinity biases are reduced. This amounts to a globally averaged bias reduction of around 10% for each field, which is even larger in the Antarctic Circumpolar Current.
However, in some regions such as the coastal Kuroshio, backscatter leads to a slight over-energizing of the flow, and in the Antarctic to an unrealistic reduction of sea ice. Some of these bias increases can be reduced by model tuning and related adjustments to the backscatter scheme. Nevertheless, the substantial improvements from the backscatter scheme already outweigh the remaining bias increases and the added cost.
Density is an essential parameter in ocean models and is vital for understanding ocean dynamics. However, the primitive equations carry no prognostic equation for density; instead, density is modeled as a nonlinear function of temperature, salinity, and pressure. The quantities stored in ocean models represent averages over a grid cell, and averaging nonlinear functions introduces errors: the density the model computes from cell-mean quantities is not the same as the true average density. Jean-Michel Brankart proposed a stochastic correction to this error, which he used in the hydrostatic equation to compute the pressure gradient force [1]. His correction samples the difficult-to-evaluate equation of state several times and is thus computationally expensive.
This work proposes (1) a novel way to correct the density errors incurred by averaging the nonlinear equation of state and (2) a new application of the correction. We found an analytic expression for a correction to the error. Our method requires only one evaluation of a nonlinear function and thus provides great computational savings over previous methods. We propose to use our correction in the computation of the pressure gradient force and in the computation of isopycnal slopes in the Gent-McWilliams parameterization of eddy-induced transport. Free parameters in our correction are assessed using model output from a high-resolution (0.1 degree) eddy-resolving Community Earth System Model 2 (CESM2) ocean model run [2]. These data constitute a 33-year time series of over 70 three-dimensional fields, saved every five days for a total of 2,400 files.
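The flavor of such an analytic correction can be seen with a toy quadratic equation of state. The coefficients below are made up for illustration (not TEOS-10, and not the authors' actual expression); the correction is the generic second-order Taylor form for the mean of a nonlinear function of fluctuating inputs:

```python
import numpy as np

# Toy nonlinear EOS, quadratic in T (mimicking cabbeling), linear in S.
def rho(T, S):
    return 1027.0 - 0.15 * T - 0.004 * T**2 + 0.78 * S

rng = np.random.default_rng(1)
T = 10.0 + 2.0 * rng.standard_normal(10000)   # sub-grid T samples in one cell
S = 35.0 + 0.1 * rng.standard_normal(10000)   # sub-grid S samples

true_mean = rho(T, S).mean()        # "truth": average of the pointwise density
naive     = rho(T.mean(), S.mean()) # what a model computes from cell means

# Second-order Taylor correction for averaging a nonlinear rho(T, S):
#   mean(rho) ~ rho(mean T, mean S)
#               + 0.5*(rho_TT*var(T) + 2*rho_TS*cov(T,S) + rho_SS*var(S))
# Here rho_TT = -0.008 and the cross and salinity terms vanish.
corrected = naive + 0.5 * (-0.008) * T.var()

# For this exactly quadratic EOS the single analytic term recovers the
# true mean density to rounding error, at one EOS evaluation per cell.
```

The computational appeal is visible here: the stochastic approach would evaluate `rho` for many perturbed (T, S) samples per cell, while the analytic correction needs one evaluation plus the stored sub-grid variances.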
Scale invariance of geophysical fluids is investigated in terms of a scale-invariance criterion developed by Schaefer-Rolffs et al. (2015), based on the premise that each scale-invariant subrange should have its own criterion. Two particular cases are considered, namely the synoptic scales with a significant Coriolis term and a case at smaller scales where the anelastic approximation is valid. The first case is characterized by a constant enstrophy cascade, while in the second case small-scale fluctuations of density, pressure, and temperature are taken into account. In both cases, the respective scale-invariance criteria are applied to simple parameterizations of turbulent diffusion. It is demonstrated that only dynamic approaches are scale invariant.
The balanced state in geophysical flows largely describes the dynamics and energetics of these flows. The description and diagnosis of this balanced state, however, remain controversial. In models, the numerics of the model as well as the balancing procedure used greatly affect the obtained balanced state and the errors involved.
We use two different balancing procedures in an identical model setup and find differences in the resulting balanced state. The first procedure we implement is a nonlinear initialisation procedure valid to higher orders in Rossby number, as in Eden et al. (2019). The second is optimal potential vorticity balance, following Tuba and Oliver (2019, in prep.). The results show that the numerics of the model affect the balanced state obtained from the two procedures. The numerical error can be seen in the residual signal, which we interpret as the unbalanced motions, i.e. internal gravity waves. It is therefore crucial to consider the effect of the numerics in models and to make a suitable choice of balancing procedure to obtain the balanced state.