Abstract: | Often one encounters dynamical systems containing a wide range of natural frequencies. These multiple-time-scale systems, which are modeled using stiff ordinary differential equations, are well known to present a significant challenge to efficient numerical solution. Multiple-time-scale systems can be broadly classified into two classes: (1) systems with well-separated discrete time scales, such as molecular dynamics simulations and electrical networks, and (2) systems with a continuously-distributed range of time scales, such as aerosol dynamics, multiscale structural dynamics, and turbulent fluid flow. For the numerical simulation of systems with well-separated discrete time scales, one can frequently average over the fast time scales, either analytically or numerically. This results in effective models with only slower time scales and allows efficient numerical simulations with large timesteps. Where this is not possible, whether due to system complexity or because there is simply a wide range of time scales with no clear scale separation, as in systems with continuously-distributed time scales, it has traditionally been necessary to simulate the entire system at the rate of the fastest time scale, which can be very expensive.
To efficiently simulate multiple-time-scale systems, many researchers have developed multiple-time-step numerical integration methods, in which more than one timestep is used. These advance different components of the system forward in time at different rates, so that faster components can use small timesteps while slower components use large timesteps, resulting in lower computational cost. Most multiple-time-step integrators apply only to systems with discrete time scales; subcycling methods, mollified methods, and r-RESPA are good examples. In addition, these methods require that their several numerical timesteps be integer multiples of each other. In contrast, one family of multiple-time-step methods, namely asynchronous integrators, does not enforce any such restriction. These methods permit incommensurate timesteps, so that the system components never synchronize at common time instants. This feature allows some asynchronous methods to be efficiently applied to systems with continuously-distributed time scales, where every time scale can be given an appropriately-chosen numerical timestep. However, currently known asynchronous methods are at most second-order accurate and are known to suffer from resonance instabilities, severely limiting their practical efficiency gain relative to their synchronous counterparts.
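The multiple-time-step idea with integer timestep ratios can be illustrated by a minimal subcycling sketch: the slow force is applied once per large step, while the fast force is integrated with an integer number of substeps. This is an illustrative sketch only, assuming forward Euler as the base scheme; the function names and the test problem are our own assumptions, not a specific published method.

```python
import numpy as np

def subcycle_step(x, h_slow, n_sub, f_slow, f_fast):
    """One subcycled step: the slow force is applied once with the large
    timestep, while the fast force takes n_sub substeps of size
    h_slow / n_sub (forward Euler throughout).  Illustrative sketch;
    names and the Euler base scheme are assumptions."""
    x = x + h_slow * f_slow(x)      # slow force: one large step
    h_fast = h_slow / n_sub
    for _ in range(n_sub):          # fast force: n_sub small steps
        x = x + h_fast * f_fast(x)
    return x

# Hypothetical stiff test problem: x' = -x (slow part) - 50x (fast part).
f_slow = lambda x: -1.0 * x
f_fast = lambda x: -50.0 * x
x = np.array([1.0])
h, n_sub = 0.01, 10                 # fast substep is h/10
for _ in range(100):                # integrate to t = 1
    x = subcycle_step(x, h, n_sub, f_slow, f_fast)
```

Note that stability of the fast substeps requires `50 * h / n_sub < 2` here, which is why the fast force cannot simply be advanced with the large timestep `h`.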
In the present work, a new family of high-order Multistep Asynchronous Splitting Methods (MASM) is developed, based on a generalization of both classical linear multistep methods and the previously-known Asynchronous Splitting Methods (ASMs). These new methods compute high-order trajectory approximations using the history of system states and force vectors, as in linear multistep methods, while at the same time allowing incommensurate timesteps for different system components, as in ASMs. This allows them to be both high-order and asynchronous, and means that they are applicable to systems with either discrete or continuously-distributed time scales.
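To make the asynchrony concrete, the following is a minimal first-order asynchronous-splitting sketch (not MASM itself, which additionally uses multistep history for high order): each split force field contributes an update at its own tick times, and the tick spacings may be incommensurate, so the components never meet at common time instants. The event-queue formulation and all names here are our own illustrative assumptions.

```python
import heapq
import math
import numpy as np

def asynchronous_euler(x0, forces, steps_h, t_end):
    """First-order asynchronous-splitting sketch: split force f_i applies
    an impulse h_i * f_i(x) at its own tick times k * h_i, which need not
    be commensurate across components.  A priority queue always advances
    the component with the earliest pending tick.  Illustrative only."""
    x = np.array(x0, dtype=float)
    events = [(h, i) for i, h in enumerate(steps_h)]  # (next tick, component)
    heapq.heapify(events)
    while events:
        t, i = heapq.heappop(events)
        if t > t_end:
            break                                  # earliest tick is past t_end
        x = x + steps_h[i] * forces[i](x)          # impulse from component i
        heapq.heappush(events, (t + steps_h[i], i))
    return x

# Hypothetical test: x' = -x split into two halves with incommensurate
# timesteps (ratio sqrt(2)); exact solution at t = 1 is exp(-1).
x_end = asynchronous_euler(
    [1.0],
    [lambda v: -0.5 * v, lambda v: -0.5 * v],
    [0.005, 0.005 * math.sqrt(2)],
    t_end=1.0)
```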
Consistency and convergence are established for these new high-order asynchronous methods, both theoretically and via numerical simulations. For convergence, the only requirement is that the ratio of the largest to the smallest timestep remains bounded from above as the timesteps tend to zero. For sufficiently regular ODE systems, an $m$-step MASM can achieve $m$-th order accuracy, which is proven analytically and then validated using numerical experiments. Numerical simulations show that these methods can be substantially more efficient than their synchronous counterparts. Provided that appropriate timesteps are chosen, the efficiency gain of MASM over synchronous multistep methods depends largely upon the force field splitting used.
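The kind of empirical order check used in such experiments can be sketched as follows. Since the MASM formulas are not reproduced here, the sketch uses the classical two-step Adams-Bashforth method (a synchronous $m = 2$ multistep method) on a scalar test ODE and verifies that halving the timestep shrinks the global error by roughly $2^m = 4$; the function names and test problem are our own assumptions.

```python
import math

def ab2_solve(f, x0, h, n_steps):
    """Two-step Adams-Bashforth for the scalar ODE x' = f(x).
    One second-order Heun step seeds the multistep history.
    Illustrative stand-in for an m = 2 multistep method; not MASM."""
    x_prev, f_prev = x0, f(x0)
    x = x_prev + 0.5 * h * (f_prev + f(x_prev + h * f_prev))  # Heun startup
    for _ in range(n_steps - 1):
        f_cur = f(x)
        x, f_prev = x + h * (1.5 * f_cur - 0.5 * f_prev), f_cur
    return x

# Empirical order: halving h should reduce the error by about 2**2 = 4.
f = lambda x: -x
exact = math.exp(-1.0)
err_h = abs(ab2_solve(f, 1.0, 0.01, 100) - exact)    # h = 0.01 to t = 1
err_h2 = abs(ab2_solve(f, 1.0, 0.005, 200) - exact)  # h = 0.005 to t = 1
order_ratio = err_h / err_h2
```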
MASM is proven to be stable, provided it is convergent. The stability criterion also depends strongly upon the chosen splitting of the force field. In the case of linear systems for which the Jacobian of the force vector is diagonalizable, the force vector splitting can be classified into \emph{asynchronous splitting}, where each eigen-component is lumped with one of the component force vectors, and \emph{time scale splitting}, where an eigen-component is split between two or more component force vectors. A general force vector splitting combines the two, with some eigen-components lumped with a single component force vector and others split between different component force vectors. For asynchronous splitting we prove a stability condition with a bound essentially the same as for the corresponding synchronous multistep method, while for time scale splitting we restrict the analysis to two-component systems and derive stability conditions for both conservative and non-conservative systems.
Finally, we also present an efficient time step selection (TSS) strategy that can be employed with MASM for numerically solving ODEs, yielding the TSS-MASM method. This strategy is based on an optimal equidistribution principle: the component timesteps are chosen so that each split force field contributes equally to the local discretization error. The efficiency gain is system- and splitting-dependent and is investigated numerically. For strongly coupled systems, such as a multi-scale spring-mass-damper system, TSS-MASM has approximately the same efficiency as synchronous multistep methods, while for weakly coupled systems, such as aerosol condensation, TSS-MASM is much more efficient. |
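A minimal sketch of the equidistribution principle: if the local error contributed by split force $i$ over its step is modeled as $C_i h_i^2$ (a simple first-order error model we assume purely for illustration; the actual TSS formula for an $m$-step MASM is not reproduced here), then equating every contribution to a common tolerance gives $h_i = \sqrt{\mathrm{tol}/C_i}$.

```python
import numpy as np

def equidistributed_steps(error_consts, tol):
    """Per-component timesteps from an equidistribution principle:
    with an assumed local-error model C_i * h_i**2 for split force i,
    setting every contribution equal to tol gives h_i = sqrt(tol / C_i).
    Hypothetical error model for illustration only."""
    C = np.asarray(error_consts, dtype=float)
    return np.sqrt(tol / C)

# Stiffer split forces (larger C_i) receive proportionally smaller steps.
h = equidistributed_steps([100.0, 1.0, 0.01], tol=1e-4)
```

Under this model the timesteps automatically spread across the system's range of time scales, with each component resolved just finely enough to meet the shared error budget.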