| Literature DB >> 32292231 |
Abstract
In this paper, we consider the problem of accelerating the numerical simulation of time-dependent problems by time domain decomposition. The available algorithms enabling such decompositions present severe efficiency limitations and are an obstacle for the solution of large-scale and high-dimensional problems. Our main contribution is the improvement of the parallel efficiency of the parareal in time method. The parareal method is based on combining predictions made by a numerically inexpensive solver (with coarse physics and/or coarse resolution) with corrections coming from an expensive solver (with high-fidelity physics and high resolution). At convergence, the algorithm provides a solution that has the fine solver's high-fidelity physics and high resolution. In the classical version, the fine solver has a fixed high accuracy, which is the major obstacle to achieving a competitive parallel efficiency. In this paper, we develop an adaptive variant that overcomes this obstacle by dynamically increasing the accuracy of the fine solver across the parareal iterations. We show theoretically that the parallel efficiency becomes very competitive in the ideal case where the cost of the coarse solver is small, proving that the only remaining factors impeding full scalability are the cost of the coarse solver and the communication time. The theory also has the merit of setting a general framework to understand the success of several extensions of parareal based on iteratively improving the quality of the fine solver and re-using information from previous parareal steps. We illustrate the actual performance of the method on stiff ODEs, a challenging family of problems since the only mechanism for adaptivity is time and efficiency is affected by the cost of the coarse solver.
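The prediction-correction mechanism described above is the standard parareal update: each iteration combines a sequential coarse sweep with fine propagations that can run in parallel across subintervals. The sketch below illustrates the classical (fixed-accuracy) iteration only; the function names and interface are illustrative, not taken from the paper.

```python
import numpy as np

def parareal(u0, T, N, K, fine, coarse):
    """Classical parareal iteration on [0, T] with N subintervals.

    fine(u, t0, t1) and coarse(u, t0, t1) propagate a state u from t0 to t1.
    Returns the states at the N+1 subinterval boundaries after K corrections.
    """
    t = np.linspace(0.0, T, N + 1)
    # Initial guess: one sequential coarse sweep.
    U = [u0]
    for n in range(N):
        U.append(coarse(U[n], t[n], t[n + 1]))
    for k in range(K):
        # The fine propagations are independent -> parallel in a real code.
        F = [fine(U[n], t[n], t[n + 1]) for n in range(N)]
        G_old = [coarse(U[n], t[n], t[n + 1]) for n in range(N)]
        U_new = [u0]
        for n in range(N):
            # Predictor-corrector update: G(U_new) + F(U_old) - G(U_old).
            U_new.append(coarse(U_new[n], t[n], t[n + 1]) + F[n] - G_old[n])
        U = U_new
    return U
```

A known property of this iteration is finite-step convergence: after K = N iterations the boundary values coincide with those of a sequential fine sweep. The adaptive variant studied in the paper replaces the fixed-accuracy `fine` by a solver whose tolerance is tightened across iterations.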
Keywords: Convergence rates; Domain decomposition; Inexact fine solver; Parallel efficiency; Parareal in time algorithm; a posteriori estimators
Year: 2020 PMID: 32292231 PMCID: PMC7155213 DOI: 10.1016/j.cam.2020.112915
Source DB: PubMed Journal: J Comput Appl Math ISSN: 0377-0427 Impact factor: 2.621
Fig. 1: Mapping of the achieved accuracies against the tolerance parameters of the library. The dots are computed values: for a given value of the parameter, we measure the accuracy of the solver. We then interpolate the points with a cubic spline. In this way, for a given intermediate accuracy required by the algorithm, we can infer the corresponding parameter value. Case shown: integrator of the fine solver.
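The mapping in Fig. 1 can be sketched as follows: measure the achieved accuracy for a few tolerance settings, fit a cubic spline through the points, and invert the fit to recover the tolerance that delivers a target accuracy. The data values below are hypothetical placeholders, not the paper's measurements.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical measured pairs (log10 scale, since accuracies span decades):
# solver tolerance parameter vs. achieved accuracy.
log_tol = np.array([-10.0, -8.0, -6.0, -4.0, -2.0])   # log10 of the tolerance
log_acc = np.array([-9.2, -7.3, -5.4, -3.2, -1.1])    # log10 of measured error

# Accuracy as a function of tolerance (x must be increasing for CubicSpline).
acc_of_tol = CubicSpline(log_tol, log_acc)

# Inverse mapping: tolerance as a function of accuracy, used by the adaptive
# algorithm to pick the library parameter for a prescribed target accuracy.
tol_of_acc = CubicSpline(log_acc, log_tol)

target = -4.0                        # want an error of about 1e-4
tol = 10.0 ** tol_of_acc(target)     # tolerance to pass to the ODE library
```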
Fig. 2: Speed-up relative to running a sequential fine solver as a function of the number of processors. Dashed lines: classical parareal. Solid lines: adaptive parareal.
Brusselator: impact of the cost of the coarse solver. Speed-up and efficiency.
| Speed-up | Classical parareal | Adaptive parareal |
|---|---|---|
| With cost | 4.06 | 7.38 |
| Without cost | 7.38 | 37.76 |

| Efficiency | Classical parareal | Adaptive parareal |
|---|---|---|
| With cost | 8% | 14.76% |
| Without cost | 14.76% | 75.52% |
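The two tables are linked by the usual definition of parallel efficiency, namely speed-up divided by the number of processors. The ratios of the reported values (e.g. 7.38 / 0.1476 = 50) suggest a 50-processor run for this case; this processor count is inferred from the figures, not stated in this record.

```python
# Parallel efficiency = speed-up / number of processors.
# N = 50 is inferred from the reported ratios (7.38 / 50 = 14.76%),
# not stated explicitly in this excerpt.
N = 50
speedups = {
    ("classical", "with cost"): 4.06,
    ("adaptive", "with cost"): 7.38,
    ("classical", "without cost"): 7.38,
    ("adaptive", "without cost"): 37.76,
}
# Efficiency in percent for each configuration.
efficiency = {k: 100.0 * s / N for k, s in speedups.items()}
```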
Fig. 3: Brusselator: convergence history of the errors. Top: classical parareal. Bottom: adaptive parareal. Left: errors of the fine solver at every fine time step. Right: maximum parareal error at each iteration.
Fig. 4: Van der Pol: speed-up relative to running a sequential fine solver as a function of the number of processors. Dashed lines: classical parareal. Solid lines: adaptive parareal.
Van der Pol: impact of the cost of the coarse solver. Speed-up and efficiency.
| Speed-up | Classical parareal | Adaptive parareal |
|---|---|---|
| With cost | 4.54 | 11.14 |
| Without cost | 6.61 | 32.63 |

| Efficiency | Classical parareal | Adaptive parareal |
|---|---|---|
| With cost | 11.35% | 27.8% |
| Without cost | 16.5% | 81.56% |
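The Van der Pol oscillator is a standard stiff test problem of the kind the abstract refers to: for a large damping parameter, an implicit (stiff) integrator is needed for the fine solve. The sketch below integrates it with SciPy's BDF method; the parameter value, time span, and initial condition are illustrative choices, not the paper's setup.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Van der Pol oscillator: y'' - mu * (1 - y^2) * y' + y = 0,
# written as a first-order system. Stiff for large mu.
# mu, time span, and initial state are illustrative, not from the paper.
mu = 1000.0

def vdp(t, y):
    return [y[1], mu * (1.0 - y[0] ** 2) * y[1] - y[0]]

# An implicit method such as BDF plays the role of the fine solver here;
# a loose-tolerance run of the same integrator could serve as coarse solver.
sol = solve_ivp(vdp, (0.0, 3000.0), [2.0, 0.0], method="BDF",
                rtol=1e-6, atol=1e-8)
```

The solution settles onto a relaxation limit cycle of amplitude close to 2, with sharp transition layers that force very small time steps, which is exactly the regime where adaptive tolerance control across parareal iterations pays off.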