SFUMATO#: a GPU accelerated code for Self-Gravitational Radiation Hydrodynamics Simulation with Adaptive Mesh Refinement
Hajime Fukushima, Tomoaki Matsumoto
TLDR
SFUMATO# is a new GPU-accelerated code for self-gravitational radiation hydrodynamics with AMR, featuring improved chemistry and thermal solvers.
Key contributions
- Implements new linearized implicit solvers for non-equilibrium chemistry and thermal evolution, validated against Newton-Raphson reference solutions.
- Evolves the dust temperature without iterative energy-balance calculations by incorporating the heat capacity of dust grains.
- Accelerates the chemistry solver by adopting an increased pseudo dust heat capacity (up to three orders of magnitude above the realistic value) without compromising accuracy.
- Supports multi-GPU execution via MPI; strong-scaling tests show that efficient performance requires keeping the self-gravity solver's cost comparable to that of the other components.
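The linearized implicit approach in the first bullet can be sketched on a toy rate equation. Everything below (the rate function, its coefficients, tolerances) is illustrative, not the paper's actual chemistry network: the point is that a single linearization of the backward-Euler system replaces the inner Newton-Raphson iteration, at the cost of a small per-step linearization error.

```python
# Toy stiff rate equation dy/dt = f(y) with equilibrium at y = 0.1
# (hypothetical rates, not the paper's chemistry network).
def f(y):
    return -1.0e3 * y * (y - 0.1)

def dfdy(y):
    return -1.0e3 * (2.0 * y - 0.1)

def linearized_implicit_step(y, dt):
    """One backward-Euler step with a single linearization:
    solve (1 - dt*J) * dy = dt * f(y) -- no inner iteration."""
    return y + dt * f(y) / (1.0 - dt * dfdy(y))

def newton_raphson_step(y, dt, tol=1e-12, max_iter=50):
    """Full backward Euler: iterate x - y - dt*f(x) = 0 to convergence."""
    x = y
    for _ in range(max_iter):
        dx = -(x - y - dt * f(x)) / (1.0 - dt * dfdy(x))
        x += dx
        if abs(dx) < tol * max(abs(x), 1e-30):
            break
    return x

# Both integrators relax from y = 1.0 toward the equilibrium y = 0.1;
# the linearized update tracks the Newton-Raphson reference closely.
y_lin = y_nr = 1.0
for _ in range(500):
    y_lin = linearized_implicit_step(y_lin, 1.0e-4)
    y_nr = newton_raphson_step(y_nr, 1.0e-4)
```

Both updates share the same fixed point (f(y) = 0), so the cheap single-linearization step converges to the same equilibrium as the iterated solve, which is the accuracy comparison the paper validates with test problems.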
Why it matters
SFUMATO# brings self-gravitational radiation hydrodynamics with AMR to GPUs. Its linearized implicit chemistry and thermal solvers and its heat-capacity-based dust temperature scheme reduce computational cost while preserving accuracy, and its scaling analysis offers practical guidance for configuring large multi-GPU runs.
Original Abstract
We present a new implementation of the SFUMATO code, called SFUMATO#, for solving self-gravitational radiation hydrodynamics problems using adaptive mesh refinement (AMR) with the CUDA/HIP programming frameworks. The code incorporates a multigrid solver for self-gravity, radiation transfer with M1 closure and reduced speed of light approximation, non-equilibrium chemistry, thermal evolution, and sink particle schemes. We develop new non-equilibrium chemistry and thermal solvers based on a linearized implicit method, whose accuracy is validated through a series of test problems by comparison with solutions obtained using the Newton-Raphson method. By incorporating the heat capacity of dust grains, the dust temperature can be evolved without iterative energy-balance calculations. From the perspective of computational cost, we demonstrate that adopting an increased pseudo dust heat capacity accelerates the chemistry solver while preserving accuracy, even when the value is increased by up to three orders of magnitude relative to the realistic value. In addition, we perform a suite of test problems to confirm the validity of the other components of our implementation. The code supports multi-GPU execution via MPI-based parallelization. We measure the strong-scaling performance of the hydrodynamics and self-gravity solvers on both uniform and AMR grids, as well as the overall code performance using a giant molecular cloud simulation. We find that the computational cost of the self-gravity solver increases with the number of MPI processes, indicating that efficient parallel performance is achieved only when the number of devices is chosen such that the cost of the self-gravity solver remains comparable to that of the other components.
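The dust-temperature scheme described in the abstract can be illustrated with a toy energy balance. All rates, exponents, and names below are assumptions for illustration, not the paper's model: evolving C_d dT_d/dt = heating − cooling with a linearized implicit step removes the per-cell iterative energy-balance solve, and inflating C_d (the "pseudo" heat capacity) only stretches the relaxation time while leaving the equilibrium temperature unchanged.

```python
def dust_step(Td, dt, C_d, heating=1.0, k_emit=1.0):
    """One linearized implicit step of C_d * dTd/dt = heating - k_emit*Td**6.
    (Toy grey-emission cooling; equilibrium at Td = (heating/k_emit)**(1/6),
    reached without iterating heating = cooling for Td each step.)"""
    rhs = (heating - k_emit * Td**6) / C_d
    jac = -6.0 * k_emit * Td**5 / C_d   # d(rhs)/dTd
    return Td + dt * rhs / (1.0 - dt * jac)

# The equilibrium is independent of the (pseudo) heat capacity; only the
# relaxation time scales with C_d, so dt can be enlarged in proportion.
Td_real, Td_pseudo = 0.5, 0.5
for _ in range(2000):
    Td_real = dust_step(Td_real, dt=0.01, C_d=1.0)        # realistic C_d
    Td_pseudo = dust_step(Td_pseudo, dt=10.0, C_d=1.0e3)  # C_d boosted 1000x
```

Because the update depends on dt/C_d, boosting the heat capacity by three orders of magnitude while enlarging the timestep by the same factor reproduces the same trajectory toward the same equilibrium, which is the intuition behind the paper's reported speed-up with preserved accuracy.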