Snapshot: Trixi.jl HPC performance tested for up to 61,440 MPI ranks

HPC numerical codes have largely been developed in Fortran or C because of their speed. More accessible languages, such as Matlab or Python, have found little use for such codes due to their lower performance. Julia offers both speed and accessibility for serial calculations and on small clusters, but how well a Julia/MPI code scales on large HPC facilities has not yet been tested.

Using the Julia code Trixi.jl, we test whether Julia scales well on large HPC clusters. Trixi.jl uses the MPI library for parallelization and is well suited for this test. Using resources of the Jülich Supercomputing Centre, we run simulations of a three-dimensional Taylor-Green vortex problem on up to 61,440 MPI ranks on 480 compute nodes. We compare the results with the Fortran code FLUXO and find that Trixi.jl scales well across all tested MPI rank counts and outperforms FLUXO.

Degrees-of-freedom updates per second as a function of the number of MPI ranks for Trixi.jl and FLUXO. CC BY-NC-ND
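
For orientation, the following is a minimal sketch of a Trixi.jl setup ("elixir") for the three-dimensional Taylor-Green vortex, written against Trixi.jl's publicly documented API. The polynomial degree, mesh refinement, fluxes, and final time are illustrative assumptions, not the settings of the 61,440-rank production runs. When such a script is launched under MPI (for example with MPI.jl's mpiexecjl), Trixi.jl distributes the mesh across the ranks without any changes to the script.

```julia
# Minimal Taylor-Green vortex elixir (illustrative parameters, not the production setup).
using OrdinaryDiffEq
using Trixi

equations = CompressibleEulerEquations3D(1.4)

# Standard compressible Taylor-Green vortex initial condition at Mach 0.1
function initial_condition_taylor_green_vortex(x, t, equations::CompressibleEulerEquations3D)
    A = 1.0   # velocity amplitude
    Ms = 0.1  # maximum Mach number
    rho = 1.0
    v1 =  A * sin(x[1]) * cos(x[2]) * cos(x[3])
    v2 = -A * cos(x[1]) * sin(x[2]) * cos(x[3])
    v3 = 0.0
    p = (A / Ms)^2 * rho / equations.gamma  # background pressure sets the Mach number
    p += rho * A^2 / 16 * (cos(2 * x[1]) * cos(2 * x[3]) + 2 * cos(2 * x[2]) +
                           2 * cos(2 * x[1]) + cos(2 * x[2]) * cos(2 * x[3]))
    return prim2cons(SVector(rho, v1, v2, v3, p), equations)
end

# Entropy-stable split-form DG discretization (assumed choice for this sketch)
solver = DGSEM(polydeg=3, surface_flux=flux_lax_friedrichs,
               volume_integral=VolumeIntegralFluxDifferencing(flux_ranocha))

# Periodic cube [-pi, pi]^3 discretized with a TreeMesh, which supports MPI partitioning
coordinates_min = (-1.0, -1.0, -1.0) .* pi
coordinates_max = ( 1.0,  1.0,  1.0) .* pi
mesh = TreeMesh(coordinates_min, coordinates_max,
                initial_refinement_level=3,
                n_cells_max=100_000)

semi = SemidiscretizationHyperbolic(mesh, equations,
                                    initial_condition_taylor_green_vortex, solver)

tspan = (0.0, 10.0)
ode = semidiscretize(semi, tspan)

summary_callback = SummaryCallback()                      # prints timings at the end
analysis_callback = AnalysisCallback(semi, interval=100)  # periodic error/integral output
stepsize_callback = StepsizeCallback(cfl=1.0)             # CFL-based time step control
callbacks = CallbackSet(summary_callback, analysis_callback, stepsize_callback)

sol = solve(ode, CarpenterKennedy2N54(williamson_condition=false),
            dt=1.0,  # overwritten by the CFL-based StepsizeCallback
            save_everystep=false, callback=callbacks)
summary_callback()
```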

Using p4est meshes for our simulations, we implemented flexible multiphysics coupling across interface boundaries in Trixi.jl. p4est meshes are much more flexible than structured meshes: they need not be rectangular or even simply connected. As a test case, we show two meshes here: one spells out the word "Trixi.jl" and the other is its complement. We couple an MHD system with an Euler system and initialize the domain with a linear pressure wave that travels to the left.
Coupled Euler-MHD system using two p4est meshes, showing the gas density at time t = 0. © University of Augsburg
Coupled Euler-MHD system using two p4est meshes, showing the gas density at time t = 34. © University of Augsburg
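
To make the initialization of the coupled test case concrete, here is a minimal sketch, assuming Trixi.jl's usual initial-condition convention, of how the same left-traveling pressure wave could be prescribed for both the Euler and the ideal GLM-MHD subdomains so that the state is continuous across the coupling interface. The pulse amplitude, width, position, and the choice of a zero background magnetic field are illustrative assumptions, not the values used in the simulations shown above.

```julia
# Sketch: one initial condition, specialized for both coupled systems (illustrative values).
using Trixi

# Small-amplitude (linear) pressure pulse moving to the left; the matching velocity
# perturbation -dp/(rho0*c0) selects the left-going acoustic characteristic.
function pressure_pulse(x1, gamma)
    rho0, p0 = 1.0, 1.0
    c0 = sqrt(gamma * p0 / rho0)             # background sound speed
    dp = 1.0e-3 * exp(-(x1 - 1.0)^2 / 0.1)   # Gaussian perturbation (assumed shape)
    rho = rho0 + dp / c0^2                   # isentropic density perturbation
    v1 = -dp / (rho0 * c0)                   # left-going wave
    p = p0 + dp
    return rho, v1, p
end

function initial_condition_pressure_wave(x, t, equations::CompressibleEulerEquations2D)
    rho, v1, p = pressure_pulse(x[1], equations.gamma)
    return prim2cons(SVector(rho, v1, 0.0, p), equations)
end

function initial_condition_pressure_wave(x, t, equations::IdealGlmMhdEquations2D)
    rho, v1, p = pressure_pulse(x[1], equations.gamma)
    # Zero background magnetic field (assumption), so both subdomains start from the
    # same physical state; psi is the GLM divergence-cleaning variable.
    B1, B2, B3, psi = 0.0, 0.0, 0.0, 0.0
    return prim2cons(SVector(rho, v1, 0.0, 0.0, p, B1, B2, B3, psi), equations)
end

# Evaluate the initial state at one point in each subdomain
u_euler = initial_condition_pressure_wave(SVector(0.5, 0.0), 0.0,
                                          CompressibleEulerEquations2D(1.4))
u_mhd = initial_condition_pressure_wave(SVector(0.5, 0.0), 0.0,
                                        IdealGlmMhdEquations2D(1.4))
```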
