Modern computational fluid dynamics with Trixi.jl

Trixi.jl is a Julia package for adaptive high-order numerical simulations of conservation laws. It is designed to be simple to use for students and researchers, extensible for research and teaching, as well as efficient and suitable for high-performance computing.

Compiler-based automatic differentiation with Enzyme.jl

Difficulty: Medium (up to hard, depending on the chosen subtasks)

Project size: 175 hours or 350 hours, depending on the chosen subtasks

Enzyme.jl is the Julia frontend of Enzyme, a modern automatic differentiation (AD) framework working at the level of LLVM code. It can provide fast forward and reverse mode AD and, unlike some other AD packages, supports mutating operations. Since Trixi.jl relies on mutating operations and caches for performance, this feature is crucial to obtain an implementation that works efficiently both for simulation runs and for AD.
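To illustrate why support for mutation matters, here is a minimal sketch of reverse-mode AD through an in-place function with Enzyme.jl. The function `rhs!` is a hypothetical stand-in for the kind of mutating right-hand-side evaluation Trixi.jl uses; it is not part of the Trixi.jl API.

```julia
using Enzyme

# Hypothetical in-place function mimicking Trixi.jl's mutating style:
# writes the result into the preallocated buffer `du`.
function rhs!(du, u)
    @. du = -2.0 * u
    return nothing
end

u  = [1.0, 2.0, 3.0]
du = zeros(3)

# Shadow buffers for reverse mode: seed d_du with the gradient of the
# output functional (here, sum(du)); Enzyme accumulates into d_u.
d_u  = zeros(3)
d_du = ones(3)

Enzyme.autodiff(Reverse, rhs!, Const,
                Duplicated(du, d_du), Duplicated(u, d_u))
# d_u now holds d(sum(du))/du_i = -2 for each entry
```

Because Enzyme differentiates the LLVM code directly, the same preallocated caches can be reused across both the primal simulation and its derivative computation.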

The overall goal of this project is to create a working prototype of Trixi.jl (or a subset thereof) using Enzyme.jl for AD, and to support as many of Trixi's advanced features as possible, such as adaptive mesh refinement and shock capturing.

Possible subtasks in this project include

Related subtasks in this project not related directly to Enzyme.jl but using other packages include

This project is a good fit both for software engineers interested in numerical analysis and scientific machine learning and for students interested in pursuing graduate research in the field.

Recommended skills: Good knowledge of at least one numerical discretization scheme (e.g., finite volume, discontinuous Galerkin, finite differences); initial knowledge in automatic differentiation; preferably the ability (or eagerness to learn) to write fast code

Expected results: Contributions to state-of-the-art, production-quality automatic differentiation tools for Trixi.jl

Mentors: Hendrik Ranocha, Michael Schlottke-Lakemper

Exploration of GPU computing

Difficulty: Medium (to hard, depending on the chosen subtasks)

Project size: 175 hours or 350 hours, depending on the chosen subtasks

GPUs can provide considerable speedups compared to CPUs for computational fluid dynamics simulations of the kind performed in Trixi.jl. Julia provides several ways to implement efficient code on GPUs, such as CUDA.jl for Nvidia GPUs, AMDGPU.jl for AMD GPUs, and KernelAbstractions.jl, which provides a single frontend to generate code for multiple GPU backends. In this project, we will likely work with CUDA.jl due to its maturity, but other options can be explored later as well.
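As a flavor of the backend-agnostic approach, here is a minimal KernelAbstractions.jl sketch. The kernel itself (`axpy_kernel!`, a simple scaled update chosen for illustration, not a Trixi.jl routine) is written once and can then be instantiated for the CPU or, on suitable hardware, for a GPU backend such as `CUDABackend()` from CUDA.jl.

```julia
using KernelAbstractions

# A simple elementwise update, y .= a .* x .+ y, written with
# KernelAbstractions so the same code runs on CPU and GPU backends.
@kernel function axpy_kernel!(y, a, @Const(x))
    i = @index(Global)
    @inbounds y[i] = a * x[i] + y[i]
end

backend = CPU()          # swap for CUDABackend() on an Nvidia GPU
x = ones(Float64, 1024)
y = zeros(Float64, 1024)

kernel! = axpy_kernel!(backend)
kernel!(y, 2.0, x; ndrange = length(x))
KernelAbstractions.synchronize(backend)
# y now contains 2.0 in every entry
```

A kernel like this is deliberately trivial; the interesting work in the project lies in porting the actual discretization operators while keeping memory layout and kernel launch overhead under control.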

The goal of this project is to implement a working subset of the functionality of Trixi.jl on GPUs, starting with a basic numerical scheme on Cartesian meshes in 2D. Based thereon, there are a lot of possibilities for extensions to more complex geometries and sophisticated discretizations.

Possible subtasks in this project include

This project is a good fit both for software engineers interested in numerical analysis and scientific machine learning and for students interested in pursuing graduate research in the field.

Recommended skills: Background knowledge in numerical analysis, working knowledge about GPU computing, and the ability to write fast code

Expected results: Draft of a working subset of the functionality of Trixi.jl running efficiently on GPUs.

Mentors: Michael Schlottke-Lakemper, Hendrik Ranocha