Benchmarking Parallel Performance of Numerical MPI Packages

Description of the project: This project aims to set up parallel/HPC performance testing for our MPI numerical packages. It has two goals. First, to benchmark Debian MPI packages such as fenicsx (finite element computation), demonstrating the scalability of the software in cloud computing environments. Second, to provide a form of CI testing that ensures MPI scalability is maintained as MPI- or computation-related packages such as OpenMPI, PETSc and OpenBLAS are upgraded. We aim to set up benchmarking tests that run regularly and report across varying test conditions, for instance comparing performance with different BLAS implementations. It would be useful to quantify how well our HPC packages actually scale in cloud computing environments, to monitor for any drop in performance (e.g. with version updates), and to report their performance with the various BLAS alternatives.
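Scalability is usually quantified with strong-scaling metrics: speedup S(p) = T(p0)/T(p) and parallel efficiency E(p) = S(p)·p0/p, relative to the smallest measured run with p0 ranks. A minimal sketch of the computation (the timing values are illustrative placeholders, not real measurements):

```python
def scaling_metrics(times):
    """Compute strong-scaling metrics from measured wall-clock times.

    times: dict mapping MPI rank count -> wall time in seconds.
    Returns a dict mapping rank count -> (speedup, efficiency),
    relative to the smallest measured rank count.
    """
    base_p = min(times)            # reference run (e.g. 1 rank)
    base_t = times[base_p]
    return {
        p: (base_t / t, (base_t / t) * base_p / p)
        for p, t in sorted(times.items())
    }


if __name__ == "__main__":
    # Placeholder timings; in practice these would come from benchmark logs.
    measured = {1: 120.0, 2: 63.0, 4: 34.0, 8: 20.0}
    for p, (s, e) in scaling_metrics(measured).items():
        print(f"{p:3d} ranks: speedup {s:5.2f}, efficiency {e:6.1%}")
```

A CI check could then flag a regression whenever the efficiency at a given rank count drops below a stored baseline.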

Some packages already have benchmarks at hand; the FEniCS project, for instance, offers fenicsx-performance-tests (both prebuilt and from source). The outcome of the project will be a protocol for setting up and launching MPI CI testing on an appropriate cloud computing service, identifying which parameters (test size etc.) must be used to obtain meaningful numbers. A suggested tool for managing test parameters and results is ReFrame (https://reframe-hpc.readthedocs.io/en/stable/). The report format may be similar to https://fenics.github.io/performance-test-results/, with a web-based UI (i.e. buttons) to select different reports.
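ReFrame checks are small declarative Python classes that define how a benchmark is launched and which numbers are extracted from its output. A rough sketch of such a check for the fenicsx performance tests might look as follows; the executable name `dolfinx-scaling-test`, its options, and the output regexes are assumptions that would need to be verified against the actual Debian package:

```python
# Sketch of a ReFrame check (a configuration fragment run by the
# reframe CLI, not a standalone script).  Binary name, options and
# regexes below are assumptions to be adapted to the real benchmark.
import reframe as rfm
import reframe.utility.sanity as sn


@rfm.simple_test
class FenicsxStrongScaling(rfm.RunOnlyRegressionTest):
    descr = 'fenicsx-performance-tests strong-scaling check'
    valid_systems = ['*']
    valid_prog_environs = ['*']
    executable = 'dolfinx-scaling-test'        # assumed binary name
    executable_opts = ['--problem_type', 'poisson',
                       '--ndofs', '500000']    # assumed options
    nranks = parameter([1, 2, 4, 8])           # rank counts to sweep

    @run_after('init')
    def set_num_tasks(self):
        self.num_tasks = self.nranks

    @sanity_function
    def completed(self):
        # Assumed completion marker in the program output.
        return sn.assert_found(r'Norm of solution', self.stdout)

    @performance_function('s')
    def solve_time(self):
        # Assumed timing line; adapt the regex to the real output.
        return sn.extractsingle(r'Solve time:\s+(\S+)',
                                self.stdout, 1, float)
```

Run via `reframe -c this_file.py -r`, ReFrame would execute the check once per rank count, apply the sanity pattern, and record the extracted solve time, which can then be compared against stored reference values.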