Towards Performant Integration of Machine-Learned Surrogates with Simulations on HPCs
Boyer, Mathew (GDIT/HPCMP PET)
Co-Authors:
Brewer, Wesley
Dettwiller, Ian
Category:
Incorporation of GPUs/Accelerators into Physics-based Codes
The integration of machine learning with simulation is a growing trend; however, augmenting codes in a highly performant, distributed manner poses a software development challenge. In this work, we explore how to easily augment a legacy simulation code on a high-performance computing (HPC) system with a machine-learned surrogate model while incurring minimal slowdown and preserving scalability. Initial naïve augmentation attempts required extensive code modifications and still incurred substantial slowdown. This led us to explore inference serving techniques, which enable inference through drop-in function calls. We investigated TensorFlow Serving with gRPC and RedisAI with SmartRedis as server-client inference implementations, in which the deep learning platform runs as a persistent process on the GPUs of HPC compute nodes while the simulation, running on the CPUs, makes client calls. We evaluated inference performance at scale on Summit for several use cases, including rotorcraft aerodynamics, real-gas equations of state, and super-resolution techniques. Additionally, a machine-learned boundary condition was implemented in a CFD solver, in which the unsteady wake of a rotor modeled by a physics-informed neural network is injected as an inflow boundary condition. We will discuss key findings on performance across the Python, C++, and Fortran inference-client APIs and compare the results to ideal performance (in-process inference via the TensorFlow C API). The lessons learned may help researchers augment their own simulation codes in an optimal manner.
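The server-client pattern described above can be illustrated with a toy sketch: a persistent "inference server" process answers requests from a simulation acting as a client through a single drop-in function. This is a minimal stand-in using only the Python standard library and a hand-written linear "surrogate" — it is not the actual RedisAI/SmartRedis or TensorFlow Serving API, and the function and message names (`infer`, `surrogate`, the JSON fields) are illustrative assumptions, not names from the work.

```python
import json
import socket
import socketserver
import threading

def surrogate(x):
    # Stand-in for a machine-learned surrogate model: y = 2x + 1 elementwise.
    return [2.0 * v + 1.0 for v in x]

class InferenceHandler(socketserver.StreamRequestHandler):
    # Persistent server-side handler: reads one JSON request per line,
    # evaluates the surrogate, and writes back a JSON response.
    def handle(self):
        request = json.loads(self.rfile.readline())
        response = {"outputs": surrogate(request["inputs"])}
        self.wfile.write((json.dumps(response) + "\n").encode())

def infer(address, inputs):
    # Drop-in client call from the "simulation" side, loosely analogous to a
    # SmartRedis put_tensor / run_model / get_tensor round trip.
    with socket.create_connection(address) as sock:
        sock.sendall((json.dumps({"inputs": inputs}) + "\n").encode())
        return json.loads(sock.makefile().readline())["outputs"]

# Run the server as a persistent background process (here, a thread).
server = socketserver.ThreadingTCPServer(("127.0.0.1", 0), InferenceHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

result = infer(server.server_address, [1.0, 2.0, 3.0])
print(result)  # [3.0, 5.0, 7.0]
server.shutdown()
```

In the real deployments discussed in the abstract, the server process owns the GPUs and the model, and the round trip goes over gRPC (TensorFlow Serving) or the Redis protocol (RedisAI/SmartRedis) rather than this toy JSON-over-TCP exchange; the point is that the simulation code only needs the one client call.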