Machine Learning Research Benchmark Development

High-performance computing technology and software workloads are changing rapidly as machine learning matures, and benchmarks are a necessary part of evaluating new systems. Current scientific and machine learning benchmarks (e.g., MLPerf) are a useful baseline, but they do not reflect many common scientific use cases: scientific machine learning codes may use different data structures, encounter high-curvature loss landscapes, require integration into highly optimized numerical codes, and have unique requirements relating to conservation properties and invariances.
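As a purely illustrative sketch (not drawn from the benchmarks discussed in this talk), one way such requirements can surface in practice is as a soft conservation penalty added to an otherwise standard training loss; the function name, the choice of conserved quantity, and the penalty weight below are all hypothetical:

```python
# Hypothetical sketch: a training loss that penalizes violations of a global
# conservation law (here, the sum of a predicted field on a fixed grid).
# Real scientific ML codes would use proper quadrature weights and the
# physically relevant conserved quantity.
import torch

def conservative_loss(y_pred: torch.Tensor, y_true: torch.Tensor,
                      penalty_weight: float = 0.1) -> torch.Tensor:
    # Standard data-fitting term.
    data_loss = torch.mean((y_pred - y_true) ** 2)
    # Penalize any drift in the conserved total relative to the reference field.
    conservation_gap = (y_pred.sum(dim=-1) - y_true.sum(dim=-1)) ** 2
    return data_loss + penalty_weight * conservation_gap.mean()
```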

In this talk, we will give an overview of recent progress at HPCMP in developing and deploying machine learning benchmarking tasks. We will introduce the tasks, discuss how these benchmarks exercise new GPU features and reflect the machine learning workloads common in current data-driven computational physics and fluid dynamics research, and present preliminary results on several HPCMP systems.

PRESENTER

Sharma, Alisha
alisha.j.sharma.civ@us.navy.mil
518-635-0533

Naval Research Laboratory

CO-AUTHOR

Stehley, Talya
talya.h.stehley.civ@us.navy.mil

CATEGORY

Artificial Intelligence / Machine Learning usage for HPC Applications

SYSTEMS USED

Multiple

SECRET

No