Development of a Virtual Cruise Guide Indicator (vCGI) for Rotorcraft
Brewer, Wesley (DoD HPCMP PET/GDIT)
Intersection of Digital Engineering and High Performance Computing/High End Computing
"VIRTUAL SENSOR" methods have been proposed for future Army rotorcraft platforms and upgrades as a means of monitoring or extending component lives. They could also enable mission planners in acquisition programs to assess performance with digital models before a new aircraft or upgrade is completed. Virtual sensor methods make use of data derived from flight tests, typically via machine learning techniques. Cruise guide indicators (CGI) used on helicopters today give pilots a visual indication of the stress loads on critical components during flight, based on on-aircraft strain gauges. In contrast, a virtual CGI (vCGI) takes in standard flight data and, based on training performed during flight test, virtually determines the stresses using a machine learning model. Fig. 1 shows an example of a cruise guide indicator used as a pilot instrument in a helicopter. There are three regions: normal operation (green), transient (yellow), and avoid (red). The CGI is computed as a function of both fore and aft fixed-link oscillatory loads.
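The zone logic above can be sketched as a small function. This is purely illustrative: the thresholds and the choice of combining the fore and aft loads by their maximum are placeholder assumptions, not actual aircraft limits or the certified CGI computation.

```python
def cgi_zone(fore_load, aft_load, transient=0.7, avoid=0.9):
    """Map fore/aft fixed-link oscillatory loads (normalized to a
    nominal limit) to a CGI color zone. The thresholds and the
    max() combination are illustrative placeholders only."""
    level = max(fore_load, aft_load)
    if level >= avoid:
        return "red"      # avoid region
    if level >= transient:
        return "yellow"   # transient region
    return "green"        # normal operation
```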
In this work, we investigate the development of two neural networks that model the fore and aft oscillatory loads, from which we derive the vCGI. We trained the models using data from 55 flight tests sampled at both 16 Hz and 250 Hz. The data consist of 74 features, such as pitch angle, radar altitude, engine fuel flow, rate of climb, and engine torque. We investigated several neural network architectures, including multi-layer perceptrons (MLP), temporal convolutional neural networks (TCNN), Long Short-Term Memory (LSTM) networks, and transformers, as well as ensembles of these models. We developed a model that achieves greater than 95% accuracy in predicting the cruise guide indicator over entire flight tests for a number of maneuvers, though it performs less well on more difficult maneuvers such as autorotation. Several technical approaches yielded significant improvements in the model: (1) applying Yeo and Johnson's power transformation to normalize the data before feeding it into the neural network, (2) converting the input features into overlapping time sequences, typically four samples per training sequence, (3) using KerasTuner with the HyperBand technique for hyperparameter optimization, and (4) implementing various dataset-culling techniques. We trained the models using NVIDIA V100 GPUs on the Onyx-MLA, SCOUT, and Vulcanite HPC systems. We will show how the model compares with full flight tests across a number of maneuvers, including pullup, dive, climb, hover, pushover, partial power descent, vertical takeoff, autorotation, doublets, and level-flight acceleration and deceleration.
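The Yeo-Johnson transformation in step (1) can be sketched as follows for a fixed exponent λ. In practice λ is fitted per feature (e.g. by maximum likelihood, as in scikit-learn's `PowerTransformer(method="yeo-johnson")`); the fixed-λ version here just shows the piecewise form of the transform, which handles negative values unlike the earlier Box-Cox transform.

```python
import numpy as np

def yeo_johnson(y, lam):
    """Yeo-Johnson power transform for a fixed lambda.
    Piecewise definition: a Box-Cox-like branch for y >= 0 and a
    mirrored branch (exponent 2 - lambda) for y < 0."""
    y = np.asarray(y, dtype=float)
    out = np.empty_like(y)
    pos = y >= 0
    if abs(lam) > 1e-12:
        out[pos] = ((y[pos] + 1.0) ** lam - 1.0) / lam
    else:                              # lambda == 0: log branch
        out[pos] = np.log1p(y[pos])
    if abs(lam - 2.0) > 1e-12:
        out[~pos] = -(((-y[~pos] + 1.0) ** (2.0 - lam) - 1.0) / (2.0 - lam))
    else:                              # lambda == 2: log branch
        out[~pos] = -np.log1p(-y[~pos])
    return out
```

With λ = 1 the transform reduces to the identity, which is a convenient sanity check.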
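Step (2), converting the features into overlapping time sequences, can be sketched as below. The stride-1 windowing and pairing of each window with the target at its final time step are assumptions about the setup; the four-sample window length matches the abstract.

```python
import numpy as np

def make_windows(X, y, window=4):
    """Convert a (T, F) feature series into overlapping sequences of
    `window` consecutive samples (stride 1 -> maximal overlap).
    Each sequence is paired with the target at its last time step,
    giving arrays shaped (T - window + 1, window, F) and
    (T - window + 1,) suitable for sequence models (TCNN, LSTM, ...)."""
    T = X.shape[0]
    Xw = np.stack([X[i:i + window] for i in range(T - window + 1)])
    yw = y[window - 1:]
    return Xw, yw
```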
This material is based upon work supported by, or in part by, the Department of Defense High Performance Computing Modernization Program (HPCMP) under User Productivity, Enhanced Technology Transfer, and Training (PET) contract # 47QFSA18K0111, Award PIID 47QFSA19F0058. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the DoD HPCMP.