Reinforcement Learning for Structural Health Monitoring based on Inspection Data
Simon Pfingstl, Yann Niklas Schoebel, Markus Zimmermann
Abstract. Due to the uncertainty associated with fatigue, mechanical structures often have to be inspected, especially in aerospace. To reduce the inspection effort, fatigue behavior can be predicted from measurement data with supervised learning methods, such as neural networks, or with particle filters. Good predictions, however, require large amounts of data, while often only a small number of sensors is available to collect them, e.g., on airplanes due to weight limitations. This paper presents a method in which data collected during an inspection is used to update the optimal inspection interval. For this purpose, we describe structural health monitoring (SHM) as a Markov decision process and use reinforcement learning to decide when to inspect next and when to decommission the structure before failure. To handle the infinite state space of the SHM decision process, we use two different regression models, namely neural networks (NN) and k-nearest neighbors (KNN), and compare them to the state-of-the-art deep Q-learning approach. The models are applied to a set of crack growth data that is considered representative of the general damage evolution of a structure. The results show that reinforcement learning can be utilized for such a decision task, with the KNN model yielding the best performance.
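To make the formulation in the abstract concrete, the following minimal sketch casts inspection timing as a toy Markov decision process and learns Q-values with a k-nearest-neighbor approximation over the continuous state, in the spirit of kNN-based Q-learning. All quantities here (the crack-growth model, the costs and rewards, the state definition) are illustrative assumptions for demonstration, not the paper's actual setup or data.

```python
import random

random.seed(0)

ACTIONS = ["continue", "inspect", "decommission"]
CRITICAL = 1.0    # crack length at which the structure fails (illustrative)
R_OPERATE = 1.0   # reward per step of safe operation (illustrative)
C_INSPECT = 0.5   # cost of one inspection (illustrative)
C_FAILURE = 50.0  # penalty for failure in service (illustrative)

def grow(crack):
    # Toy stochastic crack growth; a stand-in for real crack-growth data.
    return crack + 0.02 + random.uniform(0.0, 0.04)

# KNN Q-approximation: per action, store (state, q) samples and average
# the K nearest to estimate Q at a new state. The agent's state is
# (crack length at last inspection, steps since last inspection), so the
# true crack length is only observed when an inspection is performed.
memory = {a: [] for a in ACTIONS}
K = 5

def dist(s, t):
    # Simple weighted distance over the two state components.
    return abs(s[0] - t[0]) + 0.05 * abs(s[1] - t[1])

def q_value(state, action):
    samples = memory[action]
    if not samples:
        return 0.0
    nearest = sorted(samples, key=lambda x: dist(x[0], state))[:K]
    return sum(q for _, q in nearest) / len(nearest)

def train(episodes=100, gamma=0.95, alpha=0.5, eps=0.2):
    for _ in range(episodes):
        crack = 0.0            # true (hidden) crack length
        state = (0.0, 0)       # what the agent knows
        done = False
        while not done:
            # epsilon-greedy action selection
            if random.random() < eps:
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q_value(state, a))
            if action == "decommission":
                reward, nxt, done = 0.0, state, True
            else:
                crack = grow(crack)
                if crack >= CRITICAL:
                    reward, nxt, done = -C_FAILURE, state, True
                elif action == "inspect":
                    reward, nxt = R_OPERATE - C_INSPECT, (crack, 0)
                else:
                    reward, nxt = R_OPERATE, (state[0], state[1] + 1)
            # Q-learning update, stored as a new KNN sample.
            target = reward if done else reward + gamma * max(
                q_value(nxt, a) for a in ACTIONS)
            old = q_value(state, action)
            memory[action].append((state, old + alpha * (target - old)))
            state = nxt

train()
# Inspect the learned Q-values for a structure whose last inspection
# found a crack close to the critical length.
for a in ACTIONS:
    print(a, round(q_value((0.9, 0), a), 2))
```

The sketch mirrors the decision structure described in the abstract: terminating by decommissioning always yields exactly zero, so the learned policy trades the reward of continued operation against the inspection cost and the failure penalty. Replacing the stored-sample average with a neural network regressor gives the NN variant the abstract compares against.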
Keywords: Reinforcement Learning, Structural Health Monitoring, Crack Growth, Inspection Timing
Published online 2/20/2021, 8 pages
Copyright © 2021 by the author(s)
Published under license by Materials Research Forum LLC., Millersville PA, USA
Citation: Simon Pfingstl, Yann Niklas Schoebel, Markus Zimmermann, Reinforcement Learning for Structural Health Monitoring based on Inspection Data, Materials Research Proceedings, Vol. 18, pp 203-210, 2021
The article was published as article 24 of the book Structural Health Monitoring
Content from this work may be used under the terms of the Creative Commons Attribution 3.0 licence. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.