Bayesian Inference for Training of Long Short Term Memory Models in Chaotic Time Series Forecasting
Orjuela Cañón, Alvaro David
In time series forecasting, models are built from past observations of the same sequence. When a model learns from data in this setting, there is no additional information about the amount of noise present in the available data. In practice, one must work with finite, noisy datasets, which leads to uncertainty about the adequacy of the model. For this problem, Bayesian inference tools are preferable. A modified algorithm for training a long short-term memory (LSTM) recurrent neural network for time series forecasting is presented. This approach was chosen to improve forecasting of the original series, using an implementation based on minimization of the associated Kullback-Leibler information criterion. For comparison, a nonlinear autoregressive model implemented with a feedforward neural network is also presented. A simulation study was conducted to evaluate and illustrate the results, comparing this approach with Bayesian neural-network-based algorithms on artificial chaotic time series and showing an improvement in terms of forecasting errors. © Springer Nature Switzerland AG 2019.
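The paper's own LSTM implementation is not reproduced in this record. As a rough illustration of the underlying idea only, the sketch below trains a deliberately simplified one-weight autoregressive forecaster on a chaotic series (the logistic map) by stochastic minimization of a negative ELBO: data misfit plus the closed-form Kullback-Leibler divergence between a Gaussian variational posterior q(w) = N(mu, sigma²) and a standard-normal prior. All names, the linear forecaster, and the training hyperparameters are assumptions for illustration, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)

def logistic_map(n, r=3.9, x0=0.5):
    """Chaotic series x_{t+1} = r * x_t * (1 - x_t), values stay in (0, 1)."""
    x = np.empty(n)
    x[0] = x0
    for t in range(n - 1):
        x[t + 1] = r * x[t] * (1 - x[t])
    return x

def kl_gaussian(mu, sigma, prior_sigma=1.0):
    """Closed-form KL( N(mu, sigma^2) || N(0, prior_sigma^2) )."""
    return (np.log(prior_sigma / sigma)
            + (sigma**2 + mu**2) / (2 * prior_sigma**2) - 0.5)

series = logistic_map(200)
X, y = series[:-1], series[1:]  # one-step-ahead forecasting targets

# Variational parameters of q(w) = N(mu, sigma^2) for a single AR weight w.
mu, log_sigma = 0.0, -1.0
lr = 0.01

for step in range(500):
    sigma = np.exp(log_sigma)
    eps = rng.standard_normal()
    w = mu + sigma * eps                  # reparameterization trick
    resid = y - w * X                     # Gaussian likelihood, unit noise
    dloss_dw = -(resid * X).sum()         # grad of squared-error fit term
    # Monte Carlo gradients of the negative ELBO (fit term + KL term):
    dmu = dloss_dw + mu                   # dKL/dmu = mu for unit prior
    dlog_sigma = dloss_dw * sigma * eps + (sigma**2 - 1.0)  # chain rule + dKL
    mu -= lr * dmu / len(X)
    log_sigma -= lr * dlog_sigma / len(X)
```

After training, `mu` acts as a point forecast weight while `np.exp(log_sigma)` retains the posterior uncertainty that a purely deterministic fit would discard; in the paper this same KL-regularized objective is applied to the full set of LSTM weights rather than one coefficient.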
Brain, Feedforward neural networks, Forecasting, Inference engines, Long short-term memory, Recurrent neural networks, Time series, Time series analysis, Bayesian, Bayesian inference, Bayesian neural networks, Chaotic time series, Kullback-Leibler information, Modified algorithms, Nonlinear autoregressive models, Time series forecasting, Bayesian networks, Bayesian approximation