abstract
- This study aims to develop stable learning laws (in the sense of Lyapunov) for a general class of multilayer recurrent neural network (RNN) non-parametric identifiers. The application of control Lyapunov functions in the discrete-time domain ensures ultimate boundedness of the identification error, forcing it into a bounded set whose size depends on the power of the parametric uncertainties and the modeling error. This work presents a general algorithm to design RNNs with n layers: one input layer, n-2 hidden layers, and one output layer. Numerical simulations support the theoretical results and show the advantages of increasing the number of layers in the RNN structure. A first example identifies the unknown states of the Van der Pol oscillator. A second example demonstrates the behavior of the RNN on a third-order system with highly nonlinear dynamics describing the ozonation of a single contaminant. In both cases, the developed learning laws succeed in estimating the uncertain dynamics. Moreover, the numerical simulations show how the identification error decreases as the number of layers increases.
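
The sketch below illustrates, under simplifying assumptions, the kind of multilayer RNN identifier the abstract describes, applied to the first benchmark (the Van der Pol oscillator). The layer sizes, the learning rate, the feedback gain `a`, and the plain gradient-like weight update are all hypothetical placeholders for the paper's Lyapunov-based learning laws, not the authors' actual design.

```python
import numpy as np

# Van der Pol oscillator discretized with a forward-Euler step
# (the first benchmark mentioned in the abstract); mu and dt are assumed values.
def van_der_pol_step(x, mu=1.0, dt=0.01):
    x1, x2 = x
    return x + dt * np.array([x2, mu * (1.0 - x1 ** 2) * x2 - x1])

class MultilayerRNNIdentifier:
    """Minimal n-layer RNN identifier sketch: one input layer, n-2 tanh hidden
    layers, and a linear output layer, trained with a simple gradient-like
    update (a stand-in for the paper's Lyapunov-based learning laws)."""

    def __init__(self, n_states=2, hidden=(8, 8), lr=0.05, a=0.9, seed=0):
        rng = np.random.default_rng(seed)
        sizes = [n_states, *hidden, n_states]
        self.W = [0.1 * rng.standard_normal((sizes[i + 1], sizes[i]))
                  for i in range(len(sizes) - 1)]
        self.lr = lr   # learning rate (assumed)
        self.a = a     # stable linear feedback gain of the identifier (assumed)

    def _forward(self, x):
        acts = [x]
        for W in self.W[:-1]:
            acts.append(np.tanh(W @ acts[-1]))
        return self.W[-1] @ acts[-1], acts

    def step(self, x_k, x_next_meas):
        # series-parallel identifier: x_hat(k+1) = a*x(k) + NN(x(k))
        nn_out, acts = self._forward(x_k)
        x_hat_next = self.a * x_k + nn_out
        e = x_hat_next - x_next_meas          # identification error
        # backpropagated gradient step through all layers
        delta = e
        for i in reversed(range(len(self.W))):
            grad_W = np.outer(delta, acts[i])
            if i > 0:
                delta = (self.W[i].T @ delta) * (1.0 - acts[i] ** 2)
            self.W[i] -= self.lr * grad_W
        return x_hat_next, e

# Identify the unknown Van der Pol dynamics from simulated state measurements.
x = np.array([2.0, 0.0])
ident = MultilayerRNNIdentifier(hidden=(8, 8))
for k in range(20000):
    x_next = van_der_pol_step(x)
    _, e = ident.step(x, x_next)
    x = x_next
print("final identification error norm:", np.linalg.norm(e))
```

Adding hidden layers is done here by lengthening the `hidden` tuple, which mirrors the abstract's point that deeper structures can reduce the identification error; the boundedness guarantees themselves come from the paper's learning laws, not from this illustrative gradient rule.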