Overview of the OCNS as a “small model” for time-series prediction.
Caption
(a) The framework of the OCNS resembles that of an autoencoder. For an observed high-dimensional vector Xt, a latent delay vector Zt, comprising the dynamics of a one-dimensional delay dynamical system zt, is constructed via the input weight A and the OCN Φ through a delay-embedding scheme. The delay vector Zt at time t carries the latent temporal information of the delay dynamical system zt, which can topologically reconstruct all the dynamics of the original system Xt. With the output weight B, the original spatial information Xt can then be recovered from Zt. The OCN Φ, which generates the delay dynamical system zt with D delayed feedbacks using a single neuron, is the core of the OCNS.

(b) Grounded in the theory of delay dynamical systems and the delay embedding theorem, the information flow of the OCNS is dictated by the OCNS-based STI equations, which comprise the primary and conjugate STI equations. Here, the researchers build the delay vector Zt+i = [zt+i-S+1, zt+i-S+2, …, zt+i]' ∈ ℝS at time t + i, where i = 0, 1, 2, …, and S is the delay-embedding dimension. Specifically, the input weight A and the OCN Φ transform the spatial information of the original attractor A into the temporal information of the delayed attractor N, corresponding to the primary STI equation, while the conjugate STI equation represents the reconstruction and prediction of the original system, constrained to attractor A, from the delayed attractor N through the output weight B. In this way, the OCNS effectively consists of an RNN with one neuron and two linear layers.
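The pipeline the caption describes (input weight A projecting Xt onto a single delayed neuron, whose last S states form Zt, followed by a linear readout B) can be sketched minimally as below. This is an illustrative toy, not the authors' implementation: the dimensions, the tanh activation, the feedback weights w, and the random stand-in trajectory X are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 8   # dimension of the observed vector Xt (assumed for illustration)
S = 5   # delay-embedding dimension S from the caption
T = 50  # number of time steps

A = rng.normal(size=n)        # input weight A: maps R^n -> R (scalar drive)
B = rng.normal(size=(n, S))   # output weight B: maps R^S -> R^n (readout)
w = rng.normal(size=S) * 0.3  # assumed feedback weights over delayed states

X = rng.normal(size=(T, n))   # stand-in observed trajectory (not real data)

z = np.zeros(T)               # scalar state of the single neuron
for t in range(1, T):
    # one neuron driven by its own delayed states plus the projected input
    delays = z[max(0, t - S):t]
    feedback = w[:len(delays)] @ delays
    z[t] = np.tanh(feedback + A @ X[t])

# delay vector Zt = [z_{t-S+1}, ..., z_t]' at the final time step
Z_t = z[T - S:T]
x_hat = B @ Z_t               # linear reconstruction of Xt from Zt
print(Z_t.shape, x_hat.shape)  # (5,) (8,)
```

The single recurrent neuron plus two linear maps mirrors the caption's summary of the OCNS as "an RNN with one neuron and two linear layers"; a trained system would fit A, B, and the feedback weights rather than draw them at random.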
Credit
©Science China Press
Usage Restrictions
Use with credit.
License
Original content