arXiv:2409.19278

Explicit Construction of Recurrent Neural Networks Effectively Approximating Discrete Dynamical Systems

Chikara Nakayama, Tsuyoshi Yoneda

Verdict
Correct (high confidence)
Category
Not specified
Journal tier
Specialist/Solid
Processed
Sep 28, 2025, 12:56 AM

Audit review

The paper’s Theorem 1 states that, for a time series generated by a delay-coordinate map Φ with finite maximal Lyapunov exponent λ and a recurrence (recursivity) assumption, one can explicitly construct an RNN, using only past data, with N ≤ KL hidden units that achieves |ŷ(t) − y(t)| ≤ (2t+1) e^{λt} √(LC)/K for all t ≥ 0. The PDF states and proves this in full: the theorem statement and setup, the discretization/dictionary construction, the key error-accumulation estimate, and the explicit RNN defined via X, W := YX^{-1}, W_in, W_out, with ŷ ≡ y* shown by induction.

The candidate solution reproduces the final inequality but argues via universal approximation of the last coordinate and an “exact delay line,” then simply sets ε = √(LC)/K with N ≤ KL. Replacing the discretization error with a uniform-approximation error of order 1/K, achieved with only O(KL) hidden units, is not justified without an approximation-rate guarantee linking the network width M to the error ε: the universal approximation theorem (UAT) ensures density but gives no 1/K rate, and nothing ties M ≤ O(K) to ε = Θ(1/K). The paper’s bound comes from a quantization/dictionary argument, not UAT, together with a detailed propagation inequality; the candidate’s proof omits the critical rate argument and therefore does not establish the claimed scaling with N ≤ KL. Hence: the paper is correct, and the model’s proof, as written, is incomplete on the key rate claim.
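A minimal numerical sketch of the snapshot algebra behind the explicit construction, under stated assumptions: the logistic map stands in for the paper’s delay-coordinate dynamics, Python/NumPy is the illustration language, and W is obtained by least squares (which coincides with the paper’s W := YX^{-1} when X is square and invertible). This covers only the snapshot step, not the paper’s quantized dictionary of size N ≤ KL; every name in the code is illustrative rather than taken from the paper.

import numpy as np

# Logistic map as an illustrative stand-in for the paper's dynamics.
def logistic(y0, n, r=3.9):
    y = np.empty(n)
    y[0] = y0
    for t in range(n - 1):
        y[t + 1] = r * y[t] * (1.0 - y[t])
    return y

L, T_train, T_test = 5, 400, 10   # delay length, past-data length, forecast steps
y = logistic(0.3, T_train + T_test)

# Delay-coordinate states s(t) = (y(t), y(t-1), ..., y(t-L+1)), past data only.
S = np.stack([y[t - L + 1 : t + 1][::-1] for t in range(L - 1, T_train)], axis=1)

# Snapshot pair: each column of Y_snap advances the same column of X_snap one step.
X_snap, Y_snap = S[:, :-1], S[:, 1:]

# The paper sets W := Y X^{-1} on a square, generically invertible dictionary;
# here a least-squares solve, identical in that square invertible case.
W = Y_snap @ np.linalg.pinv(X_snap)

# Iterate the learned recurrence and read the first coordinate as y_hat(t).
s = S[:, -1].copy()
for t in range(T_test):
    s = W @ s
    print(f"t={t + 1:2d}  y_hat={s[0]: .4f}  y_true={y[T_train + t]: .4f}")

Because this sketch fits a linear map directly on raw delay coordinates, the forecast degrades quickly for chaotic dynamics; the paper avoids that by lifting into the quantized dictionary before solving for W, which is what produces the (2t+1) e^{λt} √(LC)/K propagation bound.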

Referee report (LaTeX)

\textbf{Recommendation:} minor revisions

\textbf{Journal Tier:} specialist/solid

\textbf{Justification:}

This short note gives a concrete, constructive RNN that approximates delay-coordinate dynamics using only past data and proves a forward error bound. The approach is simple and algebraic, complementing standard UAT-based arguments. I found the result correct and of niche interest. Minor clarifications (notation, the generic invertibility claim for X, and a brief comment on activation choices) would improve readability.
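For the reader's convenience, a LaTeX restatement of the bound recorded in the audit above (symbols as used there: $\lambda$ the maximal Lyapunov exponent, $N \le KL$ the network size, $L$ and $K$ the parameters of the dictionary construction, $C$ a constant from the paper):

\[
  |\hat{y}(t) - y(t)| \;\le\; (2t+1)\, e^{\lambda t}\, \frac{\sqrt{LC}}{K}, \qquad t \ge 0, \quad N \le KL.
\]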