2410.17199
On the control of recurrent neural networks using constant inputs
Cyprien Tamekue, Ruiqi Chen, ShiNung Ching
correct (medium confidence)
- Category
- Not specified
- Journal tier
- Specialist/Solid
- Processed
- Sep 28, 2025, 12:56 AM
- arXiv Links
- Abstract / PDF
Audit review
The paper’s Theorem III.18 establishes exactly the equivalence [Id − C_T(x_0) φ(T)] B u = C_T(x_0) (x_1 − Φ_T(x_0)) under the stated spectral/contraction hypotheses, and proves both directions using the endpoint expansion (Proposition III.19) and Jacobian identities (Lemma III.9), together with an invertibility lemma (Lemma III.21). The candidate solution reproduces the same condition via the same ingredients, differing mainly by introducing S(T) = ∫_0^T e^{-s A_T} ds as compact notation and sketching the converse by reversing the steps. Minor issues include an imprecise intermediate identity that redundantly writes the original integral together with −φ(T), and a loose reference to Lemma III.9 for converting the kernel directly to S(T); however, these do not affect the final condition. Overall, both are correct and closely aligned.
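For reference, a typeset form of the condition discussed above, using only the symbols already named in the audit (Id, C_T(x_0), φ(T), B, u, x_1, Φ_T(x_0), and the shorthand S(T)); the paper's precise spectral/contraction hypotheses are not restated here:

\[
\bigl[\mathrm{Id} - C_T(x_0)\,\varphi(T)\bigr]\, B u
  \;=\; C_T(x_0)\,\bigl(x_1 - \Phi_T(x_0)\bigr),
\qquad
S(T) := \int_0^T e^{-s A_T}\,\mathrm{d}s .
\]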
Referee report (LaTeX)
\textbf{Recommendation:} minor revisions \\
\textbf{Journal Tier:} specialist/solid \\
\textbf{Justification:} The main theorem gives a clean, verifiable condition for constant-input controllability in nonlinear RNNs and is proved with careful use of flow representations and endpoint expansions. The argument is sound and of applied interest. A small notational typo ($\zeta$ vs.\ $\varphi$) and a few implicit steps should be clarified for maximal readability.