2509.12733
On the model attractor in a high-dimensional neural network dynamics of reservoir computing
Miki U. Kobayashi, Kengo Nakai, Yoshitaka Saiki, Natsuki Tsutsumi
incomplete (high confidence)
- Category
- Not specified
- Journal tier
- Specialist/Solid
- Processed
- Sep 28, 2025, 12:57 AM
- arXiv Links
- Abstract ↗ · PDF ↗
Audit review
The paper establishes (A)–(C) only empirically: it computes Lyapunov spectra across spectral radii ρ and observes that λ1 ≈ Λ1 for all tested ρ; that λ2 ≈ Λ2 for small ρ, while for larger ρ one instead finds λ3 ≈ Λ2; and that the exponents restricted to the two-dimensional manifold match the actual ones. It then states these observations broadly (e.g., “for any spectral radius ρ”) without a proof or quantified hypotheses about training limits, manifold existence, or normal hyperbolicity. The candidate model offers a plausible mechanism via an invariant graph and a C^1-conjugacy, but it rests on unproven assumptions (an exact readout P∘Ψ = Id, an implicit bound ‖A‖ ≤ ρ, and persistence of the manifold for all ρ) and does not verify the domination condition needed for persistence. Hence both are directionally consistent but incomplete.
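The comparison λi vs. Λi above presumes the Hénon map's true Lyapunov spectrum (Λ1, Λ2) as a reference. A minimal sketch of the standard Benettin-type estimator, using QR re-orthonormalization of Jacobian products (the classical parameters a = 1.4, b = 0.3 and the step counts are illustrative choices, not taken from the paper):

```python
import numpy as np

def henon_lyapunov(a=1.4, b=0.3, n_steps=100_000, n_transient=1_000):
    """Estimate the Lyapunov spectrum (Lambda_1, Lambda_2) of the Henon map
    by QR re-orthonormalization of Jacobian products along an orbit."""
    x, y = 0.1, 0.1
    Q = np.eye(2)                    # orthonormal frame carried along the orbit
    sums = np.zeros(2)
    for i in range(n_transient + n_steps):
        J = np.array([[-2.0 * a * x, 1.0],   # Jacobian of (x, y) -> (1 - a x^2 + y, b x)
                      [b,            0.0]])
        x, y = 1.0 - a * x * x + y, b * x    # tuple RHS uses the pre-update x
        if i >= n_transient:                 # discard the transient before averaging
            Q, R = np.linalg.qr(J @ Q)
            sums += np.log(np.abs(np.diag(R)))
    return sums / n_steps            # (Lambda_1, Lambda_2) per iteration
```

For these parameters the estimator gives Λ1 ≈ 0.42 and Λ2 ≈ −1.62; their sum equals log b exactly, since |det J| = b at every step, which is a useful sanity check on any computed spectrum.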
Referee report (LaTeX)
\textbf{Recommendation:} major revisions\\
\textbf{Journal tier:} specialist/solid\\
\textbf{Justification:} Numerical evidence is strong and clearly supports a coherent geometric picture of reservoir dynamics vis-à-vis the Hénon map: a two-dimensional embedded manifold with restricted exponents matching the original ones and an interpretable reordering of the negative exponents as the spectral radius grows. However, several claims are stated globally (e.g., “for any ρ”) without proofs; the existence and smoothness (and normal hyperbolicity) of the manifold are not established; and dependencies on training length/regularization are not quantified. These points should be tempered or theoretically underpinned.