arXiv:2411.06311

When are dynamical systems learned from time series data statistically accurate?

Jeongjin Park, Nicole Tianjiao Yang, Nisha Chandramoorthy

Verdict: correct (medium confidence)
Category: Not specified
Journal tier: Strong Field
Processed: Sep 28, 2025, 12:56 AM

Audit review

The paper’s Theorem 1 shows that C1 strong generalization, together with a typicality-of-shadowing assumption, implies that the empirical measures of F_nn-orbits converge in W1 to within ε of the physical measure with high probability. The proof invokes a shadowing result (Proposition 1) and then a triangle-inequality argument relating the neural empirical measure first to that of a shadowing F-orbit and then to μ. The candidate solution reproduces this logic, making the coupling explicit via a one-step shift and tracking the finite-horizon/n→∞ subtlety arising from the strong-generalization horizon n(ε0, ε1, m) → ∞ as m → ∞. Apart from minor index and notation differences (also present in the paper’s exposition of x^sh), the arguments agree in substance: the candidate’s bound W1 ≤ ε0 + L·ω(ε0, ε1) + D/n is a more explicit version of the δ/2 bound used in the paper’s proof, and the “limsup along horizons” clarification makes explicit a step the paper leaves implicit. The shadowing construction and its assumptions (uniform hyperbolicity, Lipschitz dF, a contraction-mapping argument) are consistent with the paper’s Appendix A.1 proof sketch. Overall, both arguments are correct and share essentially the same structure.
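The triangle-inequality decomposition described above can be sketched as follows. The notation here is illustrative: \(\hat\mu_n^{\mathrm{nn}}\) and \(\hat\mu_n^{\mathrm{sh}}\) denote the length-\(n\) empirical measures of the \(F_{\mathrm{nn}}\)-orbit and of the shadowing \(F\)-orbit, \(d\) the state-space metric, and \(D\) a diameter bound; attributing the \(D/n\) term to the one-step index shift is our reading of the candidate's coupling, not a claim about the paper's own proof.

\begin{align*}
W_1\!\big(\hat\mu_n^{\mathrm{nn}},\,\mu\big)
  &\le W_1\!\big(\hat\mu_n^{\mathrm{nn}},\,\hat\mu_n^{\mathrm{sh}}\big)
     + W_1\!\big(\hat\mu_n^{\mathrm{sh}},\,\mu\big),\\[4pt]
W_1\!\big(\hat\mu_n^{\mathrm{nn}},\,\hat\mu_n^{\mathrm{sh}}\big)
  &\le \frac{1}{n}\sum_{k=0}^{n-1} d\!\big(x_k^{\mathrm{nn}},\,x_k^{\mathrm{sh}}\big)
   \;\le\; \varepsilon_0 + L\,\omega(\varepsilon_0,\varepsilon_1) + \frac{D}{n},
\end{align*}

where the middle inequality uses the coupling of the two empirical measures by matched (shifted) indices, and the \(D/n\) term absorbs the single unmatched point of the one-step shift. The remaining term \(W_1(\hat\mu_n^{\mathrm{sh}},\mu)\) vanishes along horizons because the shadowing orbit is a genuine \(F\)-orbit, whose empirical measures converge weakly to the physical measure \(\mu\) for typical initial conditions.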

Referee report (LaTeX)

\textbf{Recommendation:} minor revisions

\textbf{Journal Tier:} strong field

\textbf{Justification:}

The submission bridges generalization in Neural ODEs with ergodic-theoretic statistical accuracy via shadowing. The core theorem is well-motivated, correctly proved under stated assumptions, and backed by numerics. Minor expository issues (limit along horizons, a small notation misalignment in the shadowing orbit, and an implicit shadowing-size modulus) are easily addressed and do not detract from correctness.