2404.19626
Machine Learning of Continuous and Discrete Variational ODEs with Convergence Guarantee and Uncertainty Quantification
Christian Offen
correct (medium confidence)
- Category: Not specified
- Journal tier: Strong Field
- Processed: Sep 28, 2025, 12:56 AM
- arXiv Links: Abstract ↗ · PDF ↗
Audit review
The paper proves convergence of GP-learned Lagrangians by identifying the posterior mean with the unique minimal-RKHS-norm function satisfying finitely many bounded linear constraints (Euler–Lagrange/DEL conditions plus a gauge normalization), and then showing that the minimizers over the nested affine constraint sets converge strongly to the minimizer over the infinite-constraint set. This is formalized in Lemma 6.4 (boundedness of Φ^(∞)), the construction of the affine sets A^(j) and A^(∞), and Proposition 6.7 (strong convergence), with the identification of conditional means given by Theorem A.10; see Theorem 6.1 and its proof for the continuous case and Theorem 6.9 for the discrete case. The candidate solution uses the same mathematical backbone (minimal-norm RKHS interpolants coincide with GP posterior means) but proves convergence via Fejér monotonicity and Cauchy arguments for projections, rather than the paper's weak-compactness/lower-semicontinuity route. It also correctly interprets the dynamics constraint as EL(L^(∞))(x, ẋ, ẍ) = 0 along ẍ = g_ref(x, ẋ), matching the paper's statement. Both treatments assume the same boundedness and embedding conditions and impose the same normalization map Φ_N (continuous case: (∂L/∂ẋ(x_b), L(x_b)); discrete analogue), as defined in Section 4. Overall, both are correct; the proofs differ only in the convergence step.
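As a schematic restatement of the shared backbone (generic RKHS/GP notation; the symbols \mathcal{H}_k and c_b and the exact form of the constraint functionals are illustrative, not the paper's definitions), the object both proofs work with is
\[
L^{(N)} \;=\; \operatorname*{arg\,min}_{L \in \mathcal{H}_k} \|L\|_{\mathcal{H}_k}
\quad \text{subject to} \quad
\mathrm{EL}(L)\bigl(x_n, \dot x_n, g_{\mathrm{ref}}(x_n, \dot x_n)\bigr) = 0 \;\; (n = 1, \dots, N),
\qquad \Phi_N(L) = c_b,
\]
where the finitely many constraints are bounded linear functionals on \mathcal{H}_k, so L^{(N)} coincides with the GP posterior mean conditioned on those observations; the convergence claim is that L^{(N)} \to L^{(\infty)} strongly in \mathcal{H}_k, with L^{(\infty)} the minimal-norm element of the intersection of the nested affine sets A^{(j)}.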
Referee report (LaTeX)
\textbf{Recommendation:} minor revisions \\
\textbf{Journal Tier:} strong field \\
\textbf{Justification:} The paper provides a rigorous convergence theory for GP-based learning of Lagrangians, handling the intrinsic gauge ambiguity and enabling uncertainty quantification for linear observables. The mathematical framework is solid and the proofs are correct under clear assumptions. Minor notational inconsistencies could be improved, but they do not affect correctness.
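Note on the uncertainty-quantification claim: for a bounded linear observable \Phi of a GP prior L \sim \mathcal{GP}(0, k) conditioned on linear observations \Lambda L = y, the standard conditioning identities (generic GP formulas, not the paper's notation; the paper's Theorem A.10 supplies the precise identification) read
\[
\mathbb{E}[\Phi L \mid \Lambda L = y] = (\Phi \otimes \Lambda)k \,\bigl((\Lambda \otimes \Lambda)k\bigr)^{-1} y,
\qquad
\operatorname{Var}[\Phi L \mid \Lambda L = y] = (\Phi \otimes \Phi)k - (\Phi \otimes \Lambda)k \,\bigl((\Lambda \otimes \Lambda)k\bigr)^{-1} (\Lambda \otimes \Phi)k,
\]
where each functional acts on the indicated argument of the kernel k.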