2407.07642
Machine learning of discrete field theories with guaranteed convergence and uncertainty quantification
Christian Offen
correct (high confidence)
- Category: Not specified
- Journal tier: Specialist/Solid
- Processed: Sep 28, 2025, 12:56 AM
- arXiv Links: Abstract / PDF
Audit review
The paper proves convergence of the constrained GP posterior means by casting them as the unique minimizers of a norm functional in a uniformly convex, reflexive Banach space and upgrading weak convergence to strong convergence; the candidate solution proves the same result directly in the RKHS setting via Fejér-monotone projections onto nested affine subspaces and the Kimeldorf–Wahba identification. The assumptions align and both arguments are valid; they differ mainly in technique and generality.
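A schematic restatement of the shared starting point may help orient the comparison; the notation here ($m_N$, $L_i$, $y_i$, $\mathcal{H}_k$) is generic placeholder notation, not the paper's own symbols:

% Both arguments start from the GP posterior mean as a minimum-norm interpolant.
\[
  m_N \;=\; \operatorname*{arg\,min}_{f \in \mathcal{H}_k} \;\|f\|_{\mathcal{H}_k}
  \quad \text{subject to} \quad L_i f = y_i, \quad i = 1, \dots, N.
\]
% Paper's route: the ambient space is uniformly convex and reflexive, so weak
% convergence of (m_N) together with convergence of the norms upgrades to
% strong convergence (Radon-Riesz property).
% Candidate's route: the constraint sets C_N = { f : L_i f = y_i, i <= N } are
% nested affine subspaces, m_N is the projection of 0 onto C_N, and Fejer
% monotonicity of these projections yields strong convergence directly.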
Referee report (LaTeX)
\textbf{Recommendation:} minor revisions

\textbf{Journal Tier:} specialist/solid

\textbf{Justification:} The work makes a careful, well-justified extension of GP/RKHS methods to identify discrete Lagrangian densities, with a correct and clean convergence proof. The paper is technically sound, appropriately cited, and the results are of interest to a specialized community in geometric numerical methods and data-driven PDE identification. Minor notational clarifications and a brief self-contained summary of the RKHS conditional-mean/minimizer equivalence would improve accessibility.
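Since the report requests a self-contained summary of the conditional-mean/minimizer equivalence, the intended statement is presumably the standard Kimeldorf–Wahba identity, sketched here in generic notation (again placeholders, not the paper's symbols):

% Kimeldorf-Wahba (schematic): for a centered GP with kernel k and noise-free
% linear observations L_i f = y_i, the posterior mean is the minimum-norm
% interpolant in the associated RKHS H_k.
\[
  \mathbb{E}\bigl[f(x) \mid L_1 f = y_1, \dots, L_N f = y_N\bigr]
  \;=\; \sum_{i=1}^{N} \alpha_i \, L_i\bigl[k(x, \cdot)\bigr],
  \qquad \alpha = G^{-1} y, \quad G_{ij} = L_i L_j k,
\]
% where L_i acts on the first argument of k and L_j on the second; this
% element is exactly the solution of
%   min ||f||_{H_k}  subject to  L_i f = y_i,  i = 1, ..., N.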