2507.14467
Learning Stochastic Hamiltonian Systems via Stochastic Generating Function Neural Network
Chen Chen, Lijin Wang, Yanzhao Cao, Xupeng Cheng
incomplete · medium confidence
- Category: Not specified
- Journal tier: Specialist/Solid
- Processed: Sep 28, 2025, 12:56 AM
- arXiv Links: Abstract · PDF
Audit review
The paper states, with a citation to prior literature, the standard theorem that a map defined implicitly by a type-1 stochastic generating function S(P,q,ω) is symplectic whenever ∂²(Pᵀq+S)/∂P∂q is invertible almost surely, but it supplies no proof, deferring instead to the cited source (its Theorem 3.1). The candidate solution, by contrast, gives a complete and correct proof via exterior forms and a block-matrix check, under the same smoothness and invertibility assumptions. The model's proof is therefore correct and fills in the details the paper omits.
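For reference, here is a minimal sketch of the exterior-forms argument the audit describes, under the stated assumptions (S(P,q,ω) twice continuously differentiable in (P,q) for a.e. ω, and ∂²(Pᵀq+S)/∂P∂q = I + S_Pq invertible a.s.). The generating relations below follow the standard type-1 convention and are an assumption about the paper's notation, not a quotation.

```latex
% Sketch: the map (p,q) -> (P,Q) defined implicitly by a type-1
% stochastic generating function S(P,q,\omega) via
%   p = P + \partial_q S(P,q,\omega), \qquad Q = q + \partial_P S(P,q,\omega)
% is symplectic for a.e. \omega.
\begin{align*}
\sum_i dp_i \wedge dq_i
  &= \sum_i dP_i \wedge dq_i
   + \sum_{i,j} \frac{\partial^2 S}{\partial q_i \partial q_j}\, dq_j \wedge dq_i
   + \sum_{i,j} \frac{\partial^2 S}{\partial q_i \partial P_j}\, dP_j \wedge dq_i \\
  &= \sum_i dP_i \wedge dq_i
   + \sum_{i,j} \frac{\partial^2 S}{\partial P_j \partial q_i}\, dP_j \wedge dq_i,
\end{align*}
% since the S_{qq} term vanishes by symmetry of second derivatives.
% The same computation for dP \wedge dQ kills the symmetric S_{PP} term:
\begin{align*}
\sum_i dP_i \wedge dQ_i
  &= \sum_i dP_i \wedge dq_i
   + \sum_{i,j} \frac{\partial^2 S}{\partial P_i \partial q_j}\, dP_i \wedge dq_j .
\end{align*}
% Relabeling (i,j) shows the two cross terms coincide, hence
%   \sum_i dp_i \wedge dq_i = \sum_i dP_i \wedge dQ_i  almost surely.
% A.s. invertibility of \partial^2(P^\top q + S)/\partial P\,\partial q
% = I + S_{Pq}, together with the implicit function theorem, makes
% (P,Q) a well-defined (local) function of (p,q).
```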
Referee report (LaTeX)
\textbf{Recommendation:} minor revisions

\textbf{Journal Tier:} specialist/solid

\textbf{Justification:} The manuscript builds a neural approach that preserves symplectic structure by learning stochastic generating functions. The central geometric statement (a type-1 generating function implies a symplectic map) is standard and correctly cited, but no proof is shown in the paper. For completeness and reader guidance, a short proof or appendix sketch should be added. The numerical experiments convincingly support the approach. Hence, minor revisions focused on exposition and self-containment are appropriate.
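To make the referee's description concrete, here is a minimal, hypothetical sketch (in JAX) of the general idea: parameterize a generating function with a network and recover a structure-preserving one-step map from it implicitly. The MLP architecture, the `init_params`/`S`/`step` names, the scalar-noise input, and the fixed-point solver are all illustrative assumptions, not the paper's implementation.

```python
import jax
import jax.numpy as jnp

# Hypothetical sketch: an MLP S_theta(P, q, dw) standing in for a learned
# stochastic generating function; dw is a sampled Wiener increment over
# one step. Architecture and names are illustrative, not the paper's.
def init_params(key, dim, width=64):
    k1, k2, k3 = jax.random.split(key, 3)
    return {
        "W1": jax.random.normal(k1, (2 * dim + 1, width)) * 0.1,
        "b1": jnp.zeros(width),
        "W2": jax.random.normal(k2, (width, width)) * 0.1,
        "b2": jnp.zeros(width),
        "w3": jax.random.normal(k3, (width,)) * 0.1,
    }

def S(params, P, q, dw):
    """Scalar generating function S_theta(P, q, dw)."""
    x = jnp.concatenate([P, q, jnp.atleast_1d(dw)])
    h = jnp.tanh(x @ params["W1"] + params["b1"])
    h = jnp.tanh(h @ params["W2"] + params["b2"])
    return h @ params["w3"]

dS_dP = jax.grad(S, argnums=1)  # partial S / partial P
dS_dq = jax.grad(S, argnums=2)  # partial S / partial q

def step(params, p, q, dw, n_iter=30):
    """One learned map (p, q) -> (P, Q), defined implicitly by the
    type-1 relations  p = P + dS/dq(P, q),  Q = q + dS/dP(P, q),
    solved here by naive fixed-point iteration (a contraction when
    the gradients of S are small, i.e. for small step sizes)."""
    P = p
    for _ in range(n_iter):
        P = p - dS_dq(params, P, q, dw)
    Q = q + dS_dP(params, P, q, dw)
    return P, Q

# Usage: one step of the learned flow for a 2-degree-of-freedom system.
key = jax.random.PRNGKey(0)
params = init_params(key, dim=2)
p0, q0 = jnp.array([0.5, -0.3]), jnp.array([1.0, 0.2])
dw = 0.1 * jax.random.normal(jax.random.PRNGKey(1), ())  # scalar noise sample
P1, Q1 = step(params, p0, q0, dw)
```

The appeal of this construction is that the map is symplectic for any parameter value (up to solver tolerance), so training only has to fit the dynamics rather than enforce the structure.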