arXiv:2412.10821
Graph Attention Hamiltonian Neural Networks: A Lattice System Analysis Model Based on Structural Learning
Ru Geng, Yixian Gao, Jian Zu, Hong-Kun Zhang
- Status
- Incomplete (medium confidence)
- Category
- Not specified
- Journal tier
- Specialist/Solid
- Processed
- Sep 28, 2025, 12:56 AM
- arXiv Links
- Abstract ↗ · PDF ↗
Audit review
The paper informally asserts properties of the learned attention matrix A—most notably that symmetry of A implies an even-symmetric potential—without a precise definition of the trajectory-to-attention map or a proof. It also uses ad hoc thresholding and ratio inferences supported only by examples. The model solution supplies a concrete, identifiable map from trajectories to A and rigorously proves: (i) exact edge recovery under a gap; (ii) even-symmetric potential implies symmetric A, while the converse is false in general unless additional identifiability and modeling conditions hold; (iii) order-wise interaction ratios can be recovered on a 1D chain under normalization and shared shape assumptions. These results reconcile and formalize the paper’s empirical claims while correcting an over-strong converse claim.
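The gap-based edge recovery and symmetry test described above can be sketched concretely. This is a minimal illustration, not the paper's implementation: the function names (`recover_edges`, `is_symmetric`), the threshold `tau`, and the toy matrix are all assumptions introduced here for exposition.

```python
import numpy as np

def recover_edges(A, tau):
    """Recover an undirected edge set from an attention-like matrix A
    by keeping entries whose magnitude exceeds a threshold tau.
    Exact recovery holds when true-edge weights and noise entries are
    separated by a gap straddled by tau. Returns pairs (i, j), i < j."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    S = 0.5 * (np.abs(A) + np.abs(A).T)  # symmetrize magnitudes
    return {(i, j) for i in range(n) for j in range(i + 1, n) if S[i, j] > tau}

def is_symmetric(A, rtol=1e-8, atol=1e-10):
    """Test A == A^T up to numerical tolerance; this is the property the
    paper links (one direction only) to an even-symmetric potential."""
    A = np.asarray(A, dtype=float)
    return np.allclose(A, A.T, rtol=rtol, atol=atol)

# Toy 4-node chain: strong nearest-neighbour weights, weak off-chain noise.
A = np.array([
    [0.00, 0.90, 0.05, 0.01],
    [0.90, 0.00, 0.80, 0.04],
    [0.05, 0.80, 0.00, 0.85],
    [0.01, 0.04, 0.85, 0.00],
])
edges = recover_edges(A, tau=0.5)  # chain edges survive the threshold
```

On this toy matrix the recovered edge set is the 1D chain {(0,1), (1,2), (2,3)}, and `is_symmetric(A)` holds; note that symmetry of `A` alone does not certify an even potential without the additional identifiability conditions the model solution imposes.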
Referee report (LaTeX)
\textbf{Recommendation:} major revisions

\textbf{Journal Tier:} specialist/solid

\textbf{Justification:} The paper compellingly demonstrates that an attention-like mechanism can extract interaction structure from trajectories across diverse Hamiltonian systems. However, central claims about the interpretability of the learned attention matrix—especially that a symmetric attention implies an even-symmetric potential—are not grounded in a precise trajectory-to-attention map or supporting proofs. The contribution would be considerably strengthened by formalizing the mapping, stating identifiability and normalization conditions, and proving the key implications that are currently asserted and illustrated only empirically.