2211.08939
Augmented Physics-Informed Neural Networks (APINNs): A gating network-based soft domain decomposition methodology
Zheyuan Hu, Ameya D. Jagtap, George Em Karniadakis, Kenji Kawaguchi
correct (medium confidence)
- Category
- math.DS
- Journal tier
- Specialist/Solid
- Processed
- Sep 28, 2025, 12:56 AM
- arXiv Links
- Abstract ↗ · PDF ↗
Audit review
The paper’s Theorem 5.2 states high-probability train-to-test bounds for APINN with a trainable gating network, featuring n^{-1/4} rates with complexity terms R_i(G) and R_i(E_j ∘ h) and a δ(G, E) union-bound factor. The model’s solution reproduces this via a standard Rademacher/covering-number route: it controls derivatives up to order two, applies product and vector-contraction inequalities, and explains the n^{-1/4} rate via a square-loss symmetrization step. The steps, the assumptions (Assumption 5.2 on L), and the complexity definitions R_i match those in the paper, with only minor presentational differences (e.g., the paper explicitly mentions truncation in the covering arguments).
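One plausible reading of the square-loss symmetrization step, sketched with generic placeholders ($L$, $\hat L_n$, $B$, $\mathfrak{R}_n$, $\mathcal{H}$), not the paper's exact quantities: with a bounded squared residual loss $\ell_f \le B$, symmetrization plus McDiarmid's inequality gives, with probability at least $1-\delta$,
$$ \sup_{f \in \mathcal{H}} \big| L(f) - \hat L_n(f) \big| \;\le\; 2\,\mathfrak{R}_n(\ell \circ \mathcal{H}) + B\sqrt{\tfrac{\log(1/\delta)}{2n}} \;=\; O(n^{-1/2}), $$
and since the solution error scales with the square root of the residual loss, $\sqrt{a+b} \le \sqrt{a} + \sqrt{b}$ yields
$$ \sqrt{L(f)} \;\le\; \sqrt{\hat L_n(f) + O(n^{-1/2})} \;\le\; \sqrt{\hat L_n(f)} + O(n^{-1/4}). $$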
Referee report (LaTeX)
\textbf{Recommendation:} minor revisions

\textbf{Journal Tier:} specialist/solid

\textbf{Justification:} The analysis provides credible, well-structured generalization bounds for APINN with a trainable gating network, integrating derivative-aware complexities with standard learning-theory tools. The main theorems conform to the APINN architecture and leverage the PDE operator assumptions naturally. Minor clarifications would further polish the presentation, especially around the precise squared-loss deviation step underlying the $n^{-1/4}$ rate and the explicit activation smoothness assumptions, but the core results and techniques are correct and useful for the community.
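For orientation, a minimal NumPy sketch of the gating-based soft domain decomposition the title and report refer to: a gating network G produces soft partition-of-unity weights that mix expert heads E_j applied to a shared feature map h. Layer sizes and all names here (`mlp`, `init`, `apinn`) are illustrative assumptions, not the paper's code.

```python
import numpy as np

def mlp(params, x):
    """Tanh MLP; params is a list of (W, b) pairs, last layer linear."""
    for W, b in params[:-1]:
        x = np.tanh(x @ W + b)
    W, b = params[-1]
    return x @ W + b

def init(sizes, rng):
    return [(rng.standard_normal((m, n)) / np.sqrt(m), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

rng = np.random.default_rng(0)
d, M = 2, 3                                      # input dim, number of experts (hypothetical)
gate   = init([d, 16, M], rng)                   # trainable gating network G
shared = init([d, 32, 32], rng)                  # shared body h
heads  = [init([32, 1], rng) for _ in range(M)]  # expert heads E_j

def apinn(x):
    # Softmax makes the gate a soft decomposition: weights are
    # positive and sum to 1 at every point x.
    logits = mlp(gate, x)
    w = np.exp(logits - logits.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    feats = mlp(shared, x)
    experts = np.concatenate([mlp(h, feats) for h in heads], axis=-1)
    return (w * experts).sum(axis=-1, keepdims=True)

x = rng.standard_normal((5, d))
print(apinn(x).shape)                            # (5, 1)
```

This matches the complexity terms in the audit: R_i(G) for the gate and R_i(E_j ∘ h) for each expert composed with the shared body.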