arXiv:2402.09234

Multi-Hierarchical Surrogate Learning for Structural Dynamical Crash Simulations Using Graph Convolutional Neural Networks

Jonas Kneifl, Jörg Fehr, Steven L. Brunton, J. Nathan Kutz

Status
Incomplete (medium confidence)
Category
Not specified
Journal tier
Specialist/Solid
Processed
Sep 28, 2025, 12:56 AM

Audit review

Part (A) is exactly the additive transfer-learning construction stated in the paper: the latent refinement and decoder refinement are given by $z_k = \Psi_{\mathrm{enc},\ell}(x_\ell) + \Psi^*_{\mathrm{enc},k}(x_k)$ and $\breve{x}_k = U_k^\ell\,\breve{x}_\ell + \Psi^*_{\mathrm{dec},k}(z_k)$, with the optional learnable-upsampler variant $\Psi_{\mathrm{dec},k}(z) = \Theta_k^\ell(\Psi_{\mathrm{dec},\ell}(z)) + \Psi^*_{\mathrm{dec},k}(z)$, all defined explicitly in the multi-hierarchical model description (Eqs. (12), (13), and (15) in the paper). Part (B) uses the paper's node-distance error $e_2$ and simply relates it to the per-sample MSE on the original discretization; under the stated hypothesis $E_k \le E_{k+1} - \delta_k$, the monotonicity claim follows immediately. The paper itself reports empirical monotone improvement across levels and argues qualitatively for residual learning, but it presents neither a formal theorem nor the $\delta_k$ hypothesis; hence the model's proof is correct, while the paper's argument is incomplete on this point.
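A minimal sketch in code may make the additive construction concrete. The linear maps, dimensions, and names below (W_enc_l, W_enc_k, U_kl, and so on) are placeholders chosen for illustration; the paper's actual encoders and decoders are graph convolutional networks, not linear operators.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: coarse/fine node counts and latent size.
n_coarse, n_fine, r = 20, 80, 4

# Stand-ins for the trained coarse-level encoder/decoder Psi_enc,l / Psi_dec,l.
W_enc_l = rng.normal(size=(r, n_coarse))
W_dec_l = rng.normal(size=(n_coarse, r))

# Refinement networks Psi*_enc,k / Psi*_dec,k for the finer level k,
# plus a fixed upsampling operator U_k^l (e.g. mesh interpolation).
W_enc_k = rng.normal(size=(r, n_fine)) * 0.1
W_dec_k = rng.normal(size=(n_fine, r)) * 0.1
U_kl = rng.normal(size=(n_fine, n_coarse))

def encode(x_l, x_k):
    # z_k = Psi_enc,l(x_l) + Psi*_enc,k(x_k): coarse code plus additive residual.
    return W_enc_l @ x_l + W_enc_k @ x_k

def decode(z_k):
    # x_k ~= U_k^l Psi_dec,l(z_k) + Psi*_dec,k(z_k): the coarse reconstruction
    # is upsampled to the fine level and refined by an additive correction.
    x_l_hat = W_dec_l @ z_k
    return U_kl @ x_l_hat + W_dec_k @ z_k

x_l = rng.normal(size=n_coarse)   # coarse-level state sample
x_k = rng.normal(size=n_fine)     # fine-level state sample
x_k_hat = decode(encode(x_l, x_k))
print(x_k_hat.shape)  # (80,)
```

The design point the sketch isolates is that the fine-level networks only ever learn residuals on top of the frozen coarse-level model, which is why training transfers across levels instead of restarting from scratch.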

Referee report (LaTeX)

\textbf{Recommendation:} minor revisions

\textbf{Journal Tier:} specialist/solid

\textbf{Justification:}

The method is well-motivated and clearly presented, with compelling empirical evidence that the hierarchy captures global dynamics on coarse levels and transfers residual learning to finer levels. The additive encoder/decoder refinements are rigorously defined and correct. The main shortcoming is the absence of a concise formal statement for the empirically highlighted monotone improvement across levels; adding a simple proposition (under a mild hypothesis, e.g. in the form sketched below) would close this gap. Minor clarifications on error units and averaging would further strengthen the paper's rigor and reproducibility.
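One minimal form such a proposition could take, using the $E_k$ and $\delta_k$ notation from the audit above (the statement and wording are a suggestion, not the paper's):

\textbf{Proposition (sketch).} Let $E_k$ denote the expected per-sample reconstruction error of the level-$k$ surrogate, measured on the original discretization. Suppose that for every level $k$ there exists a margin $\delta_k > 0$ such that
\[
  E_k \le E_{k+1} - \delta_k .
\]
Then $E_k < E_{k+1}$ for every $k$, and chaining the inequalities gives $E_1 < E_2 < \dots < E_K$, so each refinement level strictly improves on its coarser predecessor.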