2501.15066
Discovering Dynamics with Kolmogorov–Arnold Networks: Linear Multistep Method-Based Algorithms and Error Estimation
Jintao Hu, Hongjiong Tian, Qian Guo
correct (high confidence)
- Category
- math.DS
- Journal tier
- Specialist/Solid
- Processed
- Sep 28, 2025, 12:56 AM
- arXiv Links
- Abstract ↗ · PDF ↗
Audit review
The paper’s main result (Theorem 4.1) shows that the minimizer f_NN of the augmented LMM–KAN loss satisfies |f_NN − f|_{2,h} < C κ_2(A_h) (h^p + e_NN(k, G, N)), with an explicit approximation term e_NN(k, G, N) for a two-layer B-spline KAN, and it identifies conditions under which κ_2(A_h) is uniformly bounded, implying convergence as h → 0 and the network resolution increases. The discretized LMM system B_h f⃗ = b⃗, the augmented system A_h f⃗ = [c; b], the augmented loss J_{a,h}(·), and the l2-seminorm |·|_{2,h} are all defined and used to justify this bound via Lemma 4.1 (citing [9]) and Lemma 4.2 (a root condition on ρ_h). The e_NN expression follows from the B-spline KAN upper bound (Theorem 2.1 and a stated corollary), yielding the precise e_NN(k, G, N) in Theorem 4.1.

The candidate’s solution reconstructs the same inequality by (i) stacking the LMM residuals and the auxiliary p-th order finite-difference residuals into a tall least-squares system A_h u_h ≈ [c; b]; (ii) noting that the “truth” residual is O(h^p) entrywise, by the LMM truncation error and the accuracy of the auxiliary FDM; (iii) using minimality of the empirical minimizer to compare against a best-approximation competitor; and (iv) converting residual differences into value errors via σ_min(A_h), which produces the κ_2(A_h) factor and the claimed bound once the KAN uniform approximation error is inserted. This matches the paper’s logic (Lemma 4.1 plus the KAN bound) and its conclusion. Minor gaps in the candidate’s writeup remain: it implicitly assumes scaling properties of ||A_h|| and the uniqueness/conditioning that the paper instead secures by citing Lemma 4.2 and by its precise construction of A_h and J_{a,h}. Nonetheless, the proof strategy and the result coincide with the paper’s theorem.
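The residual-stacking argument in steps (i)–(iv) can be illustrated numerically. The following is a minimal sketch, not the paper's construction: it assumes the trapezoidal rule as the LMM, the known ODE u' = −u as ground truth, and a single second-order one-sided difference as the auxiliary condition c; the names `B`, `aux`, and `f_hat` are illustrative.

```python
import numpy as np

# Sketch assumptions: true ODE u' = -u, trapezoidal LMM, one
# second-order one-sided finite difference as the auxiliary row.
h, N = 0.01, 100
t = h * np.arange(N + 1)
u = np.exp(-t)                       # exact trajectory of u' = -u

# Trapezoidal LMM rows: (h/2)(f_n + f_{n+1}) = u_{n+1} - u_n,
# with unknowns f_n = f(u(t_n)). This is the B_h f = b block.
B = np.zeros((N, N + 1))
for n in range(N):
    B[n, n] = B[n, n + 1] = h / 2.0
b = u[1:] - u[:-1]

# Auxiliary condition pinning down f_0 via a second-order
# forward difference: f_0 ~ (-3 u_0 + 4 u_1 - u_2) / (2h).
aux = np.zeros((1, N + 1))
aux[0, 0] = 1.0
c = np.array([(-3 * u[0] + 4 * u[1] - u[2]) / (2 * h)])

# Augmented system A_h f = [c; b], solved by least squares.
A = np.vstack([aux, B])
rhs = np.concatenate([c, b])
f_hat, *_ = np.linalg.lstsq(A, rhs, rcond=None)

# Value error against the truth f(u) = -u, and the condition
# number kappa_2(A_h) that enters the error bound.
err = np.max(np.abs(f_hat - (-u)))
kappa = np.linalg.cond(A)
```

Since both the LMM and the auxiliary difference are second order, `err` is O(h^2); `kappa` is well above 1 here because the LMM rows scale with h while the auxiliary row does not, which mirrors why the paper must argue separately (via Lemma 4.2 and the construction of A_h) that κ_2(A_h) stays uniformly bounded.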
Referee report (LaTeX)
\textbf{Recommendation:} minor revisions

\textbf{Journal Tier:} specialist/solid

\textbf{Justification:} The manuscript integrates LMM-based discovery with a concrete two-layer B-spline KAN approximation bound to yield a practical composite error estimate and convergence criterion. The main theorem is well-motivated and technically sound, with dependencies clearly cited. Small clarifications would improve self-containment and readability, but the core contribution is correct and useful to specialists working at the interface of scientific machine learning and numerical analysis.