2412.18360

A universal reproducing kernel Hilbert space for learning nonlinear systems operators

Mircea Lazar

Verdict
correct (medium confidence)
Category
Not specified
Journal tier
Specialist/Solid
Processed
Sep 28, 2025, 12:56 AM

Audit review

The paper proves density and completeness for a product RKHS of operators by importing an RBF universal-approximation theorem, then showing that the RBF approximant is an RKHS interpolant and invoking best approximation in the RKHS (see the k⊗ and K⊗ definitions, the interpolant identities, and the statements and proof steps of the main Theorem 2). The candidate solution instead gives a direct RKHS proof via density of kernel sections, projection, and reproducing bounds; it also notes that K⊗ = Ku ⊗ Kx, that the projection coincides with the interpolant when the Gram matrix is invertible, and it constructs a complete orthonormal system by Gram–Schmidt, arriving at the same conclusions through a different route. The paper's proof has a small gap where it moves from data-dependent RBF scales to fixed (data-independent) kernel hyperparameters; this can be patched by standard universality results for fixed-bandwidth radial kernels, but it should be stated or cited explicitly. Overall, both arguments are correct; the proofs are different.
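The product-kernel identity and the projection-equals-interpolant observation mentioned above can be illustrated numerically. The following sketch (my own illustration, not code from the paper; the point names and bandwidths `gamma_u`, `gamma_x` are arbitrary choices) builds K⊗ = Ku · Kx from two fixed-bandwidth Gaussian kernels and checks that, when the Gram matrix is invertible, the minimum-norm RKHS interpolant reproduces the sample data:

```python
import numpy as np

def rbf(a, b, gamma):
    # Gaussian (RBF) kernel with fixed bandwidth: exp(-gamma * ||a - b||^2)
    return np.exp(-gamma * np.sum((a - b) ** 2))

def k_prod(p, q, gamma_u=1.0, gamma_x=1.0):
    # Product (tensor) kernel on pairs (u, x): k_otimes = k_u * k_x
    (u, x), (u2, x2) = p, q
    return rbf(u, u2, gamma_u) * rbf(x, x2, gamma_x)

rng = np.random.default_rng(0)
pts = [(rng.normal(size=2), rng.normal(size=2)) for _ in range(8)]
y = rng.normal(size=8)  # target values at the sample points

# Gram matrix of the product kernel; for distinct points and Gaussian factors
# it is positive definite, hence invertible, so the minimum-norm interpolant
# f(z) = sum_i alpha_i * k_prod(z, pts[i]) with alpha = G^{-1} y exists.
G = np.array([[k_prod(p, q) for q in pts] for p in pts])
alpha = np.linalg.solve(G, y)

def f(z):
    return sum(a * k_prod(z, p) for a, p in zip(alpha, pts))

# The interpolant matches the data up to numerical error,
# consistent with projection = interpolant for an invertible Gram matrix.
resid = max(abs(f(p) - yi) for p, yi in zip(pts, y))
print(resid)
```

This is only a finite-dimensional analogue of the RKHS argument, but it makes the projection/interpolation identity concrete.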

Referee report (LaTeX)

\textbf{Recommendation:} minor revisions

\textbf{Journal Tier:} specialist/solid

\textbf{Justification:}

The paper provides a clear and computationally attractive product-RKHS framework for operator learning, establishing density and completeness under standard radial kernels. The line of proof via RBF operator universality is well motivated and consistent with the final claims, but one reduction (from data-dependent RBF scales to a single fixed-kernel representation) should be justified or cited explicitly. Clarifying the compact-domain quantifiers would also improve precision. With these small improvements, the contribution is solid for a specialist audience.