arXiv:2506.09266

Improved error bounds for Koopman operator and reconstructed trajectories approximations with kernel-based methods

Diego Olguín, Axel Osses, Héctor Ramírez

Status
incomplete (medium confidence)
Category
math.DS
Journal tier
Specialist/Solid
Processed
Sep 28, 2025, 12:56 AM

Audit review

The paper’s main claims (Theorem 6 and Theorem 7) state O(N^{-1/2}) error rates for the Koopman operator approximation and for lifted mean trajectories, under bounded kernels and a coercivity/injectivity constant c1 for C_X. The proofs combine Hilbert-space Hoeffding bounds with a triangle-inequality argument in which the inclusion Ran(C_{XX^+}) ⊆ Ran(C_X) (from UH ⊆ H) plays a key role. Two technical issues, however, weaken the proof as written.

First, in the concentration step (Proposition 6), the paper repeatedly bounds the operator norm of rank-one terms by √∥k∥∞ (e.g., ∥Φ(x) ⊗ Φ(x)∥ ≤ √∥k∥∞, and analogously for the expectations). This is not correct: the exact operator norm is ∥Φ(x)∥² = k(x,x) ≤ ∥k∥∞. The mis-scaling propagates into the tail parameter δ = 2 exp(−Nε²/(8∥k∥∞)) and into constants that later depend on √∥k∥∞ rather than ∥k∥∞ (see the derivation leading to Proposition 6 and its use in Theorem 6).

Second, in the proof of Theorem 6, the chain of range inclusions needed to produce preimages under C_X for empirical quantities is asserted but not justified: “ψ̃_N ∈ H_N = Ran(C_X^N) = Ran(C_{XX^+}^N) ⊆ Ran(C_{XX^+}) ⊆ Ran(C_X).” Neither Ran(C_{XX^+}^N) ⊆ Ran(C_{XX^+}) nor Ran(C_X^N) = Ran(C_{XX^+}^N) is established, and both can fail without additional rank and surjectivity assumptions; nonetheless they are used to define ψ̂ and ψ̂_N with C_X ψ̂ = ψ̃ and C_X ψ̂_N = ψ̃_N, which is crucial to the bound.

There is also a notational mismatch in the lifting-back-operator section: C_XX is defined as E[Φ(X) ⊗ Φ(X)] despite mapping H → H̃. The definition should involve the tilde feature map, E[ϕ̃(X) ⊗ Φ(X)], so that B = C_XX C_X^{-1} is a left inverse of Φ, consistent with the subsequent use of the Kernel Bayes’ Rule.

The candidate model solution reaches the same O(N^{-1/2}) rates via resolvent identities and Hilbert-space concentration, but it contains a critical error: it treats C_X^N as an invertible operator on all of H and deduces a bound of the form ∥(C_X^N)^{-1}∥ ≤ 2/c1 on the event ∥C_X^N − C_X∥ ≤ c1/2. Since C_X^N has finite rank, it cannot be invertible on H; this step (and the subsequent resolvent-based argument) is invalid without restricting to H_N (and assuming the Gram matrix is invertible there). Hence, while both the paper’s and the model’s conclusions are plausible and align with known O(N^{-1/2}) behavior, each proof as written has gaps that need repair.
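Both operator-theoretic failure points are easy to check numerically in finite dimensions. A minimal NumPy sketch (using an explicit vector as a stand-in for Φ(x), an assumption purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 6

# (a) Rank-one operator norm: for Phi(x) in R^d, the outer product
# Phi(x) Phi(x)^T has spectral norm ||Phi(x)||^2 = k(x, x),
# not sqrt(k(x, x)) as the paper's Proposition 6 uses.
phi = rng.normal(size=d)          # stand-in for Phi(x)
rank_one = np.outer(phi, phi)
op_norm = np.linalg.norm(rank_one, 2)   # spectral (operator) norm
k_xx = phi @ phi                        # k(x, x) = <Phi(x), Phi(x)>
assert np.isclose(op_norm, k_xx)        # equals k(x,x), not sqrt(k(x,x))

# (b) Finite-rank empirical covariance: with N < d samples,
# C_X^N = (1/N) sum_i Phi(x_i) Phi(x_i)^T has rank at most N,
# so it is singular on the full space; an inverse exists only
# on its range (the span of the sampled features).
N = 3
samples = rng.normal(size=(N, d))
C_N = samples.T @ samples / N
eigvals = np.linalg.eigvalsh(C_N)
assert np.sum(eigvals > 1e-10) <= N     # rank <= N < d: not invertible on R^d
```

Part (a) shows why the √∥k∥∞ bound under-counts by a square root; part (b) shows why any bound on ∥(C_X^N)^{-1}∥ must be interpreted on H_N rather than on all of H.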

Referee report (LaTeX)

\textbf{Recommendation:} major revisions

\textbf{Journal Tier:} specialist/solid

\textbf{Justification:}

The manuscript presents an $O(N^{-1/2})$ bound for kernel-EDMD and a trajectory-level result with a lifting-back operator. While the structure is promising and the results align with expectations in the literature, several technical errors must be corrected: the operator norm of rank-one terms is underestimated; the range equalities/inclusions used to construct preimages under $C_X$ are not justified; and the lifting-back covariance must use the Euclidean feature map to fix domain--codomain consistency. These issues appear reparable and would likely leave the advertised rates intact, but they require substantial revision for correctness and clarity.
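For the authors' convenience, the corrected rank-one bound, for the canonical feature map $\Phi$ with $\langle \Phi(x), \Phi(x)\rangle_{\mathcal{H}} = k(x,x)$, reads
\[
  \|\Phi(x)\otimes\Phi(x)\|_{\mathrm{op}}
  \;=\; \sup_{\|f\|_{\mathcal{H}}=1}\bigl\|\langle f,\Phi(x)\rangle_{\mathcal{H}}\,\Phi(x)\bigr\|_{\mathcal{H}}
  \;=\; \|\Phi(x)\|_{\mathcal{H}}^{2}
  \;=\; k(x,x)
  \;\le\; \|k\|_{\infty},
\]
so the Hoeffding tail parameter should be restated in terms of $\|k\|_{\infty}$ rather than $\sqrt{\|k\|_{\infty}}$, with the downstream constants adjusted accordingly.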