2409.13654
A Novel Neural Filter to Improve Accuracy of Neural Network Models of Dynamic Systems
Parham Oveissi, Turibius Rozario, Ankit Goel
incomplete (medium confidence)
- Category
- math.DS
- Journal tier
- Note/Short/Other
- Processed
- Sep 28, 2025, 12:56 AM
- arXiv Links
- Abstract ↗ · PDF ↗
Audit review
The paper introduces an EKF-style “neural filter” and empirically claims that its covariance remains bounded while the open-loop neural network prediction diverges, but it provides no formal conditions or proofs for invertibility of the innovation covariance, monotonic covariance reduction, or boundedness; it presents only the recursion and experiments (see Eqs. (6)–(11), the accompanying remarks, and the concluding claims). The candidate solution supplies the missing hypotheses and standard linear-Gaussian proofs: the innovation covariance S = C P C^T + R ≻ 0 whenever R ≻ 0; the posterior covariance satisfies P^+ = P − P C^T S^{-1} C P ⪯ P; the filtered covariance is bounded and convergent under detectability/stabilizability; and the open-loop covariance diverges when ρ(A) ≥ 1 and Q ≻ 0. These are textbook results.
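A minimal numerical sketch of the three cited facts, not taken from the paper: it iterates the Riccati recursion and the open-loop Lyapunov recursion side by side for a hypothetical detectable pair (A, C) with ρ(A) ≥ 1 and Q, R ≻ 0 (all matrices below are illustrative assumptions), checking S ≻ 0 and P⁺ ⪯ P at every step.

```python
# Illustrative check (assumed matrices, not from the paper) of the
# linear-Gaussian facts cited above. Requires only NumPy.
import numpy as np

A = np.array([[1.2, 1.0], [0.0, 1.1]])  # rho(A) = 1.2 >= 1 (unstable)
C = np.array([[1.0, 0.0]])              # (A, C) observable, hence detectable
Q = 0.1 * np.eye(2)                     # Q > 0, so (A, Q^{1/2}) stabilizable
R = np.array([[0.5]])                   # R > 0  =>  S = C P C^T + R > 0

P_filt = np.eye(2)  # filtered covariance (Riccati recursion)
P_open = np.eye(2)  # open-loop covariance: P <- A P A^T + Q

for _ in range(200):
    # Prediction step.
    P_pred = A @ P_filt @ A.T + Q
    # Innovation covariance: positive definite whenever R > 0.
    S = C @ P_pred @ C.T + R
    K = P_pred @ C.T @ np.linalg.inv(S)
    # Measurement update: P+ = P - P C^T S^{-1} C P, so P+ <= P.
    P_plus = P_pred - K @ C @ P_pred
    assert np.all(np.linalg.eigvalsh(P_pred - P_plus) >= -1e-9)  # P+ <= P
    P_filt = P_plus
    # Open-loop propagation, no measurement correction.
    P_open = A @ P_open @ A.T + Q

print("filtered trace :", np.trace(P_filt))   # stays bounded (detectability)
print("open-loop trace:", np.trace(P_open))   # blows up since rho(A) >= 1
```

Running this prints a small, converged filtered trace and an astronomically large open-loop trace, matching the bounded-versus-divergent behavior the audit describes.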
Referee report (LaTeX)
\textbf{Recommendation:} major revisions

\textbf{Journal Tier:} note/short/other

\textbf{Justification:} The manuscript effectively demonstrates an EKF-style correction around a learned state model and shows compelling empirical improvements across several nonlinear systems. However, it lacks essential theoretical assumptions and guarantees. In particular, conditions ensuring innovation-covariance invertibility, posterior-covariance monotonicity, and boundedness (or convergence) are neither stated nor proved; the claims are supported only numerically. Stating the standard linear-Gaussian theorems and clarifying the underlying assumptions, along the lines sketched below, would substantially strengthen the paper's correctness and impact.
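A possible LaTeX sketch of the lemma the report asks the authors to add; this is a suggested statement of the standard textbook results, not text from the manuscript, and it assumes an amsthm-style lemma environment is defined.

```latex
% Suggested statement (not from the manuscript); assumes \newtheorem{lemma}{Lemma}.
\begin{lemma}
Let $R \succ 0$. Then the innovation covariance
$S = C P C^{\mathsf{T}} + R \succ 0$ is invertible, and the posterior
covariance satisfies
$P^{+} = P - P C^{\mathsf{T}} S^{-1} C P \preceq P$.
If, in addition, $(A, C)$ is detectable and $(A, Q^{1/2})$ is stabilizable,
the Riccati recursion $P \mapsto A P^{+} A^{\mathsf{T}} + Q$ remains bounded
and converges to the unique stabilizing solution, whereas if
$\rho(A) \ge 1$ and $Q \succ 0$, the open-loop recursion
$P \mapsto A P A^{\mathsf{T}} + Q$ is unbounded.
\end{lemma}
```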