2206.02986
On a framework of data assimilation for neuronal networks
Wenyong Zhang, Boyu Chen, Jianfeng Feng, Wenlian Lu
incomplete · medium confidence
- Category: Not specified
- Journal tier: Specialist/Solid
- Processed: Sep 28, 2025, 12:56 AM
- arXiv Links: Abstract ↗ · PDF ↗
Audit review
Both results the paper claims, the consistency of the hyperparameter MLE under assumptions R1–R4 and the Hellinger–Lipschitz well-posedness of the posterior, are plausible and standard, but the paper's proofs omit critical steps.

In Theorem 1, the argument mixes limits in n and m, invokes the weak law of large numbers with a mean that vanishes in m without a diagonal or iterated limit, and applies Slutsky's theorem to a ratio whose denominator still depends on the unknown estimator; no localization or strong-convexity argument is given to ensure the optimizer lies near the truth.

In Theorem 2, the proof drops model terms, assumes bounded densities, and never establishes a uniform lower bound on the evidence Z(y), which the Hellinger bound requires.

The candidate solution fills these gaps with standard, correct arguments: positivity of the Hessian near the truth, gradient control via the Hellinger distance, an explicit O_p(n^{-1/2}) + O(ε(m)) rate, and a classical lower bound on Z(y) via Markov's inequality under an integrability assumption.
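For reference, a minimal sketch of the classical evidence lower bound mentioned above, written in generic Bayesian inverse-problem notation; the potential $\Phi$, prior $\mu_0$, and evidence $Z(y)$ are placeholder symbols and not necessarily the paper's. Assuming $\Phi(\cdot;y)\ge 0$ is $\mu_0$-integrable,
\[
  Z(y) \;=\; \int e^{-\Phi(u;y)}\,\mu_0(du)
  \;\ge\; e^{-R}\,\mu_0\bigl(\{u:\Phi(u;y)\le R\}\bigr)
  \;\ge\; e^{-R}\Bigl(1 - \tfrac{1}{R}\,\mathbb{E}_{\mu_0}\bigl[\Phi(\cdot;y)\bigr]\Bigr)
\]
for any $R>0$, by Markov's inequality applied to $\Phi$. Choosing any $R \ge 2\,\mathbb{E}_{\mu_0}[\Phi(\cdot;y)]$ with $R>0$ gives $Z(y)\ge \tfrac12 e^{-R}>0$, and this bound is uniform in $y$ whenever $\mathbb{E}_{\mu_0}[\Phi(\cdot;y)]$ stays bounded over the data set of interest, which is exactly the integrability assumption referenced above.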
Referee report (LaTeX)
\textbf{Recommendation:} major revisions

\textbf{Journal Tier:} specialist/solid

\textbf{Justification:} The paper states two natural results of practical importance in hierarchical data assimilation, but both proofs are currently incomplete. The consistency proof needs a localization/strong-convexity argument and a careful treatment of the double limit in $(n,m)$, while the Hellinger well-posedness proof must establish a uniform lower bound on the evidence and avoid unjustified likelihood simplifications. These fixes are standard and within reach. With these corrections and a clearer presentation, the work could be suitable for a specialist venue.
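As an illustration of the localization/strong-convexity step the report asks for, here is a generic M-estimation sketch in placeholder notation: $L_{n,m}$ stands for a normalized negative log-likelihood, $\theta^*$ for the true hyperparameter, and $\lambda$, $r$, $\varepsilon(m)$ for illustrative constants, none of them taken from the paper. Suppose that, with probability tending to one, $\nabla^2 L_{n,m}(\theta)\succeq \lambda I$ on a ball $B(\theta^*,r)$ and the minimizer $\hat\theta_{n,m}$ lies in its interior (which uniform convergence of $L_{n,m}$ to a limit with a unique minimizer at $\theta^*$ supplies). Then $\nabla L_{n,m}(\hat\theta_{n,m})=0$ and strong convexity on the ball give
\[
  \lambda\,\bigl\|\hat\theta_{n,m}-\theta^*\bigr\|^2
  \;\le\; \bigl\langle \nabla L_{n,m}(\theta^*)-\nabla L_{n,m}(\hat\theta_{n,m}),\,\theta^*-\hat\theta_{n,m}\bigr\rangle
  \;\le\; \bigl\|\nabla L_{n,m}(\theta^*)\bigr\|\,\bigl\|\hat\theta_{n,m}-\theta^*\bigr\|,
\]
hence
\[
  \bigl\|\hat\theta_{n,m}-\theta^*\bigr\| \;\le\; \lambda^{-1}\,\bigl\|\nabla L_{n,m}(\theta^*)\bigr\|
  \;=\; O_p\bigl(n^{-1/2}\bigr) + O\bigl(\varepsilon(m)\bigr),
\]
where the final equality is the gradient bound the WLLN/Slutsky step is meant to deliver. The two limits are then decoupled: letting $n\to\infty$ at fixed $m$ produces the $O_p(n^{-1/2})$ term, and the residual $m$-dependence enters only through the explicit $O(\varepsilon(m))$ bias.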