2208.03889
Robust Training and Verification of Implicit Neural Networks: A Non-Euclidean Contractive Approach
Saber Jafarpour, Alexander Davydov, Matthew Abate, Francesco Bullo, Samuel Coogan
correct (medium confidence)
- Category: Not specified
- Journal tier: Specialist/Solid
- Processed: Sep 28, 2025, 12:56 AM
- arXiv Links: Abstract ↗ · PDF ↗
Audit review
The paper’s Theorem 6.3 states four claims about an embedded implicit network, contraction of α-averaged iterations, and interval output bounds, under the assumptions that Φ is monotone and 1-Lipschitz and that there exists η > 0 with μ∞,[η]⁻¹(W) < 1; it also specifies the admissible range for α and refers to other work for the proof. The candidate solution reconstructs a detailed contraction-based proof using the weighted ℓ∞ matrix measure, the Metzler/non-Metzler splitting of W, and the mixed-monotone embedding, and derives precisely items (1)–(4) of Theorem 6.3, including the role of α and the output bounds via [C]⁺ and [C]⁻. This matches the paper’s statements and proof approach, which is explicitly grounded in non-Euclidean contraction theory and mixed-monotone embeddings (see the theorem statement and surrounding discussion). The paper’s result and the candidate solution therefore agree in content and technique; the paper omits the full proof from this PDF and cites prior work, while the candidate gives a complete proof in the same spirit.
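For reference, a minimal LaTeX sketch of the objects the audit refers to, assuming the standard implicit-network notation z = Φ(Wz + Ux + b), y = Cz + c; the symbol α_max stands in for the admissible range that Theorem 6.3 specifies, and [η] denotes diag(η). This is a sketch of the setup, not a quotation of the theorem.

```latex
% Implicit network, its alpha-averaged iteration, and the placeholder
% admissible range (the paper's Theorem 6.3 gives the exact bound on alpha):
\[
  z^\ast = \Phi\bigl(W z^\ast + U x + b\bigr), \qquad y = C z^\ast + c,
\]
\[
  z^{k+1} = (1-\alpha)\, z^{k} + \alpha\, \Phi\bigl(W z^{k} + U x + b\bigr),
  \qquad \alpha \in (0, \alpha_{\max}].
\]
% Contraction of the averaged iteration is claimed for admissible alpha when
% Phi is monotone and 1-Lipschitz and the weighted matrix-measure condition holds:
\[
  \exists\, \eta \in \mathbb{R}^n_{>0} : \quad \mu_{\infty,[\eta]^{-1}}(W) < 1 .
\]
% Given fixed-point bounds from the embedded (mixed-monotone) network, the
% interval output bound uses the entrywise split C = [C]^+ + [C]^-:
\[
  [C]^+ \underline{z} + [C]^- \overline{z} + c \;\le\; y \;\le\;
  [C]^+ \overline{z} + [C]^- \underline{z} + c ,
  \qquad \underline{z} \le z^\ast \le \overline{z}.
\]
```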
Referee report (LaTeX)
\textbf{Recommendation:} minor revisions \\
\textbf{Journal Tier:} specialist/solid \\
\textbf{Justification:} The contribution clarifies robustness and verification for implicit neural networks via non-Euclidean contraction, providing tractable bounds on Lipschitz constants and inclusion functions and connecting them to an embedded mixed-monotone system. The theoretical conditions are standard yet meaningful, and the results are put to practical use in training and verification pipelines. The current PDF defers several proofs to prior work; adding brief proof sketches (especially for Theorem 6.3) would improve self-containment.
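Illustrative sketch (Python)

The inclusion functions and verification step mentioned above can be made concrete with a small numerical sketch. The code below is an illustration written for this review, not the authors' implementation: the α-averaged solve, the Metzler/non-Metzler split, and the [C]⁺/[C]⁻ output bounds follow the pattern described in the audit, while the dimensions, the weight rescaling that enforces μ∞(W) < 1, and the choice α = 0.5 are placeholder assumptions rather than values from the paper.

```python
# Illustrative sketch only (not the authors' code): an alpha-averaged
# fixed-point solve for an implicit network z = Phi(W z + U x + b),
# y = C z + c, and a mixed-monotone-style embedded iteration that maps an
# input interval [x_lo, x_hi] to an output interval via [C]^+ / [C]^-.
import numpy as np

rng = np.random.default_rng(0)
n, p, q = 6, 3, 2                          # state, input, output dimensions
W = rng.normal(size=(n, n))
W *= 0.9 / np.abs(W).sum(axis=1).max()     # force ||W||_inf < 1, so mu_inf(W) < 1
U = rng.normal(size=(n, p))
C = rng.normal(size=(q, n))
b = rng.normal(size=n)
c = rng.normal(size=q)
phi = lambda v: np.maximum(v, 0.0)         # ReLU: monotone and 1-Lipschitz

def forward(x, alpha=0.5, iters=300):
    """alpha-averaged iteration z <- (1 - alpha) z + alpha Phi(W z + U x + b)."""
    z = np.zeros(n)
    for _ in range(iters):
        z = (1 - alpha) * z + alpha * phi(W @ z + U @ x + b)
    return C @ z + c

# Metzler / non-Metzler split of W: keep the diagonal and the nonnegative
# off-diagonal entries in W_mzl; the negative off-diagonal entries go to W_rem.
W_mzl = np.where(np.eye(n, dtype=bool), W, np.maximum(W, 0.0))
W_rem = W - W_mzl
Up, Um = np.maximum(U, 0.0), np.minimum(U, 0.0)   # [U]^+ and [U]^-
Cp, Cm = np.maximum(C, 0.0), np.minimum(C, 0.0)   # [C]^+ and [C]^-

def embedded_forward(x_lo, x_hi, alpha=0.5, iters=300):
    """Coupled lower/upper iteration; a sketch of the embedded network."""
    z_lo, z_hi = np.zeros(n), np.zeros(n)
    for _ in range(iters):
        lo = phi(W_mzl @ z_lo + W_rem @ z_hi + Up @ x_lo + Um @ x_hi + b)
        hi = phi(W_mzl @ z_hi + W_rem @ z_lo + Up @ x_hi + Um @ x_lo + b)
        z_lo = (1 - alpha) * z_lo + alpha * lo
        z_hi = (1 - alpha) * z_hi + alpha * hi
    # Interval output bounds via the entrywise split C = [C]^+ + [C]^-.
    return Cp @ z_lo + Cm @ z_hi + c, Cp @ z_hi + Cm @ z_lo + c

x = rng.normal(size=p)
y_lo, y_hi = embedded_forward(x - 0.1, x + 0.1)
y = forward(x)
print(bool(np.all(y_lo <= y) and np.all(y <= y_hi)))   # expect True
```

For this seeded instance the nominal output lies inside the computed interval (the script prints True); the paper's weighted η and its exact admissible α range would tighten and formally certify such bounds.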