2206.13093

Learning Deep Input-Output Stable Dynamics

Ryosuke Kojima, Yuji Okamoto

wrong (medium confidence)
Category
math.DS
Journal tier
Specialist/Solid
Processed
Sep 28, 2025, 12:56 AM

Audit review

The paper’s Theorem 1 states and solves a QCQP whose objective weights the h-term by k/||∇V||^2, yet the Appendix A proof optimizes a different cost in which that weight is halved to k/(2||∇V||^2). The halved weight is what allows a single common scaling y to be applied both to the PV-part of G and to h, which yields the closed form reported in Theorem 1. Under the stated (unhalved) objective, that common scaling is generally suboptimal: the KKT conditions require different scalings for G and h. As stated, Theorem 1 is therefore incorrect, while the model’s KKT-based solution identifies the true optimizer and clarifies when the paper’s formula becomes optimal, namely precisely when the h-term is halved, or in degenerate cases. Compare the statement of (5) and Theorem 1, which use the unhalved objective and its claimed solution, with Appendix A, where the 1/2 appears both in the h-term cost and in the block A used in the QCQP reduction, confirming that the proof optimizes the halved cost.
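
To make the mismatch concrete, a schematic abstraction may help; the symbols u, v, w, a, b, c, λ below are placeholders, and the two-block QCQP is a toy stand-in for the projection in (5), not the paper’s exact formulation. The stated objective weights the h-term by k/||∇V||^2, whereas the Appendix A cost uses k/(2||∇V||^2); abstracting either case to
\[
\min_{u,v}\;(u-u_0)^2 + w\,(v-v_0)^2 \quad\text{s.t.}\quad a\,u^2 + b\,v^2 \le c,
\]
with u standing for the PV-part of G, v for h, and w for the h-term weight, the KKT conditions give
\[
u=\frac{u_0}{1+\lambda a},\qquad v=\frac{v_0}{1+\lambda b/w},
\]
so a single common scaling y of both blocks is optimal only if λ = 0 (the degenerate, already-feasible case) or w = b/a. If, as the audit reads the paper, the HJ constraint’s quadratic weights satisfy that matching condition exactly for the halved weight k/(2||∇V||^2), then the common-scaling closed form of Theorem 1 is the optimizer of the Appendix A cost but generally not of the objective stated in (5).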

Referee report (LaTeX)

\textbf{Recommendation:} major revisions

\textbf{Journal Tier:} specialist/solid

\textbf{Justification:}

The core idea, an analytic and differentiable projection onto the HJ-feasible set for learning, is valuable. However, the main theorem as written optimizes an objective different from the one actually used in the proof: the stated objective lacks the factor 1/2 on the h-term that the Appendix A cost carries. This discrepancy undermines the claimed optimality of the provided closed form. With the coefficient corrected (halved), the result appears correct and aligns with the Appendix derivation; otherwise, the model’s KKT solution shows that G and h require different scalings. The paper should be revised to resolve this inconsistency and to state clearly when a common scaling is optimal.
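
As a minimal numerical sanity check of the toy abstraction sketched in the audit review above (all values, weights, and function names are illustrative assumptions, not taken from the paper), the following compares the KKT optimum of the toy QCQP against the best common-scaling point; the constraint weights are chosen so that the matching condition w = b/a holds for the "halved" objective weight, mirroring the claimed behavior.

# Toy numerical check: for the schematic QCQP
#   min (u-u0)^2 + w*(v-v0)^2   s.t.  a*u^2 + b*v^2 <= c,
# a common scaling of (u0, v0) is optimal only when w = b/a (or the constraint is inactive).
# All numbers below are illustrative placeholders, not from the paper.
import numpy as np
from scipy.optimize import minimize

u0, v0 = 2.0, 1.5          # unconstrained targets (placeholder)
a, b, c = 1.0, 0.5, 1.0    # constraint weights chosen so b/a = 0.5 (the "halved" weight matches)

def kkt_optimum(w):
    """Solve the toy QCQP numerically (SLSQP) for objective weight w on the v-term."""
    obj = lambda z: (z[0] - u0) ** 2 + w * (z[1] - v0) ** 2
    con = {"type": "ineq", "fun": lambda z: c - a * z[0] ** 2 - b * z[1] ** 2}
    res = minimize(obj, x0=[0.0, 0.0], method="SLSQP", constraints=[con])
    return res.x, res.fun

def best_common_scaling(w):
    """Best feasible point of the restricted form (s*u0, s*v0), i.e. one shared scaling."""
    s = min(1.0, np.sqrt(c / (a * u0 ** 2 + b * v0 ** 2)))
    z = np.array([s * u0, s * v0])
    return z, (z[0] - u0) ** 2 + w * (z[1] - v0) ** 2

for w, label in [(1.0, "unhalved"), (0.5, "halved")]:
    (z_opt, f_opt), (z_com, f_com) = kkt_optimum(w), best_common_scaling(w)
    print(f"{label:8s} w={w}: KKT cost {f_opt:.4f} vs common-scaling cost {f_com:.4f}")
# Expected: the two costs agree for the "halved" weight and differ for the "unhalved" one.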