2210.13235

Chaos Theory and Adversarial Robustness

Jonathan S. Kent

incomplete
medium confidence
Category
Not specified
Journal tier
Note/Short/Other
Processed
Sep 28, 2025, 12:56 AM

Audit review

The paper asserts that a classifier’s certified radius can be approximated as the output-space distance to the nearest decision boundary divided by the model’s Adversarial Susceptibility Ψ̂, but offers only an example, with no proof or explicit conditions. It also employs a nonstandard “modified Euclidean” norm and a dataset-averaged Ψ̂, rather than a per-input Lipschitz bound, and gives no rigorous link between these quantities and certification (see the definition of the modified norm and Ψ̂, and the statement “It is simply the distance to the nearest decision boundary, divided by the Adversarial Susceptibility” in Section 6.1). The model’s solution supplies a local-Lipschitz, linearization-based argument and recovers the same formula up to constants, but it depends on additional unverified assumptions (e.g., S(x) ≈ Ψ̂, bi-Lipschitz behavior, a nondegenerate Jacobian). Thus, while the two align on the formula and on the norm choice used in the paper, neither provides a complete, generally valid certification result without extra hypotheses: the paper is heuristic, and the model’s argument is conditional. The paper’s broader positioning, that it can “quickly and easily approximate the certified robustness radii,” is likewise stated without proof or error bounds.
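To make the gap concrete, here is a minimal sketch (not the paper’s implementation) for a toy linear classifier, where the exact robust radius has a closed form. The heuristic d(x)/Ψ̂ is computed with the Jacobian operator norm standing in for Ψ̂; the paper instead uses a dataset-averaged susceptibility under its modified Euclidean norm, so every name below is illustrative.

```python
import numpy as np

# Toy linear "network": logits = W @ x.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))  # 4 classes, 3 input features
x = rng.normal(size=3)

logits = W @ x
top = int(np.argmax(logits))

# d(x): output-space margin between the top logit and the runner-up.
margin = logits[top] - np.max(np.delete(logits, top))

# Stand-in for Psi_hat: the Jacobian's operator norm (largest singular
# value of W). This is an assumption for illustration, not the paper's
# dataset-averaged definition.
psi_hat = np.linalg.norm(W, 2)

heuristic_radius = margin / psi_hat

# Exact L2 robust radius of a linear classifier: minimize, over rival
# classes j, the logit gap divided by ||w_top - w_j||.
gaps = logits[top] - logits
diffs = np.linalg.norm(W[top] - W, axis=1)
exact_radius = min(gaps[j] / diffs[j]
                   for j in range(len(logits)) if j != top)

print(f"heuristic d(x)/Psi_hat = {heuristic_radius:.4f}, "
      f"exact = {exact_radius:.4f}")
```

Even in this linear case the heuristic is only correct up to a constant: since each row-difference norm is at most √2 times the operator norm, one can show exact_radius ≥ heuristic_radius/√2, but the two need not coincide, which mirrors the review’s point that the formula matches only “up to constants” absent further hypotheses.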

Referee report (LaTeX)

\textbf{Recommendation:} major revisions

\textbf{Journal Tier:} note/short/other

\textbf{Justification:}

The susceptibility metric and its empirical stability are intriguing, and the proposed shortcut for approximate certification is practically appealing. However, the central relationship between certified radius and d(x)/Ψ̂ is asserted rather than derived; rigorous conditions and error bounds are absent. To be publishable in a strong venue, the work should present precise assumptions and a theorem (or bounded-error guarantee) linking Ψ̂ to a local Lipschitz constant and showing when the approximation is tight, plus empirical calibration of approximation errors.
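As a sketch of the kind of statement the revision could target (the hypotheses below are the referee’s, for illustration; the paper proves no such result):

```latex
% Illustrative target theorem; hypotheses are the referee's, not the paper's.
\begin{theorem}[sketch]
Let $f$ be the logit map and suppose
$\|f(x)-f(x')\| \le \hat{\Psi}\,\|x-x'\|$ for all $x'$ with
$\|x-x'\| \le r$, i.e.\ $\hat{\Psi}$ upper-bounds the local Lipschitz
constant in the paper's modified Euclidean norm. If $d(x)$ denotes the
output-space distance from $f(x)$ to the nearest decision boundary, then
the predicted class is constant on the input ball of radius
\[
  r(x) \;\ge\; \min\!\bigl(r,\; d(x)/\hat{\Psi}\bigr).
\]
\end{theorem}
```

A result of this shape, together with an empirical bound on how far the dataset-averaged Ψ̂ deviates from the per-input local Lipschitz constant, would turn the paper’s heuristic into a certifiable (if conservative) guarantee.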