2407.20597

Joint Diffusion Processes as an Inductive Bias in Sheaf Neural Networks

Ferran Hernandez Caralt, Guillermo Bernárdez Gil, Iulia Duta, Pietro Liò, Eduard Alarcón Cot

Verdict: correct (medium confidence)
Category
Not specified
Journal tier
Specialist/Solid
Processed
Sep 28, 2025, 12:56 AM

Audit review

The paper introduces RiSNN with the update F_{u⊳e}^{(t+1)} = MLP(x_e x_u^T), states Proposition 3.5 asserting feature-wise rotation invariance under a global orthogonal rotation X ↦ XQ, and proves it by induction (Appendix B). The candidate solution proves the same claim directly by showing that S'_eu = x'_e (x'_u)^T equals S_eu, because x'_e = x_e Q and x'_u = x_u Q imply x'_e (x'_u)^T = x_e Q Q^T x_u^T = x_e x_u^T; hence the MLP's input is unchanged. The paper's statement and intent are correct (Proposition 3.5; proof sketch in Appendix B), and the candidate provides a cleaner, one-step argument; the only omission in the candidate is the explicit induction across layers (handled in the paper by taking F^{(0)} = Id). Thus, both are correct and essentially consistent, with different proof presentations. See Proposition 3.5 and its appendix proof for the paper's version.
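The one-step invariance argument can be checked numerically. The following sketch (a hypothetical NumPy illustration, not code from the paper; the names `x_e`, `x_u`, and the feature dimension are assumptions) verifies that the Gram-type product x_e x_u^T, the MLP's input, is unchanged when every feature row is rotated by the same orthogonal matrix Q:

```python
import numpy as np

rng = np.random.default_rng(0)
k, d = 3, 5  # assumed: k feature rows per stalk, feature dimension d

# Hypothetical edge and node feature matrices (rows are feature vectors).
x_e = rng.standard_normal((k, d))
x_u = rng.standard_normal((k, d))

# Random orthogonal Q via QR decomposition, so Q @ Q.T = I.
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))

# MLP input before and after the global rotation X -> XQ.
S = x_e @ x_u.T                     # S_eu = x_e x_u^T
S_rot = (x_e @ Q) @ (x_u @ Q).T     # = x_e Q Q^T x_u^T = x_e x_u^T

# Invariance: the MLP sees the same input, so its output is unchanged.
assert np.allclose(S, S_rot)
```

Because the rotation cancels inside the product, no property of the MLP itself is needed; the induction in Appendix B then only propagates this fact across layers.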

Referee report (LaTeX)

\textbf{Recommendation:} minor revisions

\textbf{Journal Tier:} specialist/solid

\textbf{Justification:}

The rotation-invariance claim is sound and practically relevant, and the construction is elegant. Minor clarifications about shapes and the action of the rotation would make the proof fully transparent. The empirical sections suggest usefulness without overclaiming. With small edits to the derivation and notational conventions, the work is ready for solid specialist venues.