2508.10765
Memorisation and forgetting in a learning Hopfield neural network: bifurcation mechanisms, attractors and basins
Adam E. Essex, Natalia B. Janson, Rachel A. Norris, Alexander G. Balanov
correct · medium confidence
- Category
- Not specified
- Journal tier
- Strong Field
- Processed
- Sep 28, 2025, 12:57 AM
- arXiv Links
- Abstract · PDF
Audit review
The paper studies a continuous-time Hopfield network with Hebbian learning, using the arctan activation F(x) = (2/π) arctan(λπx/2) and parameters N = 81, g = 0.3, A = 30, B = 300, λ = 1.4, exactly as stated in its model. It reports a pitchfork bifurcation at t = 6.5, followed by pairs of saddle-node bifurcations (e.g., at t_2 = 22.8783), and provides cross-sectional evidence that basin boundaries coincide with the stable manifolds of index-1 saddles. It also argues that catastrophic forgetting arises when the weight trajectory crosses the same bifurcation manifolds in the opposite direction.

The candidate solution gives a complementary rigorous sketch: it establishes a strict Lyapunov function for the retrieval system, derives a supercritical pitchfork via center-manifold reduction at the leading eigenvalue crossing, describes saddle-node creation on codimension-one fold manifolds, identifies basin boundaries as stable manifolds of useful saddles, and interprets catastrophic forgetting as reverse fold crossings. It even produces an explicit formula yielding t_p ≈ 6.45, consistent with the paper's t ≈ 6.5, under a simplifying symmetry assumption. The proofs differ in style: the paper is primarily computational/constructive with careful visualizations, while the model provides standard dynamical-systems derivations under generic hypotheses. No substantive contradictions were found; the model's extra assumptions (e.g., equal off-diagonal weights for the closed-form t_p) should be noted as special-case scaffolding rather than claims about the paper's exact schedule.
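As a quick plausibility check of the pitchfork mechanism, the leading-eigenvalue condition can be sketched numerically. The snippet below is a minimal illustration, not the paper's code: it assumes the common continuous-time Hopfield form ẋ = −x + g·W·F(x) (so that F′(0) = λ and the origin destabilises when g·λ·μ_max(W) crosses 1), and substitutes a hypothetical linear-in-t Hebbian outer-product schedule for the paper's actual A, B learning dynamics.

```python
import numpy as np

# Parameters as reported in the paper's model description.
N, g, lam = 81, 0.3, 1.4

def F(x):
    """Arctan activation F(x) = (2/pi) arctan(lam*pi*x/2); note F'(0) = lam."""
    return (2.0 / np.pi) * np.arctan(lam * np.pi * x / 2.0)

def rhs(x, W):
    """Assumed retrieval dynamics dx/dt = -x + g * W @ F(x) (generic Hopfield form)."""
    return -x + g * W @ F(x)

def origin_unstable(W):
    """Linearisation at x = 0 gives Jacobian J = -I + g*lam*W; for symmetric W
    the origin loses stability (pitchfork) once g*lam*mu_max(W) exceeds 1."""
    mu_max = np.max(np.linalg.eigvalsh(W))  # W assumed symmetric
    return g * lam * mu_max > 1.0

# Hypothetical one-parameter path W(t): Hebbian growth toward one stored
# pattern xi, standing in for the paper's actual learning schedule.
rng = np.random.default_rng(0)
xi = rng.choice([-1.0, 1.0], size=N)
hebb = np.outer(xi, xi) / N
np.fill_diagonal(hebb, 0.0)

for t in np.linspace(0.0, 40.0, 401):
    W = t * hebb                      # toy linear-in-t schedule (assumption)
    if origin_unstable(W):
        print(f"origin destabilises near t = {t:.2f}")  # pitchfork onset
        break

# Past the pitchfork, two mirror-image attractors exist; a crude Euler
# integration from a small random state drifts toward one of them.
x = 0.01 * rng.standard_normal(N)
for _ in range(2000):
    x += 0.05 * rhs(x, W)
print("overlap with stored pattern:", float(xi @ F(x)) / N)
```

With this toy schedule the crossing lands at a different t than the paper's t ≈ 6.5, as expected: the onset time depends on how the Hebbian gains A and B shape W(t), which the sketch does not reproduce. The point is only that the generic condition g·λ·μ_max(W(t)) = 1 picks out a single pitchfork event along the learning path, matching the qualitative mechanism both the paper and the candidate solution describe.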
Referee report (LaTeX)
\textbf{Recommendation:} minor revisions

\textbf{Journal Tier:} strong field

\textbf{Justification:} This manuscript provides a coherent, carefully executed study linking learning-induced bifurcations to memory formation and catastrophic forgetting in a high-dimensional continuous-time Hopfield network. The setup and parameterization are explicit, and the key events (pitchfork near $t \approx 6.5$, paired saddle-nodes, basin-boundary structure) are convincingly demonstrated with appropriate cross-sections and retrieval portraits. The contribution is methodological as well as substantive: treating learning time as a one-parameter path through weight space and slicing bifurcation manifolds to detect crossings scales to large networks and complements classical Hopfield energy arguments. Some claims (e.g., a universal mechanism of forgetting) are supported by strong evidence rather than a complete general proof, but this is acknowledged. Clarifying the Lyapunov/energy rationale and normal-form expectations would strengthen the exposition and align even better with standard theory.
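On the Lyapunov/energy point, the standard argument can be stated compactly. The following sketch assumes the generic symmetric-weight retrieval dynamics $\dot{x}_i = -x_i + g\sum_j w_{ij}F(x_j)$ with the strictly increasing activation above; it is the classical construction, not necessarily the paper's exact formulation.

```latex
% Sketch: strict Lyapunov function for symmetric-weight retrieval dynamics
% (assumes \dot{x}_i = -x_i + g \sum_j w_{ij} F(x_j), w_{ij} = w_{ji}, F' > 0).
\[
  E(x) \;=\; -\frac{g}{2}\sum_{i,j} w_{ij}\,F(x_i)\,F(x_j)
             \;+\; \sum_i \int_0^{x_i} s\,F'(s)\,\mathrm{d}s ,
\]
\[
  \frac{\partial E}{\partial x_i}
    = F'(x_i)\Bigl(x_i - g\sum_j w_{ij}F(x_j)\Bigr)
    = -F'(x_i)\,\dot{x}_i ,
  \qquad
  \dot{E} \;=\; -\sum_i F'(x_i)\,\dot{x}_i^{\,2} \;\le\; 0 ,
\]
% Equality holds only at fixed points, since for the stated activation
% F'(x) = \lambda / (1 + (\lambda \pi x / 2)^2) > 0 everywhere,
% so E decreases strictly along non-stationary trajectories.
```

Because $F' > 0$ for the arctan activation, $E$ decreases strictly away from equilibria, which is exactly the "strict Lyapunov function" the audit credits the candidate solution with establishing, and it underpins the classical energy rationale the report asks the authors to make explicit.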