2507.15702
Ubiquity of Uncertainty in Neuron Systems
Brandon B. Le, Bennett Lamb, Luke Benfer, Sriharsha Sambangi, Nisal Geemal Vismith, Akshaj Jagarapu
Incomplete (medium confidence)
- Category: Not specified
- Journal tier: Specialist/Solid
- Processed: Sep 28, 2025, 12:57 AM
- arXiv links: Abstract ↗ · PDF ↗
Audit review
The paper demonstrates, in five coupled neuron-map models, the coexistence of chaotic and nonchaotic attractors, and estimates an uncertainty exponent 0 < u < 1 from final-state uncertainty, explicitly invoking the boundary-dimension relation d = n − u and exhibiting nearly riddled basins in slow–fast systems. It also derives a basin-entropy scaling, S_b = s ln(N_A) ε^u, which implies that the slope of ln S_b versus ln ε agrees with u under an equiprobability assumption. However, the paper does not prove that u can be made arbitrarily small; it offers a heuristic slow–fast explanation and numerical examples, not a general theorem. The model's Phase-2 solution correctly formalizes items (i) and (iii) and cites standard results, but its claim that, under slow–fast structure, u can be tuned below any ε* > 0 (via intermittency/blowout) requires additional, unproved hypotheses for these neuron maps. Hence both are incomplete with respect to the tunability claim, while they agree on existence (0 < u < 1) and on the statement that the basin-entropy slope equals u.
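For concreteness, here is a minimal sketch of the final-state-uncertainty procedure described above. It does not use the paper's neuron maps; Newton's map for z^3 = 1 stands in as a simple system with three attractors and a fractal basin boundary, and the function names (`newton_root`, `uncertain_fraction`) are ours. The fraction f(ε) of ε-uncertain initial conditions should scale as ε^u, so u is the slope of a log–log fit, and d = n − u with n = 2 here.

```python
import numpy as np

def newton_root(z, tol=1e-6, max_iter=60):
    """Iterate Newton's map for z^3 - 1 = 0; return the index of the
    cube root of unity reached, or -1 if undecided within max_iter."""
    roots = np.exp(2j * np.pi * np.arange(3) / 3)
    for _ in range(max_iter):
        z = z - (z**3 - 1) / (3 * z**2)
        dist = np.abs(roots - z)
        if dist.min() < tol:
            return int(dist.argmin())
    return -1

def uncertain_fraction(eps, n_samples=4000, seed=0):
    """Fraction of initial conditions whose final attractor changes
    under a random perturbation of size eps (final-state uncertainty)."""
    rng = np.random.default_rng(seed)
    z0 = rng.uniform(-2, 2, n_samples) + 1j * rng.uniform(-2, 2, n_samples)
    z1 = z0 + eps * np.exp(1j * rng.uniform(0, 2 * np.pi, n_samples))
    return np.mean([newton_root(a) != newton_root(b) for a, b in zip(z0, z1)])

# f(eps) ~ eps^u, so u is the least-squares slope of ln f against ln eps.
eps_vals = np.logspace(-4, -1, 7)
f_vals = np.array([uncertain_fraction(e) for e in eps_vals])
u, _ = np.polyfit(np.log(eps_vals), np.log(f_vals), 1)
print(f"uncertainty exponent u ≈ {u:.3f}; boundary dimension d = 2 - u ≈ {2 - u:.3f}")
```

Under the paper's equiprobability assumption, the same slope should emerge from a log–log fit of the basin entropy S_b = s ln(N_A) ε^u against the box size ε.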
Referee report (LaTeX)
\textbf{Recommendation:} minor revisions \\
\textbf{Journal Tier:} specialist/solid \\
\textbf{Justification:} A concise, well-executed Letter that documents final-state uncertainty across several neuron-map models, relates the uncertainty exponent and basin entropy coherently, and proposes a clear qualitative mechanism (chance synchronization). The empirical evidence is strong, and the scope is relevant to nonlinear dynamics and computational neuroscience. Minor clarifications would improve methodological transparency and interpretation: quantify the regression uncertainty, delineate the assumptions behind the entropy scaling, and better separate heuristic from provable statements about near-riddled basins.
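The report's first request, quantifying regression uncertainty, amounts to attaching a standard error and confidence interval to the fitted slope u. A minimal sketch, using synthetic log–log data in place of the paper's measurements (the slope 0.58 and noise level 0.05 below are arbitrary, not values from the paper):

```python
import numpy as np
from scipy import stats

# Synthetic stand-in for a ln f(eps) vs ln eps scan; true slope and noise
# are arbitrary choices for illustration only.
rng = np.random.default_rng(1)
log_eps = np.log(np.logspace(-4, -1, 7))
log_f = 0.58 * log_eps - 0.3 + rng.normal(0.0, 0.05, log_eps.size)

fit = stats.linregress(log_eps, log_f)          # least-squares slope = u estimate
t95 = stats.t.ppf(0.975, df=log_eps.size - 2)   # two-sided 95% t quantile
print(f"u = {fit.slope:.3f} ± {fit.stderr:.3f} (1-sigma standard error)")
print(f"95% CI: [{fit.slope - t95 * fit.stderr:.3f}, {fit.slope + t95 * fit.stderr:.3f}]")
```

Reporting the slope with its standard error (and the fit's degrees of freedom) would make the claimed agreement between the ln S_b–ln ε slope and u falsifiable rather than visual.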