arXiv:2503.11427

FlowKac: An Efficient Neural Fokker-Planck solver using Temporal Normalizing Flows and the Feynman-Kac Formula

Naoufal El Bekri, Lucas Drumetz, Franck Vermet

Verdict
wrong (medium confidence)
Category
math.DS
Journal tier
Specialist/Solid
Processed
Sep 28, 2025, 12:56 AM

Audit review

Both the paper and the model agree on the pathwise C^k differentiability of the stochastic flow under locally Lipschitz spatial derivatives (Theorem 4.2), and on using a Taylor expansion with o(||h||^k) remainder for a fixed Brownian realization, as stated in the paper and used for the stochastic sampling trick. They also agree that when the SDE is linear in the initial condition (e.g., the GBM-derived FlowKac SDE), the first-order Taylor expansion is exact because the flow is linear in x. However, in the GBM example the paper gives a Jacobian (and hence flow multiplier) with exponent ((−μ+3σ^2)/2)t, whereas solving dX̃_t = (−μ+2σ^2)X̃_t dt + σ X̃_t dW_t yields X̃_t(x) = x exp((−μ+3σ^2/2)t + σ W_t), i.e., exponent ((−2μ+3σ^2)/2)t: the paper's μ-term is off by a factor of 1/2 (compare eq. (19) and the subsequent lines giving J and X̃). Finally, the claim that exact agreement p_θ ≡ p_FK implies zero training loss follows directly from the squared-error objective used in the paper (their discrete loss, eq. (14)).
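The disputed exponent can be checked numerically. The sketch below (not from the paper's code; parameter values are illustrative) simulates dX̃_t = (−μ+2σ^2)X̃_t dt + σX̃_t dW_t with Euler–Maruyama and compares the endpoint against the closed-form GBM solution x exp((−μ+3σ^2/2)t + σW_t) along the same Brownian path; it also confirms that the first-order Taylor expansion in the initial condition is exact for this linear flow.

```python
import numpy as np

# Illustrative parameters (not from the paper).
mu, sigma, x0, T, n = 0.5, 0.3, 1.0, 1.0, 200_000
dt = T / n
rng = np.random.default_rng(0)
dW = rng.normal(0.0, np.sqrt(dt), n)  # one fixed Brownian path

# Euler-Maruyama for dX = (-mu + 2*sigma^2) X dt + sigma X dW.
X = x0
for dw in dW:
    X += (-mu + 2 * sigma**2) * X * dt + sigma * X * dw

# Closed-form GBM solution: drift a = -mu + 2*sigma^2 gives exponent
# (a - sigma^2/2) T = (-mu + 3*sigma^2/2) T, matching the audit's claim.
W_T = dW.sum()
X_exact = x0 * np.exp((-mu + 1.5 * sigma**2) * T + sigma * W_T)
print(abs(X - X_exact) / X_exact)  # discretization error; shrinks as n grows

# The flow is linear in x, so the first-order Taylor expansion is exact:
# X_t(x0 + h) == X_t(x0) + h * J with J = X_exact / x0 (the flow multiplier).
h = 0.1
X_shifted = (x0 + h) * np.exp((-mu + 1.5 * sigma**2) * T + sigma * W_T)
print(abs(X_shifted - (X_exact + h * X_exact / x0)))  # ~machine precision
```

Substituting the paper's exponent ((−μ+3σ^2)/2)T for (−μ+3σ^2/2)T in `X_exact` makes the first comparison fail, isolating the factor-of-two in the μ-term.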

Referee report (LaTeX)

\textbf{Recommendation:} minor revisions

\textbf{Journal Tier:} specialist/solid

\textbf{Justification:}

The submission develops a practically valuable acceleration for Feynman–Kac-based learning of Fokker–Planck solutions by exploiting pathwise differentiability of stochastic flows and Taylor expansions. The method is clearly described, broadly consistent with theory, and empirically validated. A small but concrete correctness issue (a factor-of-two error in the μ-term of the GBM example's exponent) should be fixed. This correction is straightforward and does not alter the main contributions.