2407.03924
TwinLab: a framework for data-efficient training of non-intrusive reduced-order models for digital twins
Maximilian Kannapinn, Michael Schäfer, Oliver Weeger
correct (medium confidence)
- Category
- Not specified
- Journal tier
- Specialist/Solid
- Processed
- Sep 28, 2025, 12:56 AM
- arXiv Links
- Abstract ↗, PDF ↗
Audit review
The paper explicitly reports that adding the second training dataset yields “a 49% reduction in test error to Erms = 0.54 K” for ROM745+553, and its tabulated Erms values include 1.05 K (base ROM745) and 0.54 K (ROM745+553), supporting the 49% figure; the model reproduced this exact calculation: (1.05 − 0.54)/1.05 ≈ 48.6% ≈ 49%. The paper also states that the ROM needs ≈0.10 s to predict one hour of real time, i.e., a speed-up of Sp ≈ 3.6×10^4; the model computed 3600/0.10 = 36,000, matching the paper’s number.
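For reference, a minimal worked check of the two audited figures, using only the Erms and timing values quoted above; the superscript labels 745 and 745+553 are shorthand introduced here for the two ROM variants, and Sp denotes the speed-up factor as cited in the audit:
\[
\frac{E_\mathrm{rms}^{745} - E_\mathrm{rms}^{745+553}}{E_\mathrm{rms}^{745}}
= \frac{1.05\,\mathrm{K} - 0.54\,\mathrm{K}}{1.05\,\mathrm{K}}
\approx 0.486 \approx 49\%,
\qquad
S_p = \frac{3600\,\mathrm{s}}{0.10\,\mathrm{s}} = 3.6 \times 10^{4}.
\]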
Referee report (LaTeX)
\textbf{Recommendation:} minor revisions \\
\textbf{Journal Tier:} specialist/solid \\
\textbf{Justification:} The work clearly documents a data-efficient approach to training neural-ODE ROMs and demonstrates a meaningful reduction in test error alongside strong faster-than-real-time prediction performance. The audited numerical claims are directly verifiable from the reported values. Minor clarifications of the table mappings and the speed-up definition would enhance clarity and reproducibility.