Mobrief

Research · arXiv cs.LG

Demystifying OPD: Length Inflation and Stabilization Strategies for Large Language Models

On-policy distillation (OPD) trains student models under their own induced distribution while leveraging supervision from stronger teachers.

Apr 09, 2026 17:58 UTC · Paper: ~15 min · Research Source
Read original
  • The authors identify a failure mode of OPD: as training progresses, on-policy rollouts can undergo abrupt length inflation, causing truncated trajectories to dominate the training data.
  • This truncation collapse coincides with abrupt repetition saturation and induces biased gradient signals, leading to severe training instability and sharp degradation in validation performance.
  • The authors attribute this problem to the interaction between student-induced data collection and the distillation objective, which implicitly favors long and repetitive rollouts. To address this…

Context

The authors identify a failure mode of OPD: as training progresses, on-policy rollouts can undergo abrupt length inflation, causing truncated trajectories to dominate the training data. This truncation collapse coincides with abrupt repetition saturation and induces biased gradient signals, leading to severe training instability and sharp degradation in validation performance. The authors attribute the problem to the interaction between student-induced data collection and the distillation objective, which implicitly favors long and repetitive rollouts. To address this issue, the paper proposes Stable OPD, a stabilized OPD framework that combines a reference-based divergence constraint with rollout mixture distillation. Together, these components mitigate repetition-induced length inflation and stabilize OPD training. Across multiple math reasoning datasets, the proposed approach prevents truncation collapse, stabilizes training dynamics, and improves performance by 7.2% on average.
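To make the two stabilization components concrete, here is a minimal sketch of what a per-token Stable OPD loss and rollout mixing could look like. Everything below is an assumption for illustration: the function names, the reverse-KL distillation term, the β weighting on the reference constraint, and the mixing probability are not taken from the paper, which this brief does not specify in detail.

```python
import numpy as np

def stable_opd_loss(student_logits, teacher_logits, ref_logits, beta=0.1):
    """Illustrative per-token loss (an assumption, not the paper's exact
    formulation): reverse KL from the student to the teacher, plus a
    reference-based divergence penalty weighted by beta that discourages
    the student from drifting toward degenerate, repetitive rollouts."""
    def log_softmax(x):
        x = x - x.max(axis=-1, keepdims=True)
        return x - np.log(np.exp(x).sum(axis=-1, keepdims=True))

    ls, lt, lr = map(log_softmax, (student_logits, teacher_logits, ref_logits))
    ps = np.exp(ls)
    kl_teacher = (ps * (ls - lt)).sum(-1)  # KL(student || teacher): distillation
    kl_ref = (ps * (ls - lr)).sum(-1)      # KL(student || reference): constraint
    return (kl_teacher + beta * kl_ref).mean()

def mix_rollouts(student_rollouts, reference_rollouts, p_ref=0.3, rng=None):
    """Illustrative rollout mixture distillation: replace a fraction of
    student-generated rollouts with reference rollouts so the training
    distribution is not dominated by (possibly truncated) student data."""
    rng = rng or np.random.default_rng()
    return [ref if rng.random() < p_ref else stu
            for stu, ref in zip(student_rollouts, reference_rollouts)]
```

The intuition this sketch encodes: the β-weighted reference term bounds how far the student can drift from a well-behaved policy (counteracting length inflation), while rollout mixing keeps some non-student trajectories in the training data even when student rollouts start to collapse.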


Paper PDF