arxiv:2603.24472

Why Does Self-Distillation (Sometimes) Degrade the Reasoning Capability of LLMs?

Published on Mar 25 · Submitted by JeonghyeKim on Mar 26

Abstract

AI-generated summary: Self-distillation in large language models can degrade mathematical reasoning performance by suppressing uncertainty expression, particularly affecting out-of-distribution tasks.

Self-distillation has emerged as an effective post-training paradigm for LLMs, often improving performance while shortening reasoning traces. In mathematical reasoning, however, we find that it can reduce response length while degrading performance. We trace this degradation to the suppression of epistemic verbalization, the model's expression of uncertainty during reasoning. Through controlled experiments varying conditioning-context richness and task coverage, we show that conditioning the teacher on rich information suppresses uncertainty expression, enabling rapid in-domain optimization with limited task coverage but harming out-of-distribution (OOD) performance, where unseen problems benefit from expressing uncertainty and adjusting accordingly. Across Qwen3-8B, DeepSeek-Distill-Qwen-7B, and Olmo3-7B-Instruct, we observe performance drops of up to 40%. Our findings highlight that exposing appropriate levels of uncertainty is crucial for robust reasoning and underscore the importance of optimizing reasoning behavior beyond merely reinforcing correct answer traces.
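
To make the conditioning manipulation concrete, here is a minimal sketch of how an off-policy distillation pair could be built under the setup the abstract describes. Everything below is an assumption for illustration: `build_distillation_pair`, the prompt templates, and the generic `generate(prompt) -> str` sampling callable are hypothetical stand-ins, not the paper's implementation.

```python
from typing import Callable

def build_distillation_pair(
    question: str,
    reference_solution: str,
    generate: Callable[[str], str],  # any LLM sampling call, e.g. wrapping model.generate
    rich_conditioning: bool,
) -> tuple[str, str]:
    """Return a (student_prompt, teacher_trace) fine-tuning pair."""
    if rich_conditioning:
        # Rich context: the teacher sees a worked solution, so its trace
        # tends to come out short and confident, with few verbalized hedges.
        teacher_prompt = (
            f"Question: {question}\n"
            f"Reference solution: {reference_solution}\n"
            "Explain the reasoning step by step."
        )
    else:
        # Lean context: the teacher must solve from scratch, so uncertainty
        # ("maybe", "let me re-check", ...) shows up in the trace.
        teacher_prompt = f"Question: {question}\nThink step by step."
    trace = generate(teacher_prompt)
    # The student is trained without the reference solution, so whatever
    # style the teacher emitted (hedged or confident) is what gets imitated.
    student_prompt = f"Question: {question}\nThink step by step."
    return student_prompt, trace
```

The point of the sketch is the asymmetry: under rich conditioning the student imitates traces produced with information it will never have at inference time, which is one plausible route to the OOD degradation reported above.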

Community

The part that grabbed me is how richer teacher conditioning actively suppresses epistemic verbalization, shortening reasoning traces but hurting out-of-distribution performance. They show this with off-policy and on-policy distillation across varying epistemic density and task coverage, which makes me wonder if there's a sweet spot that preserves minimal uncertainty signals while still gaining in-domain efficiency. A small but telling detail is that conditioning on full solutions makes the student imitate a concise, confident style, and arXivLens's breakdown does a nice job unpacking that mechanism, though I'd love an ablation that fixes answer length while varying explicit uncertainty prompts. Also curious how this would translate to settings with human feedback or multi-round instruction tuning: could a calibrated uncertainty penalty keep robustness without losing the practical gains?
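
On the measurement wish in the comment above, one crude way to probe epistemic verbalization independently of accuracy is to count hedge phrases per token of trace. This is a hypothetical heuristic, not the paper's metric; the marker lexicon is illustrative only:

```python
import re

# Illustrative hedge-marker lexicon; the paper's actual notion of
# "epistemic verbalization" may be defined quite differently.
EPISTEMIC_MARKERS = [
    r"\bnot sure\b", r"\bmaybe\b", r"\bperhaps\b",
    r"\blet me (?:re-)?check\b", r"\bwait\b", r"\bhmm\b",
    r"\bi think\b", r"\bmight be\b",
]

def epistemic_density(trace: str) -> float:
    """Hedge markers per 100 whitespace tokens of a reasoning trace."""
    tokens = trace.split()
    if not tokens:
        return 0.0
    hits = sum(
        len(re.findall(pattern, trace, flags=re.IGNORECASE))
        for pattern in EPISTEMIC_MARKERS
    )
    return 100.0 * hits / len(tokens)

print(epistemic_density("Hmm, maybe factor first. Wait, let me check the sign."))
# -> 40.0: four markers in a ten-token trace
```

A length-controlled ablation like the one requested would then compare traces matched on token count but differing in this density.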

We demand rigidly defined areas of doubt and uncertainty!


Get this paper in your agent:

hf papers read 2603.24472
Don't have the latest CLI?
curl -LsSf https://hf.co/cli/install.sh | bash
