Files changed (1)
README.md +23 -15

README.md CHANGED
@@ -9,12 +9,10 @@ tags:
 library_name: transformers
 ---
 
-# SSD-Qwen3-4B-Instruct
+# SimpleSD-4B-instruct
 
 This model was produced using **Simple Self-Distillation (SSD)**, a method that improves code generation by fine-tuning a language model on its own sampled outputs—without rewards, verifiers, teacher models, or reinforcement learning.
 
-- **Base model:** [Qwen/Qwen3-4B-Instruct-2507](https://huggingface.co/Qwen/Qwen3-4B-Instruct-2507)
-- **Variant:** instruct
 - **Self-distillation sampling:** temperature=1.6, top_p=0.8, top_k=20
 - **Evaluation sampling:** temperature=1.1, top_p=0.8, top_k=20
 
@@ -23,6 +21,15 @@ This model was produced using **Simple Self-Distillation (SSD)**, a method that
 - They are not optimized Qwen releases.
 - They don't represent a broader open-source model strategy.
 
+## Usage
+
+```python
+from transformers import AutoModelForCausalLM, AutoTokenizer
+
+model = AutoModelForCausalLM.from_pretrained("apple/SimpleSD-4B-instruct")
+tokenizer = AutoTokenizer.from_pretrained("apple/SimpleSD-4B-instruct")
+```
+
 ## Method
 
 SSD samples solutions from the base model using non-unit temperature and top-k/top-p truncation, then fine-tunes on those samples via standard supervised learning. Despite its simplicity, SSD yields large gains on competitive programming benchmarks, with improvements concentrating on harder problems. The mechanism traces to resolving a *precision–exploration conflict*: SSD reshapes token distributions in a context-dependent way so that a single global decoding configuration becomes far more effective at evaluation time.
@@ -38,19 +45,20 @@ LiveCodeBench (%)
 
 ## Paper
 
-**Embarrassingly Simple Self-Distillation Improves Code Generation**
-
-Ruixiang Zhang, Richard He Bai, Huangjie Zheng, Navdeep Jaitly, Ronan Collobert, Yizhe Zhang
-
-## Usage
-
-```python
-from transformers import AutoModelForCausalLM, AutoTokenizer
-
-model = AutoModelForCausalLM.from_pretrained("apple/SSD-Qwen3-4B-Instruct")
-tokenizer = AutoTokenizer.from_pretrained("apple/SSD-Qwen3-4B-Instruct")
+[**Embarrassingly Simple Self-Distillation Improves Code Generation**](https://arxiv.org/abs/2604.01193)
+
+```bibtex
+@misc{zhang2026embarrassinglysimpleselfdistillationimproves,
+  title={Embarrassingly Simple Self-Distillation Improves Code Generation},
+  author={Ruixiang Zhang and Richard He Bai and Huangjie Zheng and Navdeep Jaitly and Ronan Collobert and Yizhe Zhang},
+  year={2026},
+  eprint={2604.01193},
+  archivePrefix={arXiv},
+  primaryClass={cs.CL},
+  url={https://arxiv.org/abs/2604.01193},
+}
 ```
 
 ## License
 
-This model is released under the [Apple Machine Learning Research Model License](https://huggingface.co/apple/SSD-Qwen3-4B-Instruct/blob/main/LICENSE).
+This model is released under the [Apple Machine Learning Research Model License](https://huggingface.co/apple/SimpleSD-4B-instruct/blob/main/LICENSE).
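The sampling settings quoted in the card (e.g. temperature=1.6, top_p=0.8, top_k=20 for self-distillation) compose as temperature scaling, then top-k truncation, then nucleus (top-p) truncation, then renormalization. A minimal self-contained sketch of that composition — purely illustrative, not the decoding code `transformers` actually uses:

```python
import math
import random

def sample_token(logits, temperature=1.6, top_k=20, top_p=0.8, rng=random):
    """Draw one token id: temperature scaling, then top-k, then top-p truncation."""
    # Temperature-scaled softmax (max-subtracted for numerical stability).
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # top-k: keep only the k most probable token ids.
    order = sorted(range(len(probs)), key=probs.__getitem__, reverse=True)[:top_k]
    # top-p: keep the smallest high-probability prefix whose mass reaches top_p.
    kept, mass = [], 0.0
    for i in order:
        kept.append(i)
        mass += probs[i]
        if mass >= top_p:
            break
    # Renormalize over the kept ids and sample.
    z = sum(probs[i] for i in kept)
    r, acc = rng.random() * z, 0.0
    for i in kept:
        acc += probs[i]
        if r <= acc:
            return i
    return kept[-1]

# Near-greedy sanity check: with a tiny temperature the argmax token dominates.
print(sample_token([0.1, 9.0, 0.2], temperature=0.01))  # 1
```

A low temperature collapses the distribution onto the argmax token, while the card's high self-distillation temperature (1.6) flattens it so top-k/top-p truncation does more of the work — the trade-off the Method section's "precision–exploration conflict" refers to.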