Files changed (1)

README.md CHANGED (+24 -16)

@@ -9,12 +9,10 @@ tags:
 library_name: transformers
 ---

-# SSD-Qwen3-30B-A3B-Instruct
+# SimpleSD-30B-instruct

 This model was produced using **Simple Self-Distillation (SSD)**, a method that improves code generation by fine-tuning a language model on its own sampled outputs—without rewards, verifiers, teacher models, or reinforcement learning.

-- **Base model:** [Qwen/Qwen3-30B-A3B-Instruct-2507](https://huggingface.co/Qwen/Qwen3-30B-A3B-Instruct-2507)
-- **Variant:** instruct
 - **Self-distillation sampling:** temperature=1.6, top_p=0.8, top_k=20
 - **Evaluation sampling:** temperature=0.9, top_p=0.8, top_k=20

@@ -23,6 +21,15 @@ This model was produced using **Simple Self-Distillation (SSD)**, a method that
 - They are not optimized Qwen releases.
 - They don't represent a broader open-source model strategy.

+## Usage
+
+```python
+from transformers import AutoModelForCausalLM, AutoTokenizer
+
+model = AutoModelForCausalLM.from_pretrained("apple/SimpleSD-30B-instruct")
+tokenizer = AutoTokenizer.from_pretrained("apple/SimpleSD-30B-instruct")
+```
+
 ## Method

 SSD samples solutions from the base model using non-unit temperature and top-k/top-p truncation, then fine-tunes on those samples via standard supervised learning. Despite its simplicity, SSD yields large gains on competitive programming benchmarks, with improvements concentrating on harder problems. The mechanism traces to resolving a *precision–exploration conflict*: SSD reshapes token distributions in a context-dependent way so that a single global decoding configuration becomes far more effective at evaluation time.

@@ -37,20 +44,21 @@ LiveCodeBench (%)
 | **+ SSD (this model)** | **55.3** (+12.9) | **71.6** (+18.1) | **54.3** (+8.5) | **70.7** (+12.0) |

 ## Paper
-
-**Embarrassingly Simple Self-Distillation Improves Code Generation**
-
-Ruixiang Zhang, Richard He Bai, Huangjie Zheng, Navdeep Jaitly, Ronan Collobert, Yizhe Zhang
-
-## Usage
-
-```python
-from transformers import AutoModelForCausalLM, AutoTokenizer
-
-model = AutoModelForCausalLM.from_pretrained("apple/SSD-Qwen3-30B-A3B-Instruct")
-tokenizer = AutoTokenizer.from_pretrained("apple/SSD-Qwen3-30B-A3B-Instruct")
+[**Embarrassingly Simple Self-Distillation Improves Code Generation**](https://arxiv.org/abs/2604.01193)
+
+```bibtex
+@misc{zhang2026embarrassinglysimpleselfdistillationimproves,
+  title={Embarrassingly Simple Self-Distillation Improves Code Generation},
+  author={Ruixiang Zhang and Richard He Bai and Huangjie Zheng and Navdeep Jaitly and Ronan Collobert and Yizhe Zhang},
+  year={2026},
+  eprint={2604.01193},
+  archivePrefix={arXiv},
+  primaryClass={cs.CL},
+  url={https://arxiv.org/abs/2604.01193},
+}
 ```

+
 ## License

-This model is released under the [Apple Machine Learning Research Model License](https://huggingface.co/apple/SSD-Qwen3-30B-A3B-Instruct/blob/main/LICENSE).
+This model is released under the [Apple Machine Learning Research Model License](https://huggingface.co/apple/SimpleSD-30B-instruct/blob/main/LICENSE).
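The Method section of the card hinges on decoding with non-unit temperature plus top-k/top-p truncation. As a reference for readers, here is a minimal, generic sketch of that kind of truncated sampling, shown with the card's self-distillation settings (temperature=1.6, top_k=20, top_p=0.8). This is an illustration of the standard decoding operations, not the authors' training or inference code, and it applies top-p within the top-k set for simplicity.

```python
import numpy as np

def truncated_probs(logits, temperature=1.6, top_k=20, top_p=0.8):
    """Temperature-scaled softmax followed by top-k then top-p truncation.

    Generic sketch of the decoding settings listed in the model card;
    not the authors' implementation.
    """
    logits = np.asarray(logits, dtype=np.float64) / temperature
    probs = np.exp(logits - logits.max())  # numerically stable softmax
    probs /= probs.sum()

    # Top-k: keep only the k most probable tokens.
    order = np.argsort(probs)[::-1]
    keep = order[:top_k]

    # Top-p: within the kept set, take the smallest prefix whose cumulative
    # (unrenormalized) mass reaches top_p, or all of top-k if it never does.
    cum = np.cumsum(probs[keep])
    cutoff = int(np.searchsorted(cum, top_p)) + 1
    keep = keep[:cutoff]

    # Zero out everything else and renormalize.
    out = np.zeros_like(probs)
    out[keep] = probs[keep]
    return out / out.sum()

rng = np.random.default_rng(0)
logits = rng.normal(size=100)
p = truncated_probs(logits)
token = rng.choice(len(p), p=p)  # one token id drawn from the truncated distribution
```

Evaluation sampling in the card uses the same truncation with a lower temperature (0.9), which concentrates more mass on the top-ranked tokens before the cutoff is applied.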
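The Usage hunk stops after loading the model and tokenizer. A hedged continuation using the card's evaluation sampling settings (temperature=0.9, top_p=0.8, top_k=20) might look like the following; `generate_solution` is a hypothetical helper introduced here, while `apply_chat_template` and the `generate` keyword arguments are standard transformers APIs.

```python
# Evaluation sampling settings as listed in the model card.
EVAL_SAMPLING = dict(do_sample=True, temperature=0.9, top_p=0.8, top_k=20)

def generate_solution(model, tokenizer, prompt, max_new_tokens=1024):
    """Chat-format `prompt` and decode with the card's evaluation settings."""
    messages = [{"role": "user", "content": prompt}]
    # apply_chat_template(..., return_tensors="pt") returns the input id tensor.
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output = model.generate(input_ids, max_new_tokens=max_new_tokens, **EVAL_SAMPLING)
    # Strip the prompt tokens and decode only the completion.
    return tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)

# Usage, continuing from the card's Usage snippet (downloads the full checkpoint):
# model = AutoModelForCausalLM.from_pretrained("apple/SimpleSD-30B-instruct")
# tokenizer = AutoTokenizer.from_pretrained("apple/SimpleSD-30B-instruct")
# print(generate_solution(model, tokenizer, "Reverse a string in Python."))
```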