khtsly committed · verified
Commit 57db17f · 1 Parent(s): 9e042ba

Update README.md

Files changed (1): README.md +1 -1
README.md CHANGED
@@ -20,7 +20,7 @@ pipeline_tag: image-text-to-text
 
  Mini-Coder is built on top of the Qwen3.5-9B model with Continual Pretraining (CPT): we feed ~500k high-quality curated Luau samples to improve its Luau coding capability.
 
- We also inject over 14k samples of open-source Claude 4.6 distillations, plus a few additional samples for Supervised Fine-tuning (SFT), to improve the model's reasnoning; we also see the average number of consumed tokens drastically reduced.
+ We also inject over 14k samples of open-source Claude 4.6 distillations, plus a few additional samples for Supervised Fine-tuning (SFT), to improve the model's reasoning; we also see the average number of consumed tokens drastically reduced.
 
  It's fine-tuned efficiently using LoRA (16-bit) and rsLoRA with Rank (r) set to 64 and Alpha (α) set to 128, ensuring strong adaptation and retention of new complex logic; it was trained specifically to handle up to 32,768 (32k) tokens of maximum output (recommended).
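The rsLoRA setting mentioned in the README changes how the adapter update is scaled relative to classic LoRA. A minimal numeric sketch, using only the r=64 and α=128 values stated in the card (the variable names and the comparison to classic LoRA are illustrative, not from the README):

```python
import math

# Hyperparameters stated in the model card above.
r, alpha = 64, 128

# Classic LoRA scales the low-rank update by alpha / r.
classic_scale = alpha / r

# rsLoRA (rank-stabilized LoRA) scales by alpha / sqrt(r) instead,
# which keeps the update magnitude stable as the rank grows.
rslora_scale = alpha / math.sqrt(r)

print(classic_scale)  # 2.0
print(rslora_scale)   # 128 / sqrt(64) = 16.0
```

At r=64 the rsLoRA scale is 8× larger than the classic one, which is why rsLoRA is commonly paired with higher ranks like the 64 used here.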