DistilQwen
H100, BF16. 30B → 1.7B/0.6B via TKD. Three teachers. 15 models plus the DISC paper. 10K+ downloads. DOIs: 10.57967/hf/8165 & 10.57967/hf/8194
reaperdoesntknow/Qwen3-1.7B-Distilled-30B-A3B
Text Generation • 2B • Updated • 4.72k • 1 • Note: First in the DistilQwen chain. Foundation for all downstream models.
reaperdoesntknow/Qwen3-1.7B-Distilled-30B-A3B-SFT-GGUF
Text Generation • 2B • Updated • 1.4k • Note: Edge deployment of the full Instruct pipeline. Apache 2.0.
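The GGUF variants run locally through llama.cpp bindings. A minimal sketch using llama-cpp-python; the quant filename below is hypothetical, so substitute whichever file the repo actually ships:

```python
# Minimal local inference on a GGUF quant via llama-cpp-python.
# The filename is a placeholder; download the real quant from the repo first.
from llama_cpp import Llama

llm = Llama(
    model_path="Qwen3-1.7B-Distilled-30B-A3B-SFT.Q4_K_M.gguf",  # hypothetical filename
    n_ctx=4096,  # context window; tune for your device
)

out = llm("Explain knowledge distillation in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```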
reaperdoesntknow/Qwen3-1.7B-Distilled-30B-A3B-SFT
2B • Updated • 1.12k • Note: Second stage: distil → SFT on instruction-following data.
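The safetensors checkpoints load through the standard transformers API. A minimal sketch; the dtype and generation settings are illustrative, not tuned:

```python
# Load a distilled checkpoint from this collection with transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "reaperdoesntknow/Qwen3-1.7B-Distilled-30B-A3B-SFT"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

inputs = tokenizer("What does token-level distillation transfer?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```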
reaperdoesntknow/Qwen3-0.6B-Distilled-30B-A3B
Text Generation • 0.8B • Updated • 4.56k • Note: 50× compression: 30B → 0.6B. Smallest in the distil family.
reaperdoesntknow/Qwen3-0.6B-Distilled-30B-A3B-Thinking-SFT
Text Generation • 0.8B • Updated • 4.62k • 2 • Note: Higher-entropy teacher distributions → richer student representations.
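The note's claim is the classic temperature argument: softening the teacher's distribution exposes the relative probabilities of non-argmax tokens, which carry more information than the hard label alone. A minimal sketch of temperature-scaled forward-KL token distillation, assuming logit access to both models (the function and argument names are illustrative):

```python
import torch.nn.functional as F

def tkd_loss(student_logits, teacher_logits, T=2.0):
    """Temperature-scaled forward-KL distillation over next-token logits.

    Higher T flattens (raises the entropy of) the teacher distribution,
    exposing 'dark knowledge' about non-argmax tokens to the student.
    Shapes: (batch, seq, vocab). The T**2 factor restores gradient
    magnitude, as in Hinton et al. (2015).
    """
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    p_teacher = F.softmax(teacher_logits / T, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (T * T)
```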
reaperdoesntknow/Qwen3-0.6B-Distilled-30B-A3B-Thinking-SFT-GGUF
Text Generation • 0.8B • Updated • 1.44k • Note: mradermacher also auto-quantized this one; 420+ additional downloads accrue on that third-party mirror.
reaperdoesntknow/Qwen3-1.7B-Coder-Distilled-SFT
Text Generation • 2B • Updated • 5.32k • 1 • Note: Different capability profile: hierarchical problem solving.
reaperdoesntknow/Qwen3-1.7B-Coder-Distilled-SFT-GGUF
Text Generation • 2B • Updated • 2.05k • Note: Coder pipeline, quantized. F16/Q4/Q5/Q8.
reaperdoesntknow/DistilQwen3-1.7B-uncensored
Text Generation • 2B • Updated • 5.07k • Note: Uncensored base distillation. No alignment filtering.
reaperdoesntknow/TopologicalQwen
Text Generation • 2B • Updated • 5.77k • Note: TKD flagship. BV decomposition → jump detection → curriculum.
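The card's pipeline is not spelled out here, so the following is only one plausible reading of "BV decomposition → jump detection → curriculum": split a per-token loss trace into smooth and jump parts in the bounded-variation sense, then order training samples by their jump mass. Every name and threshold below is invented for illustration:

```python
import torch

def jump_mass(loss_trace: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    """Crude BV-style split of a per-token loss trace (hypothetical).

    First differences with magnitude above tau are treated as the 'jump'
    component; the rest is the smooth component. The total jump mass can
    serve as a curriculum key: train on low-jump (smooth) samples first.
    """
    diffs = loss_trace[1:] - loss_trace[:-1]
    jumps = diffs.abs() * (diffs.abs() > tau)  # keep only large jumps
    return jumps.sum()

# Hypothetical curriculum ordering over a list of per-token loss traces:
# order = sorted(range(len(traces)), key=lambda i: jump_mass(traces[i]).item())
```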
reaperdoesntknow/DiStil-Qwen3-1.7B-uncensored
2B • Updated • 1.87k • Note: DISC-informed distillation. Uncensored. Research-focused.
reaperdoesntknow/Disctil-Qwen3-1.7B
Text Generation • 2B • Updated • 4.56k • Note: DISC-refined. Discrepancy-aware training produces cleaner signal.
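The DISC objective itself lives in the paper; as a generic illustration of what "discrepancy-aware" can mean, one common pattern reweights the per-token loss by teacher-student disagreement so that already-matched tokens stop dominating the gradient. A hypothetical sketch, not the paper's method:

```python
import torch.nn.functional as F

def discrepancy_weighted_ce(student_logits, teacher_logits, targets):
    """Per-token cross-entropy reweighted by teacher-student KL (illustrative).

    Tokens where the student already tracks the teacher get small weights;
    high-discrepancy tokens dominate. Shapes: logits (B, S, V), targets (B, S).
    """
    ce = F.cross_entropy(
        student_logits.flatten(0, 1), targets.flatten(), reduction="none"
    )
    p_t = F.softmax(teacher_logits, dim=-1)
    log_p_s = F.log_softmax(student_logits, dim=-1)
    disc = (p_t * (p_t.clamp_min(1e-9).log() - log_p_s)).sum(-1)  # per-token KL
    weights = (disc.flatten() / (disc.flatten().mean() + 1e-9)).detach()
    return (weights * ce).mean()  # gradient flows only through ce
```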
reaperdoesntknow/DistilQwen3-1.7B-uncensored-GGUF
2B • Updated • 1.85k • 1 • Note: Community-validated; third-party quantizations exist.
reaperdoesntknow/Qwen3-1.7B-Thinking-Distil
Text Generation • 2B • Updated • 5.65k • 1 • Note: One of the most downloaded models in the collection. Thinking teacher = richest signal.
reaperdoesntknow/LFM2.5-1.2B-Distilled-SFT
Text Generation • 1B • Updated • 1.33k • Note: Shows that TKD carries across architecture families, not just within Qwen.
reaperdoesntknow/Discrepancy_Calculus
Updated • Note: Continuous Thought Dynamics, the mathematical backbone of DualMind.