DASD-4B-Thinking - GGUF

This is a quantized GGUF version of Alibaba-Apsara/DASD-4B-Thinking created using llama.cpp.

Available Quantizations

| Filename | Quant Type | Description |
|----------|------------|-------------|
| DASD-4B-Thinking.Q2_K.gguf | Q2_K | Smallest, significant quality loss |
| DASD-4B-Thinking.Q3_K_S.gguf | Q3_K_S | Very small, low quality |
| DASD-4B-Thinking.Q3_K_M.gguf | Q3_K_M | Very small, medium quality |
| DASD-4B-Thinking.Q3_K_L.gguf | Q3_K_L | Small, better quality than Q3_K_M |
| DASD-4B-Thinking.Q4_0.gguf | Q4_0 | Small, legacy format |
| DASD-4B-Thinking.Q4_1.gguf | Q4_1 | Small, legacy format with better accuracy |
| DASD-4B-Thinking.Q4_K_S.gguf | Q4_K_S | Small, good quality |
| DASD-4B-Thinking.Q4_K_M.gguf | Q4_K_M | Medium, balanced quality - recommended |
| DASD-4B-Thinking.Q5_0.gguf | Q5_0 | Medium, legacy format |
| DASD-4B-Thinking.Q5_1.gguf | Q5_1 | Medium, legacy format with better accuracy |
| DASD-4B-Thinking.Q5_K_S.gguf | Q5_K_S | Medium, good quality |
| DASD-4B-Thinking.Q5_K_M.gguf | Q5_K_M | Medium, high quality - recommended |
| DASD-4B-Thinking.Q6_K.gguf | Q6_K | Large, very high quality |
| DASD-4B-Thinking.Q8_0.gguf | Q8_0 | Large, near-lossless quality |
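
If you only need one file, the huggingface_hub client can fetch it directly instead of cloning the whole repository. A minimal sketch, assuming you want the recommended Q4_K_M quantization from the table above:

# pip install huggingface_hub
from huggingface_hub import hf_hub_download

# Downloads a single GGUF file into the local HF cache and returns its path.
path = hf_hub_download(
    repo_id="aashish1904/DASD-4B-Thinking-GGUF",
    filename="DASD-4B-Thinking.Q4_K_M.gguf",
)
print(path)  # pass this path to llama.cpp, llama-cpp-python, etc.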

Usage

With llama.cpp

./llama-cli -m DASD-4B-Thinking.Q4_K_M.gguf -p "Your prompt here"
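
The same file also works from Python via the llama-cpp-python bindings. A minimal sketch, assuming the GGUF file sits in the current directory; the system prompt mirrors the one recommended in the original model card below:

# pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama(model_path="DASD-4B-Thinking.Q4_K_M.gguf", n_ctx=8192)

# create_chat_completion applies the chat template embedded in the GGUF metadata.
out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is 17 * 24?"},
    ],
    max_tokens=512,
    temperature=1.0,  # sampling settings suggested in the original card
    top_p=1.0,
)
print(out["choices"][0]["message"]["content"])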

With Ollama

ollama run hf.co/aashish1904/DASD-4B-Thinking-GGUF
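
From Python, the official ollama client package can talk to a running Ollama daemon using the same model reference. A minimal sketch, assuming Ollama is installed and the daemon is running:

# pip install ollama
import ollama

# Ollama pulls the GGUF from Hugging Face on first use.
resp = ollama.chat(
    model="hf.co/aashish1904/DASD-4B-Thinking-GGUF",
    messages=[{"role": "user", "content": "What is 12 squared?"}],
)
print(resp["message"]["content"])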

Original Model

Alibaba-Apsara/DASD-4B-Thinking

Original Model Card

DASD-4B-Thinking

🚀 Introduction

We release DASD-4B-Thinking, a compact yet capable 4B dense language model specialized in long chain-of-thought (Long-CoT) reasoning across mathematics, code generation, and scientific reasoning. DASD-4B-Thinking is post-trained from Qwen3-4B-Instruct-2507 (non-thinking student) and distilled from gpt-oss-120b (teacher) via a distribution-aligned sequence distillation pipeline, achieving strong Long-CoT reasoning performance with substantially fewer training samples (448K) than many existing larger models require.

[Figure: benchmark results]

📊 Performance

| Model | Data | AIME24 | AIME25 | LiveCodeBench v5 | LiveCodeBench v6 | GPQA-D |
|-------|------|--------|--------|------------------|------------------|--------|
| Qwen3-4B-Thinking-2507 | ❌ | - | 81.3 | - | 55.2 | 65.8 |
| Qwen3-14B | ❌ | 79.3 | 70.4 | 63.5 | - | 64.0 |
| Qwen3-32B | ❌ | 81.4 | 72.9 | 65.7 | - | 68.4 |
| DeepSeek-R1-0528-Qwen3-8B | ❌ | 86.0 | 76.3 | 60.5 | - | 61.1 |
| GLM-Z1-32B-0414 | ❌ | 80.8 | 63.6 | 59.1 | - | 66.1 |
| GLM-Z1-9B-0414 | ❌ | 76.4 | 56.6 | 51.8 | - | 58.5 |
| Mistral3-3B | ❌ | - | 72.1 | 54.8 | - | 53.4 |
| Mistral3-8B | ❌ | - | 78.7 | 61.6 | - | 66.8 |
| AM-thinking-v1 | ✅ | 85.3 | 74.4 | 70.3 | - | - |
| POLARIS-4B-Preview | ✅ | 81.2 | 79.4 | - | - | - |
| OpenThoughts3-7B | ✅ | 69.0 | 53.3 | 51.7 | - | 53.7 |
| Pai-DistillQwen-ThoughtY-4B | ✅ | 76.7 | - | - | - | 56.1 |
| Pai-DistillQwen-ThoughtY-8B | ✅ | 76.7 | - | - | - | 62.1 |
| NVIDIA-OpenReasoning-Nemotron-7B | ✅ | 84.7 | 78.2 | 63.9 | - | 61.4 |
| NVIDIA-Nemotron-Ultra-253B | ✅ | 80.8 | 72.5 | 68.1 | - | 76.0 |
| DASD-4B-Thinking (Ours) | ✅ | 88.5 | 83.3 | 69.3 | 67.5 | 68.4 |

💡 Why DASD-4B-Thinking Matters

While the community rushes to build distilled reasoning models using massive datasets (often millions of samples), DASD-4B-Thinking demonstrates that distribution alignment matters more than data quantity. It establishes a new baseline for data-efficient distillation, delivering flagship-level reasoning in a 4B model that can run on consumer hardware.

DASD-4B-Thinking democratizes the training recipe:

  • Open-Source Model: It achieves State-of-the-Art performance among open-source models of comparable scale and outperforms significantly larger models.

  • Extreme Data Efficiency: Achieves these results using only 448K training samples, an order of magnitude fewer than comparable efforts.

  • Novel pipeline: It presents a systematic reexamination of sequence-level distillation and introduces a novel distribution-aligned sequence distillation pipeline.

  • Open-Source Data: We release the Alibaba-Apsara/Superior-Reasoning-SFT-gpt-oss-120b dataset, allowing the community to reproduce our off-policy temperature-scheduled pipeline:

    • 105K Low-Temperature responses for stability (Stage 1).

    • 330K High-Temperature responses for diversity (Stage 2).

  • Proven Scalability: The exact same data recipe generalizes effectively to larger architectures, as demonstrated by our DASD-30B-A3B-Thinking-Preview (MoE), which achieves competitive performance without extra RL.

⚙️ Post-Training Pipeline

DASD-Thinking introduces a new paradigm of Distribution-Aligned Sequence Distillation: an enhanced sequence-level distillation pipeline that incorporates Temperature-scheduled Learning, Divergence-aware Sampling, and Mixed-policy Distillation, achieving efficient capability transfer with a minimal amount of data (448K samples). Please refer to our report for more details.
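
As a rough illustration only (the report has the actual method), sequence-level distillation of this kind reduces to supervised fine-tuning of the student on teacher-sampled sequences, with the sampling temperature scheduled by stage. The following is a schematic sketch, not the authors' code: the checkpoint names, the concrete temperature, and the shared tokenizer are simplifying assumptions.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder checkpoints; the real teacher is gpt-oss-120b and the real
# student is Qwen3-4B-Instruct-2507 (which uses a different tokenizer,
# so teacher outputs would be re-tokenized in practice).
teacher = AutoModelForCausalLM.from_pretrained("teacher-model", device_map="auto")
student = AutoModelForCausalLM.from_pretrained("student-model", device_map="auto")
tok = AutoTokenizer.from_pretrained("student-model")

def distill_step(prompt: str, temperature: float) -> torch.Tensor:
    """Sample one teacher sequence at the stage temperature, then compute the
    student's ordinary next-token cross-entropy loss on that sequence."""
    enc = tok(prompt, return_tensors="pt").to(teacher.device)
    seq = teacher.generate(**enc, do_sample=True,
                           temperature=temperature, max_new_tokens=1024)
    labels = seq.clone()
    labels[:, : enc.input_ids.shape[1]] = -100  # train on the response only
    return student(input_ids=seq.to(student.device),
                   labels=labels.to(student.device)).loss

# Stage 1 uses low-temperature teacher samples for stability, Stage 2
# high-temperature samples for diversity; the value here is an assumption.
loss = distill_step("Prove that sqrt(2) is irrational.", temperature=0.6)
loss.backward()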

[Figure: DASD-Thinking training pipeline]

⚡ Quick Start

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Alibaba-Apsara/DASD-4B-Thinking"

# Load the tokenizer and model; device_map="auto" places weights on available GPUs.
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto",
    trust_remote_code=True,
)

prompt = "Natalia sold clips to 48 of her friends in April, and then she sold half as many clips in May. How many clips did Natalia sell altogether in April and May?"
# Keep the system prompt: it was used during all training stages (see note below).
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": prompt},
]

# Render the chat template and append the generation prompt.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)

model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Long-CoT traces can be lengthy, hence the large max_new_tokens budget.
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=81920,
)

# Strip the prompt tokens and decode only the newly generated response.
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
content = tokenizer.decode(output_ids, skip_special_tokens=True)
print(content)

Note: We include the system prompt, as it was used during all training stages. To ensure consistent output quality, we recommend including the same system prompt during actual usage; otherwise, the model's responses may be affected.

For deployment, you can use sglang>=0.4.6.post1 or vllm>=0.8.5 to create an OpenAI-compatible API endpoint:

  • SGLang:
python -m sglang.launch_server --model-path Alibaba-Apsara/DASD-4B-Thinking --context-length 262144
  • vLLM:
vllm serve Alibaba-Apsara/DASD-4B-Thinking --max-model-len 262144
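
Either command exposes an OpenAI-compatible endpoint, so any OpenAI client can query it. A minimal sketch with the openai Python package, assuming vLLM's default port 8000 (SGLang defaults to 30000):

# pip install openai
from openai import OpenAI

# Neither server requires authentication by default; any placeholder key works.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

resp = client.chat.completions.create(
    model="Alibaba-Apsara/DASD-4B-Thinking",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "How many primes are there below 100?"},
    ],
    temperature=1.0,  # suggested sampling settings (see Best Practices)
    top_p=1.0,
)
print(resp.choices[0].message.content)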

💡 Best Practices

To achieve optimal performance, we suggest sampling with Temperature=1.0 and TopP=1.0.

📜 License

The model weights are licensed under the Apache 2.0 License.

⚠️ Limitation

While DASD-4B-Thinking demonstrates remarkable performance across mathematical, scientific, and coding benchmarks, it currently lacks tool integration and function-calling capabilities. Operating strictly in the text space, the model cannot interact with external interfaces such as code executors or APIs, which constrains its utility in agent-based workflows. Future iterations aim to bridge this gap by integrating capabilities such as knowledge retrieval and tool invocation to support more complex, interactive reasoning tasks.

📚 Citation

DASD-Thinking is developed by Alibaba Cloud, as part of our mission to advance open, efficient, and trustworthy reasoning systems. If you find this work useful in your research or applications, please cite our technical report.

@article{yan2026dasd,
  title={Distribution-Aligned Sequence Distillation for Superior Long-CoT Reasoning},
  author={Yan, Shaotian and Liu, Kaiyuan and Shen, Chen and Wang, Bing and Fan, Sinan and Zhang, Jun and Wu, Yue and Wang, Zheng and Ye, Jieping},
  journal={arXiv preprint arXiv:2601.09088},
  year={2026},
  url={https://arxiv.org/abs/2601.09088}
}

@article{liu2025where,
  title={Where Did This Sentence Come From? Tracing Provenance in LLM Reasoning Distillation},
  author={Liu, Kaiyuan and Yan, Shaotian and Miao, Rui and Wang, Bing and Shen, Chen and Zhang, Jun and Ye, Jieping},
  journal={arXiv preprint arXiv:2512.20908},
  year={2025}
}

We welcome collaboration, feedback, and community contributions to push the boundaries of what small models can reason about, transparently and responsibly.
