Hybrid Policy Distillation for LLMs

This repository contains the weights for the model described in the paper Hybrid Policy Distillation for LLMs.

Hybrid Policy Distillation (HPD) is a framework for compressing large language models (LLMs) that reformulates knowledge distillation (KD) as a reweighted log-likelihood objective at the token level. It combines the complementary strengths of the forward and reverse KL divergences to balance mode coverage against mode seeking, and shows improved computational efficiency and final performance across diverse model families and scales.
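To make the idea concrete, the hybrid objective can be sketched as a per-token convex combination of forward KL (mode-covering) and reverse KL (mode-seeking). This is an illustrative sketch under an assumed fixed interpolation weight `alpha`; the paper's actual reweighted log-likelihood formulation may differ, and the function names here are hypothetical:

```python
import math

def kl(p, q):
    # KL(p || q) for a discrete token distribution, given as probability lists.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def hybrid_kd_loss(teacher, student, alpha=0.5):
    # Hypothetical hybrid objective: for each token position, mix
    #   forward KL  KL(teacher || student)  -> encourages mode coverage
    #   reverse KL  KL(student || teacher)  -> encourages mode seeking
    # `alpha` is an assumed interpolation weight, not taken from the paper.
    total = 0.0
    for p_t, q_s in zip(teacher, student):
        forward = kl(p_t, q_s)
        reverse = kl(q_s, p_t)
        total += alpha * forward + (1.0 - alpha) * reverse
    return total / len(teacher)
```

Setting `alpha=1.0` recovers pure forward-KL distillation and `alpha=0.0` pure reverse-KL distillation; intermediate values trade off the two behaviors.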

Resources

Citation

If you find this work useful in your research, please cite:

@article{hong2024hybrid,
  title={Hybrid Policy Distillation for LLMs},
  author={Hong, Zhiwei and others},
  journal={arXiv preprint arXiv:2604.20244},
  year={2024}
}
Model details

Model: wh-zhu/Qwen2.5-7B-PSFT-RL-DAPO-90
Format: Safetensors
Model size: 8B params
Tensor type: BF16