PSFT Collection: PSFT+RL models (10 items)
This repository contains the weights for the model described in the paper Hybrid Policy Distillation for LLMs.
Hybrid Policy Distillation (HPD) is a framework for compressing large language models (LLMs) that reformulates knowledge distillation (KD) as a token-level reweighted log-likelihood objective. It combines the complementary strengths of the forward KL divergence (mode-covering) and the reverse KL divergence (mode-seeking), and improves both computational efficiency and final performance across diverse model families and scales.
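For intuition, here is a minimal sketch of what a token-level hybrid of forward and reverse KL could look like in PyTorch. This is illustrative only: the function name `hybrid_kd_loss`, the mixing weight `alpha`, and the masked averaging are assumptions for the sketch, not the paper's actual reweighting scheme.

```python
import torch.nn.functional as F

def hybrid_kd_loss(student_logits, teacher_logits, mask, alpha=0.5):
    # Sketch of a per-token hybrid of forward and reverse KL.
    # student_logits, teacher_logits: (batch, seq_len, vocab_size)
    # mask: (batch, seq_len) float, 1.0 for tokens that count toward the loss
    # alpha: hypothetical mixing weight between the two KL terms
    s_logp = F.log_softmax(student_logits, dim=-1)
    t_logp = F.log_softmax(teacher_logits, dim=-1)

    # Forward KL, KL(teacher || student): mode-covering, pushes the
    # student to spread mass over everything the teacher supports.
    fwd = (t_logp.exp() * (t_logp - s_logp)).sum(dim=-1)

    # Reverse KL, KL(student || teacher): mode-seeking, pushes the
    # student to concentrate on the teacher's high-probability modes.
    rev = (s_logp.exp() * (s_logp - t_logp)).sum(dim=-1)

    # Per-token interpolation of the two divergences, averaged over
    # the unmasked tokens.
    per_token = alpha * fwd + (1.0 - alpha) * rev
    return (per_token * mask).sum() / mask.sum().clamp(min=1.0)
```

Whether the two terms are computed exactly over the vocabulary (as here) or estimated from sampled student outputs is a design choice; the sketch takes the exact route for clarity.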
If you find this work useful in your research, please cite:
@article{hong2024hybrid,
  title={Hybrid Policy Distillation for LLMs},
  author={Hong, Zhiwei and others},
  journal={arXiv preprint arXiv:2604.20244},
  year={2024}
}