[Request] Great work! Do you have plans to also create GLM-5.1-AWQ?
GLM-5.1 has been released: https://huggingface.co/zai-org/GLM-5.1. Are you planning on also creating an AWQ version of this?
Downloading it 🥹
Hey, sorry for the naive question, but what calibration data did you use? The default data is pileval, as shown in the AutoAWQ library. Given that this model has a chat template, did you use any chat data (e.g. smoltalk), or did you just use the default AutoAWQ settings? This info would be immensely helpful.
Default calibration data used in AutoAWQ: https://github.com/casper-hansen/AutoAWQ/blob/88e4c76b20755db275574e6a03c83c84ba3bece5/awq/models/base.py#L150
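For context on the question: AutoAWQ's `model.quantize()` accepts `calib_data` either as a dataset name (the default is `"pileval"`) or as a list of raw text strings, so chat-style data could be flattened into strings first. A minimal sketch of that flattening, assuming a simplified stand-in template (not GLM's actual chat template) and hypothetical sample data:

```python
def format_chat_sample(messages):
    """Flatten a list of {role, content} turns into one calibration string.

    The <|role|> markup here is a simplified stand-in, not GLM's real
    chat template; a real run would use tokenizer.apply_chat_template.
    """
    parts = []
    for turn in messages:
        parts.append(f"<|{turn['role']}|>\n{turn['content']}")
    return "\n".join(parts)


# Hypothetical chat-style samples (smoltalk-like shape, made up here)
conversations = [
    [
        {"role": "user", "content": "What is AWQ?"},
        {"role": "assistant", "content": "Activation-aware Weight Quantization."},
    ],
]

calib_data = [format_chat_sample(c) for c in conversations]
# Then, with the model and tokenizer loaded (not run here):
# model.quantize(tokenizer, quant_config=quant_config, calib_data=calib_data)
```

The actual `quantize` call is omitted since it needs the model weights on a GPU; the point is only that `calib_data` can be a plain list of strings.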
Per the README, this uses data-free quantization, so no calibration dataset is required.
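To illustrate what "data-free" means here: the quantization scales are derived from the weights alone, so no calibration inputs are needed. A minimal sketch of one such scheme (generic symmetric round-to-nearest with a per-group absmax scale; an illustration, not this repo's actual quantization code):

```python
def quantize_group(weights, bits=4):
    """Symmetric round-to-nearest quantization of one weight group.

    The scale comes only from the weights' absolute maximum, so this
    needs no calibration data -- hence "data-free".
    """
    qmax = 2 ** (bits - 1) - 1  # e.g. 7 for 4-bit symmetric
    scale = max(abs(w) for w in weights) / qmax or 1.0  # avoid div-by-zero
    q = [max(-qmax - 1, min(qmax, round(w / scale))) for w in weights]
    dequant = [v * scale for v in q]  # what the kernel reconstructs
    return q, scale, dequant
```

Calibration-based methods like AWQ instead run sample inputs through the model to choose scales activation-aware; a data-free scheme trades that step away for simplicity.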
Thanks for the clarification 🙏