---
license: mit
base_model:
- zai-org/GLM-4.7
---

# Model Overview

- **Model Architecture:** GLM-4.7
- **Input:** Text
- **Output:** Text
- **Supported Hardware Microarchitecture:** AMD MI350/MI355
- **ROCm:** 7.0
- **Operating System(s):** Linux
- **Inference Engine:** [vLLM](https://docs.vllm.ai/en/latest/)
- **Model Optimizer:** [AMD-Quark](https://quark.docs.amd.com/latest/index.html) (V0.11.1)
- **Weight quantization:** MoE-only, OCP MXFP4, static
- **Activation quantization:** MoE-only, OCP MXFP4, dynamic
- **Calibration Dataset:** [Pile](https://huggingface.co/datasets/mit-han-lab/pile-val-backup)

This model was built from GLM-4.7 by applying [AMD-Quark](https://quark.docs.amd.com/latest/index.html) for MXFP4 quantization.

# Model Quantization

The model was quantized from [zai-org/GLM-4.7](https://huggingface.co/zai-org/GLM-4.7) using [AMD-Quark](https://quark.docs.amd.com/latest/index.html). Weights and activations of the MoE expert layers are quantized to MXFP4; the attention layers, routing gates, LM head, and dense MLP projections are excluded and kept at their original precision.

**Quantization script:**

```
# Select the GPUs used for calibration and set input/output paths
export CUDA_VISIBLE_DEVICES=0,1,2,3
export MODEL_DIR=zai-org/GLM-4.7
export output_dir=amd/GLM-4.7-MXFP4

# Exclude attention, routing gates, the LM head, and the dense MLP
# projections, so that only the MoE expert layers are quantized
exclude_layers="*self_attn* *mlp.gate lm_head *mlp.gate_proj *mlp.up_proj *mlp.down_proj"

python3 quantize_quark.py --model_dir $MODEL_DIR \
                          --quant_scheme mxfp4 \
                          --num_calib_data 128 \
                          --exclude_layers $exclude_layers \
                          --model_export hf_format \
                          --output_dir $output_dir \
                          --multi_gpu
```

# Deployment

## Use with vLLM

This model can be deployed efficiently using the [vLLM](https://docs.vllm.ai/en/latest/) backend.
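A minimal serving sketch using vLLM's OpenAI-compatible server follows; the tensor-parallel size and port are illustrative assumptions, not validated settings:

```
# Launch an OpenAI-compatible server for the quantized checkpoint.
# --tensor-parallel-size 4 is an assumption; match it to your GPU count.
vllm serve amd/GLM-4.7-MXFP4 \
    --tensor-parallel-size 4 \
    --port 8000
```

Once the server is up, it can be queried through the standard chat completions endpoint, for example:

```
curl http://localhost:8000/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{"model": "amd/GLM-4.7-MXFP4", "messages": [{"role": "user", "content": "What is 2 + 2?"}]}'
```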
# Evaluation

The model was evaluated on the GSM8K benchmark.

## Accuracy

| Benchmark | GLM-4.7 | GLM-4.7-MXFP4 (this model) | Recovery |
|-----------|---------|----------------------------|----------|
| GSM8K (strict-match) | 94.16 | 93.86 | 99.68% |

Recovery is the quantized model's score as a percentage of the baseline: 93.86 / 94.16 ≈ 99.68%.
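As a sketch of how such a score could be reproduced with EleutherAI's lm-evaluation-harness on the vLLM backend; the few-shot count, tensor-parallel size, and batch size below are assumptions and may differ from the configuration used for the numbers above:

```
# Run GSM8K with lm-evaluation-harness on the vLLM backend.
# num_fewshot, tensor_parallel_size, and batch_size are assumptions;
# they may not match the settings behind the reported scores.
lm_eval --model vllm \
    --model_args pretrained=amd/GLM-4.7-MXFP4,tensor_parallel_size=4 \
    --tasks gsm8k \
    --num_fewshot 5 \
    --batch_size auto
```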